By John Ashley

Foreword
Last year, in my Writing in the Humanities course, I wrote a literature review about the ethics of how Artificial Intelligence is currently applied. I decided to share it because I think it is a poorly understood topic that affects everyone in the modern world. I hope you guys like it.
Definitions:
AI:
Artificial intelligence (AI) is a computer system trained to perceive its environment, make decisions, and take action. AI systems rely on learning algorithms, such as machine learning and deep learning, along with large sets of sensor data with well-defined representations of objective truth (MathWorks Inc, 2021, Terminology).
Deep learning:
A subset of machine learning modeled loosely on the human brain’s neural pathways. Deep refers to the multiple layers between the input and output layers. In deep learning, the algorithm automatically learns what features are useful. Standard deep learning techniques include convolutional neural networks (CNNs), recurrent neural networks (such as long short-term memory or LSTM), and deep Q networks (MathWorks Inc, 2021, Terminology).
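As a rough sketch (plain Python with hand-set weights; a real network learns these values from data), a "deep" network is simply several layers of weighted sums, each passed through an activation function:

```python
def layer(inputs, weights, biases):
    """One fully connected layer: each node takes a weighted sum, then ReLU."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b  # weighted sum
        outputs.append(max(0.0, z))  # ReLU activation
    return outputs

# "Deep" refers to the multiple layers between input and output.
x = [0.5, -1.2, 3.0]                                    # input layer
h1 = layer(x,  [[0.2, -0.4, 0.1]] * 4, [0.0] * 4)       # hidden layer 1 (4 nodes)
h2 = layer(h1, [[0.3, 0.3, -0.2, 0.5]] * 2, [0.1] * 2)  # hidden layer 2 (2 nodes)
print(h2)
```

CNNs and LSTMs use more specialized layers than this, but the core idea of stacked learned transformations is the same.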
Algorithm:
The set of rules or instructions that will train the model to do what the user wants it to do (MathWorks Inc, 2021, Terminology).
Model:
The trained program that predicts outputs given a set of inputs (MathWorks Inc, 2021, Terminology).
Information-Selecting Models:
Models that decide what information to show users. Examples include Google Search, YouTube, Facebook, Instagram, Twitter, and TikTok.
Objective or Activation function:
The mathematical operation that determines how the weighted sum of the input is transformed into an output from a node or nodes in a layer of the network (Sharma, 2021).
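For instance, one node of a network might transform its weighted sum through a sigmoid activation. The inputs, weights, and bias below are made-up numbers for illustration only:

```python
import math

def sigmoid(z):
    """Squashes any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# One node: take the weighted sum of the inputs, then apply the activation.
inputs  = [0.5, -1.0, 2.0]
weights = [0.4,  0.3, 0.1]
bias    = -0.2

z = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum (about -0.1)
output = sigmoid(z)  # the activation turns the sum into the node's output
print(output)
```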
Introduction:
In the modern digital world, the influence Artificial Intelligence (AI) has over the pipeline of information is mind-boggling. These algorithms control online searches, what people read, what videos they watch, and which photos of their friends they view. On YouTube alone, over a billion hours of video are watched each day (YouTube, 2021). So what videos, articles, or information are these algorithms showing people? The results vary widely. People can get directed to a political channel, an academic lecture, or to a more bizarre community such as the Flat Earth Society, a group with 213,634 likes on its Facebook page (2021).
The online communities individuals interact with provide new playing fields for their brains that impact how humans focus, retrieve memories, and think about social interactions with others (Firth et al., 2019). The totality of the effects of online environments is still being understood. However, the digital world’s influence on our central nervous systems should not be taken lightly. Controlling the information that people interact with comes with a large amount of responsibility and should be handled with precision and care.
Ethical Evaluation of AI Models:
The public poorly understands the use of these AI models. The complexity of the design process prevents the public from adequately evaluating whether the design decisions are ethical. That complexity can even confuse the engineers working on the models, but this does not excuse the engineers from the responsibility of creating ethical ones. The experts understand that these AI models are tools that can solve or create problems depending on their application. However, like all tools, AI models are limited. Specifically, AI models are limited in their ability to understand the world in general. Therefore, these AI models should not be applied in ways that require a general understanding of the world.
What an AI model demands is more straightforward for narrow, specific applications. An example of a straightforward application of AI is a model trained on CAPTCHA word-recognition data, data created by human users deciphering distorted lettering (Thobhani et al., 2020).
Using AI models to recognize words is a reasonable application because recognizing words does not require the AI model to understand the world broadly. Because this application does not require general understanding, the actual outcomes are more likely to resemble the expected outcomes.
For other applications, such as information-selecting models, the technology requires a broader understanding of the world. This additional requirement creates convoluted and unforeseen results. For example, information-selecting models are unaware that they are showing information to humans with complex psychology and unique upbringings. In a more conventional circumstance, a librarian would understand the adverse outcomes of promoting anorexia content to an adolescent girl. However, due to its limited understanding, the TikTok model does not recognize the harm of promoting such questionable content to teenage girls (Gerrard, 2020).
The psychological and social problems AI models create are too great for our society to turn a blind eye to this field. We need to evaluate the morality of the companies that use AI models, their CEOs, and their engineers. When an application of AI crosses the line, our society needs to put its foot down. Generally, if an AI model requires a complex general understanding of the world, those involved either do not understand or do not care about the ramifications of their technology.
Background of Artificial Intelligence:
In the same way that calculators are great at multiplying large numbers, AI models are good at finding trends within datasets. These patterns allow them to meet their defined goal. This goal can be predicting outputs, identifying objects, or recognizing words (Deep Learning, 2021). To understand what patterns the models are finding, it is essential to understand what information inputs are going into the model and how the model uses these information inputs to meet the defined goal.
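As a toy illustration of "finding a trend within a dataset," the sketch below fits a least-squares slope to five invented data points. This is the textbook formula for simple linear regression, not any company's actual model:

```python
# Five made-up (x, y) observations that roughly follow y = 2x.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least-squares slope: covariance of x and y divided by variance of x.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
print(slope)  # close to 2.0, the underlying "pattern" in the data
```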
To train the YouTube video recommendation model to present users with the videos they are most likely to watch, engineers give the AI a dataset of user information. The user information collected is not explicitly defined. However, the YouTube Creator Academy (2021) states that the algorithm tries its best to “follow the audience by paying attention to what they watch, what they do not watch, how much time they spend watching, likes and dislikes, and not interested feedback.” In each case, the data collected is converted to a numerical form so that the model can process the information. The model takes this numerical user information and compares it with numerical feature data collected from the videos. Then, the model assigns a score to each video according to an objective matching function that tries to predict how likely a user is to watch the video. The higher the score, the more likely a user is to watch a video (Covington et al., 2016).
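A heavily simplified sketch of such a matching function: the user and each video are reduced to vectors of numbers, and a score (here, a dot product) predicts how likely the user is to watch. The vectors below are hypothetical hand-set values; the real system described by Covington et al. (2016) learns them from massive interaction logs:

```python
# Hypothetical numbers for illustration; real feature vectors are learned.
user = [0.9, 0.1, 0.4]  # numerical summary of a user's watch behavior

videos = {
    "cooking_tutorial": [0.8, 0.0, 0.5],
    "news_clip":        [0.1, 0.9, 0.2],
    "music_video":      [0.3, 0.2, 0.9],
}

def score(user_vec, video_vec):
    """Matching function: a higher dot product means a higher predicted
    likelihood that this user watches this video."""
    return sum(u * v for u, v in zip(user_vec, video_vec))

# Rank all candidate videos by score, highest first.
ranked = sorted(videos, key=lambda name: score(user, videos[name]), reverse=True)
print(ranked[0])  # the top recommendation for this user
```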
This objective matching function shows the strengths and weaknesses of using these models. With 2 billion active users (YouTube, 2021), YouTube could not possibly curate recommendations for that many people by hand. By simplifying user information to a numerical representation, YouTube can automate this massive workload with an AI model. However, this simplification comes at the cost of being able to define a goal for the model that directly benefits the user. In that gap, companies often pick defined goals that lead to greater economic benefit. In YouTube's case, the model's main application is to get individuals to watch videos for longer. This application allows YouTube to collect more advertising revenue but does not put the main focus on suggesting videos that provide positive, meaningful insight.
The Implications of Using AI:
While YouTube is open about how its algorithms are optimized to hold user attention for as long as possible, the competition to acquire user attention is not restricted to YouTube. The commodifying of attention, acquiring human attention to sell ads, is the business model for many tech companies in the modern era (Harris, 2021). The social media platforms Facebook, Twitter, TikTok, Instagram, Snapchat, and many others operate on business models that show ads to users as they engage with content.
The problem with commodifying attention is that it skews the reward system for companies. Instead of being rewarded for providing goods and services to their customers, as grocery stores or car dealerships are, these tech companies are monetarily rewarded for showing content that distorts people's view of reality, makes them emotionally upset, or distracts them from what they care about. It is counterproductive that companies can make advertising money by promoting a stolen election, anorexia diets, or Flat Earth theory (Harris, 2021). For their business model, however, the content does not matter as long as it acquires more screen time from the user. The companies become disconnected from their users because their goal structure rewards them with more money only when they compete for more attention. In this model, advertisers have replaced individuals as the company's customers, and the result is less care for the users.
This business model becomes particularly nefarious when practiced on the youngest generation. According to a study done by Common Sense Media (2016), 50% of teenagers "feel addicted" to mobile devices. The newest generation is growing up unable to distance themselves from their phones because these companies are bending over backwards to control more attention.
The business model does not stop at extracting attention from users. It also promotes unhealthy life choices through its choice of ads. The screen-media marketing of high-calorie, low-nutrient food and beverages influences children's preferences, purchase requests, and consumption habits (Robinson et al., 2017). The result is more obese children who are likely to face health complications in the future. Commodifying attention and selling questionable ads to the impressionable is a morally corrupt practice. We must protect our malleable children from companies racing down their brain stems for a few bucks.
Conclusion:
With the knowledge of what these AI models are designed to do and the limits that arise from using the technology, careful ethical consideration is needed to determine how this technology should be used. The results of using AI models designed to find optimal patterns within datasets must be considered before creating and applying such computationally powerful methods. Factors such as how the technology will affect children who cannot put it down must be thought about intentionally before deploying these algorithms.
Poor and unethical engineering is difficult to undo, especially after a company's employees depend on an algorithm to pay their mortgages. To prevent this, a company's reward structure needs to compensate it for providing beneficial engagement, so that AI models are applied to improve lives. In this way, AI models could be made that seek to provide users with meaningful content that helps them laugh, learn, and feel connected to the people around them. Instead of algorithms that make individuals more politically divided, algorithms could aim to make individuals feel more connected and comfortable in their communities. Users could have more control over the types of content they want to see. AI models could be applied differently depending on what an individual seeks at the moment; websites could offer a work mode specifying that the user is not looking for entertainment right now. The application of technology can extend beyond getting more advertiser revenue. These AI models should aim to make this world a better place for the individuals who live in it, now and in the future.
