The Ethical and Legal Challenges of Artificial Intelligence

By Ludia Yoo

Abstract

This paper examines the flaws in federal and state law exposed by rapidly developing artificial intelligence and argues that concrete steps must be taken to improve AI regulation. From explicit AI-generated pornography to AI-first companies, navigating the AI revolution is especially difficult, and people are increasingly unable to distinguish artificial intelligence from reality. Too often, ‘improvements’ to AI make criminal activity more efficient rather than improving people’s livelihoods. This technology blurs the ethical line between good use and bad use.

Introduction

Artificial intelligence is quickly becoming part of humans’ daily lives. A relatively new technology, AI is governed by a loose patchwork of federal and state laws. Because there is no single federal AI law, many people evade punishment for misusing the technology. The types of misuse include deepfakes, intellectual property theft, AI malware, discrimination against different groups, misinformation, and the replacement of human jobs with AI. This powerful technology can be used to improve people’s lives, but the boundary between what is ethical and what is not becomes unclear. By analyzing research from multiple journals and articles, this paper examines whether artificial intelligence does more harm than good, and how it should be regulated.

Literature Review

In a primary article from Springer Nature, the author thoroughly analyzes the meaning behind artificial intelligence and the flaws in the European Union’s approach to regulating it (Finocchiaro, 2023). The article sets up a possible debate about the term ‘artificial intelligence’ itself; however, despite its thorough examination of the European Union’s AI regulation, it mentions deepfakes only once and does not properly elaborate on AI’s most damaging problems. In a secondary article from Taylor and Francis, the author mentions AI only to tie it to gendered parental roles in children’s lives; unlike Finocchiaro, Murris does not treat artificial intelligence as the main problem and does not point out the flaws of AI at all (Murris, 2023). Dilluvio’s secondary article addresses AI pornography and how it has worsened sexualization and sexism, especially against women and children but also men (Dilluvio, 2024). She never proposes ways to regulate artificial intelligence, but she goes above and beyond in highlighting the objectification problems artificial intelligence created, or rather, fueled. Similarly, a secondary article from WARSE highlights the negatives of AI, but in much broader terms (Bakare et al., 2023). A secondary article from Turkey reports an empirical study in which the authors taught middle school students a curriculum on artificial intelligence, then asked for their opinions on deep ethical questions relating to law; in this qualitative study, some students believed artificial intelligence itself was flawed, others thought the opposite, and still others believed both the humans behind artificial intelligence and the machine were at fault (Demirbilek & Er, 2023).
In contrast, a primary article employed a quasi-experiment on the relationship between academic procrastination and the use of artificial intelligence in schools; Ma and Chen assigned scores to procrastination and measured levels of engagement in scenarios such as “writing term assignments” (Chen & Ma, 2024). However, the experiment did not properly represent the population, as it had limited diversity and a limited number of participants, along with other factors that could drastically change the results.

Limitations and Misuse of Artificial Intelligence

The pervasive nature of artificial intelligence has many negative impacts on society. Despite the many types of technology used for translation, identification, and more, artificial intelligence has limited memory and function (Bakare et al., 2023). Although able to mimic humans, AI is incapable of human emotion; it performs tasks devoid of feeling, which contributes to job displacement (Bakare et al., 2023). Artificial intelligence cannot even comprehend parental care for children (Murris, 2023). For example, artificial intelligence can perform the repetitive task of closing boxes at a factory, but it cannot serve as a person’s therapist. The same capacity for repetitive tasks also lets people attack cybersecurity networks severely and efficiently (Bakare et al., 2023). People may also use artificial intelligence to produce deepfakes, digital fabrications that exist to harm others. From scams and fake news to pornography and monetization, people are less and less able to identify what is real and what is artificial as AI continues to improve.

More specifically regarding deepfakes, people generate lewd and sexual images of others, most notably women, without their consent (Dilluvio, 2024). Unfortunately, children are not free from this deepfake objectification, which fuels pedophilia through pornographic scenes and dolls (Dilluvio, 2024). The AI simply sorts men and women into traditional roles, reducing them to male dominance and sexualized female appearance; this reinforces sexism (Dilluvio, 2024). In terms of privacy and respect, men are very rarely targeted for pornography and are instead used for satire and politics (Dilluvio, 2024).

Even young children understand these negative impacts of artificial intelligence. According to a qualitative study of middle school students who were first taught a curriculum about AI ethics, many students believed the machine, or the person behind it, should take responsibility for the problems AI has caused (Demirbilek & Er, 2023).

Proposed Regulation of Artificial Intelligence

Rather than providing a sweeping generalization of AI in regulation, with categories that simply rank AI by risk as the European Union does, the United States should impose a federal law that encompasses all, or at least most, criminal misuse of artificial intelligence, with consequences adjusted according to the circumstances (Finocchiaro, 2023). Complex problems cannot have simple solutions; solutions should adapt to those problems. More specifically, wrongful use of AI should first be defined with listed examples, followed by the prison terms or fines a person must pay in compensation. The laws should either be flexible enough to bend to particular scenarios or have subcategories, such as one for pornography, another for unfair monetization, and so on. Monetization using AI should be banned because artificial intelligence uses other artists’ work without permission, a violation of intellectual property.

Counterargument

On the other hand, artificial intelligence does have some positives. A study of the relationship between academic procrastination and AI found that artificial intelligence applications actually enhanced students’ engagement (Chen & Ma, 2024). Organizing engagement into categories (affective, cognitive, behavioral), Ma and Chen reported lower levels of procrastination and higher levels of motivation in the student group using AI tools (Chen & Ma, 2024). Students using AI were less prone to delaying homework and skipping class overall. The study suggests that by using artificial intelligence tools, students became more responsible learners. However, AI cannot replace human teachers, as emotional connections between student and teacher are necessary in the learning process (Bakare et al., 2023).

Discussion

Further research into the regulation of AI can compare and contrast current AI laws from different countries and different American states. One way to compare these laws is to identify their loopholes; the more loopholes a law has, the weaker it is at regulating artificial intelligence. A more hands-on approach would be to experiment with AI directly: testing prompts to measure how much bias an algorithm holds and how prone it is to error (especially the hallucinations chatbots experience), examining companies’ questionable collection of personal data, and probing the weak filters users can easily bypass by typing certain prompts. People are also becoming addicted to ChatGPT because the chatbot speaks like a human and flatters users’ egos when its only job is to help them. Researchers could therefore study the negative psychological effects of AI on humans as they use and begin to trust the algorithm. The type of usage, from minimal to obsessive, can range from a student asking AI for the answers to his math homework to a woman struggling with her mental health who uses artificial intelligence as a therapist. These usage patterns can also be connected to economic and social issues; for example, one research question could be: how and why are people facing inflation, the normalized sexualization of minors, and social insecurity inclined to turn to AI tools for solutions to all their problems, no matter how unreliable and apathetic the chatbot’s responses are?

Conclusion

Overall, artificial intelligence innovations give rise to negative impacts on society. The loosely regulated AI world has sexually violated women’s and children’s privacy, misinformed people through shifty scams and fake news, caused digital security breaches, and taken away people’s sources of income through pornographic deepfakes, AI malware, and job displacement. Artificial intelligence did not create these problems but rather highlighted them. The oversexualization of minors, security breaches, and more were done by humans; AI merely made committing these harms more efficient. This new technology has become so powerful that the term ‘deepfake’ was invented because of it.


Bibliography

Bakare, O., Nzenwata, U., & Ukandu, O. K. (2023, July 7). Artificial Intelligence: Positive or Negative Innovation. WARSE. https://www.warse.org/IJETER/static/pdf/file/ijeter031172023.pdf  

Chen, M. & Ma, Y. (2024, December 18). AI-empowered applications effects on EFL learners’ engagement in the classroom and academic procrastination. BMC Psychology. https://pmc.ncbi.nlm.nih.gov/articles/PMC11656581/pdf/40359_2024_Article_2248.pdf

Demirbilek, M. & Er, E. E. (2023, October). AI Ethics: An Empirical Study on the Views of Middle School Students. International Conference on Studies in Education and Social Sciences. https://files.eric.ed.gov/fulltext/ED652351.pdf

Dilluvio, V. P. (2024, Winter). Deepfake: The crossover between pornography and Artificial Intelligence (AI). Revista de Filosofie Aplicată. https://www.filosofieaplicata.ro/index.php/filap/article/view/143/103

Finocchiaro, G. (2023, April 3). The regulation of Artificial Intelligence - AI & Society. SpringerLink. https://link.springer.com/article/10.1007/s00146-023-01650-z 

Murris, K. (2023, August 23). ChatGPT, care and the ethical dilemmas entangled with teaching and research in the early years. Taylor & Francis. https://www.tandfonline.com/doi/full/10.1080/1350293X.2023.2250218
