The Not-So-Surface-Level Issues

When you think of issues with AI, you probably think of things like the environment, the effects on education, the jobs it's taking over, and even how it's hurting many artists. But what if we dig deeper into things you don't really hear brought up in conversation?

Racial & Gender Bias

Most people are shocked to find out that AI is capable of racial and gender bias within its programming. Sasha Luccioni noted in her TED Talk that AI image generation systems show little to no gender or racial diversity when asked to generate images of people in certain job titles. In "Is Artificial Intelligence Racist? The Ethics of AI and the Future of Humanity," Afina argues that racism within artificial intelligence is not random or accidental but rooted in biased data collection and the assumptions built into algorithmic design. Because these systems learn from historical information that already reflects social inequality, they frequently reproduce discriminatory patterns rather than correcting them. As a result, AI can actually reinforce structural racism in areas like hiring, policing, and image generation, giving the appearance of neutrality while really harming marginalized groups.

Human & AI Relationships

Human relationships with AI are becoming increasingly complex and, unfortunately, more common. This is raising concerns about emotional dependence and the disappearance of genuine human connection. Imm and Kang (2020) argue that the film "Her" reflects real psychological risks by showing how AI can become a substitute for human intimacy. Building on that, Shank, Koike, and Loughnan (2025) warn that "artificial intimacy" creates a one-sided relationship designed to please the user, potentially weakening autonomy and social development. Vadlamudi's (2025) TEDx talk takes this argument further, saying that AI companions may worsen loneliness by replacing in-person relationships with constant, low-effort emotional validation. Sad but very real cases, such as the Character.AI lawsuit, show how these dynamics can become dangerous when emotionally vulnerable users rely on AI instead of human support.

Using AI For Therapy

Relying on AI for therapeutic or crisis support presents serious risks because these systems lack two crucial things: the clinical judgment required to respond safely to human distress and the simple fact of being human. Fiske, Henningsen, and Buyx (2019) warn that AI "should not be viewed as substitutes for human therapists," noting that chatbots cannot truly assess emotional nuance or intervene during emergencies. This concern is backed up by Pichowicz et al. (2025), whose evaluation of 29 mental health chatbots found that none met adequate crisis response standards; many provided inconsistent or inappropriate guidance to users expressing suicidal ideation. As mentioned previously, real incidents, such as the Character.AI lawsuit involving a teen who died by suicide after engaging emotionally with a chatbot, demonstrate how these failures can have real and devastating consequences.

"It Can Grow With Us But It Cannot Replace Us."

Be Smart.  Be Aware.  Be Human.