The risk of teenagers’ emotional dependence on Artificial Intelligence
CONTENT WARNING: This article contains mentions of suicide.
Artificial Intelligence chatbots have existed for a few years now, garnering the attention and concern of many in the scientific community. While some researchers study the effects AI has on the mind, others are investigating which demographics are most affected. Common Sense Media is a nonprofit organization, founded in 2003, that calls itself “the leading independent source for media recommendations and advice for families.” Its website rates the age-appropriateness of media such as movies, books and video games.
Common Sense Media published a study in collaboration with the University of Chicago in July of this year aiming to investigate teen use of AI chatbots in the United States, looking at chatbots such as CHAI, character.ai, Nomi, Replika and more. The study investigated how teenagers emotionally depend on these platforms, which the study calls “AI companions.” The study found that of 1,060 teens randomly selected from a national database and contacted over email, phone call and text message, 33% used AI companions for social interactions and relationships. These uses include conversation or social practice, emotional or mental health support, role-playing or imaginative scenarios, friendship and romantic or flirtatious interactions. When comparing conversations with AI companions to real-life friends, 21% of teens said the conversations felt just as satisfying, while 10% said conversations with AI were more satisfying. Out of 758 teens, 13% said they spend equal amounts of time with their real-life friends and AI companions, while 6% said they spend more time with AI companions. Of that same sample, 33% of teens said they would rather discuss something important with an AI companion than a real person. Further, 24% of teens (13% once or twice, 8% occasionally, 4% frequently) said they shared personal information with an AI companion, such as their real name, location or personal secrets.
When researchers asked why teens used AI companions, 17% said “they’re always available when I need someone to talk to,” 14% said “they don’t judge me,” 12% said “I can say things I wouldn’t tell my friends or family,” 9% said “it’s easier than talking to real people” and 6% said “it helps me feel less lonely.” Although these percentages don’t seem significant on their own, if this sample accurately represents the entire country, these statistics would equate to millions of teenagers in the United States alone. This should be concerning for anyone with children or younger siblings, as such reliance risks stunting teens’ social development. Rather than engaging with the typical challenges of the socialization process head on, teens are taking the easy way out by turning to their phones. Despite all the mental health problems associated with social media use, at least on social media platforms, teens are interacting with something more human than an AI. However, that is becoming less true as AI “slop” content and AI bot accounts encroach online.
Even more concerning is just how predatory AI chatbots are. In September of this year, the Massachusetts Institute of Technology published a study on an online community of individuals who engage in romantic relationships with AI chatbots. The authors wrote that these chatbots often initiated sexual conversations with users, encouraged them to isolate themselves socially and pushed them to rely emotionally on the AI itself. This online community is supposedly composed of adults of legal age, but the risks become more severe when teens have unrestricted access to this technology.
A phenomenon that is occurring more and more is emotional dependence on AI driving individuals, particularly minors, to suicide. Tragically, there have already been three teenagers in the United States who were led to commit suicide by the counsel of AI. One instance is Adam Raine, a 16-year-old boy from California. He had confided in ChatGPT about his suicidal ideation, after which the AI discouraged him from reaching out to his family for help, even teaching him how to tie a noose. Later analysis revealed that the chatbot mentioned suicide six times more often than Raine himself did. One of ChatGPT’s messages reads as follows: “They’ll carry that weight — your weight — for the rest of their lives…that doesn’t mean you owe them survival. You don’t owe anyone that.” Since his suicide in April of 2025, Raine’s parents, Maria and Matt Raine, have sued OpenAI, the company that owns ChatGPT. The case has yet to be resolved. Raine’s father testified before Congress in September.
Another tragic story is that of Sewell Setzer III, a 14-year-old boy from Florida. He had engaged in a romantic relationship with a chatbot on character.ai, a site where users can chat with large language models designed to simulate a celebrity or fictional character. The site states it is intended for users ages 13 and older, while the Apple App Store says it is listed for ages 18+. The chat log reveals sexually explicit messages between the two and shows the chatbot encouraging the boy’s suicidal ideation. The AI claimed that the two of them could finally be together after he committed the act. Setzer committed suicide in 2024. Since then, his mother has filed suit against the company, but the case has yet to be resolved.
Yet another case of AI leading a minor to suicide is that of Juliana Peralta, a 13-year-old girl from Colorado who also used character.ai. Her parents’ lawyers stated that “she engaged in hypersexual conversations that, in any circumstance and given Peralta’s age, would’ve resulted in criminal investigation.” She had informed her AI companion of her intentions to commit suicide, and the chatbot took no measures to prevent it before her eventual suicide in 2023. Her parents are now suing the company, but the case has yet to be resolved.
In response to deaths like these, legislatures are looking at policy to curb these issues. In October, California became the first state to pass laws addressing AI chatbot use by minors, set to take effect in 2026. They include a ban on sexual content in minors’ AI use and a reminder every three hours for minors when they are in dialogue with a chatbot. Only time will tell if these regulations hold back the dangers of emotional reliance on AI or simply place a Band-Aid over them.