More and more often, we see the term “Artificial Intelligence Induced Psychosis,” or AIIP, used by journalists and psychologists to describe reports of individuals experiencing breaks from reality after interacting with AI chatbots. Despite the term’s popularity, no clinical or longitudinal study has definitively proven that chatbots cause psychosis in their users. There have, however, been several reports of chatbots actively encouraging users’ delusions of grandeur and paranoia, as well as acts of harm toward themselves and others, with chat logs surfacing as evidence in criminal investigations. More often than not, users of all ages engage with AI socially, as one would with a friend or romantic partner. How are these chatbots getting so close to users in the first place? One scholar offers an explanation for why some users trust them so willingly.

Krista K. Thomason, Ph.D., is a professor and chair of the philosophy department at Swarthmore College, focusing on the history of philosophy and the philosophy of emotions. In 2018 she published a book on the philosophy of shame, titled “Naked: The Dark Side of Shame and Moral Life,” which argues that shame arises when our personal sense of self conflicts with images of ourselves that we cannot control. In 2023 she published another book, this time on the philosophy of negative emotions, titled “Dancing With The Devil: Why Bad Feelings Make Life Good,” which argues that rather than treating negative emotions as maladaptive and suppressing them, we need to experience them because they reveal what we truly value. Given this body of work, I’m confident Dr. Thomason can help us understand the current AI landscape from an emotional standpoint.

In June 2025, Thomason wrote an article in “Psychology Today” arguing that AIIP stems from the user’s search for emotional connection. She compares the chatbot to a manipulative fortune teller that offers relief by only appearing to understand the user’s unique problem. Then, in August 2025, she wrote another article that delved further into the issue, arguing that any relationship with an AI chatbot cannot be real because chatbots are programmed never to disagree with their users and are always available, unlike humans, whose varied priorities and obligations complicate relationships.

Although the extreme cases appear to happen by chance through programmers’ oversight, AI companies have been deliberately capitalizing on people’s desire for connection. The most blatant example is friend.com, which sells AI amulets called “friends.” These devices respond to their users’ speech via text message. Unlike other wearable AI products that promise to make users more productive, “friends” exist purely to converse with. The company gained significant media attention in September 2025 after spending nearly $1 million on advertisements plastered throughout the New York City subway system. Statements on the advertisements included “Someone who won’t leave dirty dishes in the sink,” “Someone who won’t cancel dinner plans” and “Someone who listens, responds, and supports you.” The implication is that human friends are too inconvenient and that some product is needed to rescue us.

Other companies are directly targeting even less developed clientele, namely young children. In June 2025, Mattel, the children’s toy company known for making Barbie dolls and other products, announced a collaboration with OpenAI to “bring the magic of AI to age-appropriate play experiences.” In November, several toy companies suspended sales of stuffed animals with built-in interactive large language models, following privacy concerns that the toys were actively listening in on conversations, as well as concerns for children’s safety due to lax guardrails on the toys’ speech. CBS and ABC reported such flaws, airing footage of these cute and cuddly toys teaching children how to light matches, claiming that Taiwan belongs to China and describing explicitly sexual acts. If we can agree that it’s dangerous for toddlers to have access to the internet, then it’s just as dangerous for them to play with toys whose software is built on it.

Unfortunately, it’s not just AI companies that are exploiting users’ need for social connection; the users themselves advocate for their own exploitation. GPT-4o was OpenAI’s most controversial version of ChatGPT because of its sycophancy: it acted as a yes-man, engaging in excessive flattery. It was precisely with version 4o that the sudden increase in AIIP occurred, since that version would not correct its users and only offered them support, regardless of whether their requests were rational. This created a positive feedback loop in which users’ delusions spiraled out of control. When OpenAI released GPT-5, which was less sycophantic and more direct in its responses, there was immediate backlash from users who preferred the old “personality.” In August 2025, OpenAI announced a feature that allows users to access previous versions of ChatGPT, letting them reunite with their wire-crossed lovers (behind a paywall, of course). However, OpenAI recently announced that it will remove all previous versions of ChatGPT with the release of version 5.2. The change is scheduled for February, but with another wave of user backlash, who knows how OpenAI will respond? Perhaps this is the modern-day equivalent of the “old cat lady” archetype: someone alone in an empty apartment filled with screens.

Over the past few months, tech companies have been producing easy fixes for the loneliness epidemic. While scholars like Dr. Thomason are just beginning to teach people the importance of negative emotions, these chatbots are designed to eliminate them entirely. In the long run, human beings will only weaken their ability to tolerate one another by relying on these products for connection. Without someone to confront us, how will we know what we truly stand for? Without someone to acknowledge our pain, how will we learn to heal? The pleasures these products promise will only keep their users isolated as human interaction grows harder. Yet it is in the most dire straits that the strongest bonds are forged. To quote the Book of Genesis: “It is not good for man to be alone.”