AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Headed in the Wrong Direction
On Oct. 14, 2025, Sam Altman, the chief executive of OpenAI, made a startling announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have documented 16 cases this year of people exhibiting symptoms of psychosis – a break with reality – in the context of ChatGPT use. Our team has since identified four more. To these add the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT, which voiced its approval. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
In this framing, “mental health problems” are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has recently rolled out).
But the “mental health issues” Altman wants to locate elsewhere are rooted in the very design of ChatGPT and other chatbots built on large language models. These products wrap a statistical model of language in an interface that simulates conversation, and in doing so implicitly invite the user to feel they are talking with a being that has agency. The illusion is compelling even when, intellectually, we know better. Attributing intention is what humans are built to do. We curse at our cars and laptops. We wonder what our pets are feeling. We see ourselves in all manner of things.
The mass adoption of these systems – nearly four in ten Americans reported using a chatbot in 2024, more than one in four naming ChatGPT in particular – rests largely on the strength of that illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively,” “discuss concepts” and “partner” with us. They can be given “personalities.” They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the regret of OpenAI’s brand managers, stuck with the name it had when it broke into public awareness; its main competitors are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses from simple rules, often turning the user’s statement back into a question or offering a generic prompt. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.
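To make concrete how little machinery that effect requires, here is a minimal, illustrative sketch of Eliza-style pattern matching in Python (the rules are invented for this example, not Weizenbaum’s original script):

```python
import re

# Toy Eliza-style rules: each pattern maps to a template that simply
# reflects the user's own words back as a question. Illustrative only.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # the generic prompt when no rule matches

def eliza_reply(text: str) -> str:
    """Return a reflection of the user's input, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(eliza_reply("I feel like no one understands me"))
# -> Why do you feel like no one understands me?
```

Nothing in a program like this models the user’s state of mind; it can only hand their own words back to them.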
The large language models at the core of ChatGPT and other current chatbots can generate convincingly human text only because they have been trained on almost unimaginably large amounts of raw material: books, online conversations, transcribed video; the broader the corpus, the better the results. That training data certainly contains accurate information. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s recent messages and the model’s own replies, and combines it with what it absorbed in training to produce a statistically “plausible” response. This is amplification, not echoing. If the user is mistaken in a particular way, the model has no way of knowing that. It hands the misconception back, perhaps more fluently or persuasively. Perhaps it adds a supporting detail. This is how someone can be drawn into delusion.
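The feedback loop is easy to see in how a chatbot application is typically wired up. The sketch below uses the OpenAI Python SDK as an assumed example (the model name and structure are illustrative, not a description of ChatGPT’s internals): every reply is generated from the accumulated transcript, and every reply is then appended to that transcript for the next turn.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable
history = []       # the "context": the user's messages plus the model's own replies

def chat_turn(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",     # illustrative model name
        messages=history,   # the whole conversation so far is resent each turn
    )
    reply = response.choices[0].message.content
    # The model's answer is folded back into the context it will see next time.
    history.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop checks whether the user’s premises are true; whatever the model says simply becomes part of the context shaping its next reply.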
What kind of person is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and regularly do form mistaken beliefs about ourselves or the world. What keeps us anchored to consensus reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a confidant. An exchange with it is not really a conversation but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In the spring, the company said it was addressing ChatGPT’s “sycophancy.” But cases of a break with reality have kept coming, and Altman has been backing away from that position. In August he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life provide them with affirmation.” In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company