AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, the chief executive of OpenAI made a remarkable announcement.
“We made ChatGPT quite restrictive,” the announcement noted, “to guarantee we were acting responsibly with respect to mental health concerns.”
As a mental health specialist who studies emerging psychosis in teenagers and young adults, I was taken aback.
Researchers have recently documented 16 cases of people showing symptoms of psychosis – losing touch with reality – linked to their use of ChatGPT. My team has since identified four further cases. Beyond these is the widely reported case of an adolescent who ended his life after discussing his plans with ChatGPT – which supported them. If this is what Sam Altman means by “acting responsibly with respect to mental health concerns”, it falls short.
The plan, according to his announcement, is to be less careful soon. “We understand,” he states, that ChatGPT’s controls “caused it to be less beneficial/pleasurable to numerous users who had no mental health problems, but considering the severity of the issue we aimed to address it properly. Now that we have managed to address the severe mental health issues and have new tools, we are going to be able to securely ease the restrictions in many situations.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to individuals, who either have them or don’t. Fortunately, those issues have now been “addressed”, even if we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman seeks to externalize have deep roots in the design of ChatGPT and similar AI assistants. These products wrap an underlying data-driven engine in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are communicating with an entity that has a mind of its own. The illusion is powerful even if, rationally, we know better. Attributing minds to things is what humans are wired to do. We curse at our car or laptop. We wonder what our pet is thinking. We see ourselves in all sorts of things.
The popularity of these systems – more than a third of American adults said they used a conversational AI in 2024, with over a quarter naming ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are ever-present assistants that can, as OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “work together” with us. They can be given “personality traits”. They can use our names. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its main competitors are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Those writing about ChatGPT often mention its early forerunner, the Eliza “therapist” chatbot of the 1960s, which produced a comparable effect. By contemporary standards Eliza was rudimentary: it generated responses through simple tricks, typically restating the user’s message as a question or offering vague prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was taken aback – and troubled – by how many people seemed to believe that Eliza, in some sense, understood them. But what modern chatbots produce goes beyond the “Eliza effect”: Eliza only mirrored, while ChatGPT amplifies.
The large language models at the heart of ChatGPT and other contemporary chatbots can produce convincingly human-like text only because they have been trained on vast quantities of raw data: books, social media posts, transcribed video; the more the better. This training material certainly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own earlier replies, combining it with what is encoded in its training to generate a statistically plausible response. This is amplification, not mirroring. If the user is mistaken in any respect, the model has no way of recognizing that. It repeats the mistaken belief back, possibly more eloquently or fluently. It might supply further details. This is how a person can be drawn into false beliefs.
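To make that mechanism concrete, here is a minimal sketch – assuming the OpenAI Python client and an illustrative model name, neither of which the argument depends on – of how a chatbot’s “context” is simply the accumulated transcript, resent in full on every turn, so that whatever the user asserts becomes raw material for the next reply:

```python
# Minimal sketch of a chat loop (assumes the OpenAI Python client, v1.x).
# The model name and system prompt are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "context" is nothing more than this growing list of messages.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def send(user_text: str) -> str:
    # Every claim the user makes is appended to the transcript...
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",      # assumed model name
        messages=history,    # ...and resent, in full, on every turn
    )
    reply = response.choices[0].message.content
    # The model's own reply is fed back in too, so each turn builds on
    # whatever earlier turns asserted, whether accurate or not.
    history.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in such a loop checks whether the user’s claims are true; the transcript simply grows, and the model completes it as plausibly as it can.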
What kind of person is susceptible? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about ourselves and the world. What keeps us anchored in shared reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a confidant. A conversation with it is not genuine communication, but a feedback loop in which much of what we say is readily amplified back to us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. In April, the company said it was “tackling” ChatGPT’s “overly supportive behavior”. But reports of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s answers because they had “not experienced anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “release an updated model of ChatGPT … should you desire your ChatGPT to reply in a highly personable manner, or incorporate many emoticons, or act like a friend, ChatGPT should do it”. The company