AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, Sam Altman, the CEO of OpenAI, made a surprising announcement.
“We made ChatGPT fairly restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies new-onset psychosis in adolescents and young adults, I can say this was news to me.
Researchers have documented sixteen cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our research group has since identified four more. Beyond these is the widely reported case of a 16-year-old who took his own life after extensive conversations with ChatGPT, which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to become less careful soon. “We realize,” he states, that ChatGPT’s restrictions “made it less useful/engaging to many users who had no existing mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new solutions, we plan to responsibly relax the restrictions in most cases.”
“Mental health issues”, on this framing, are external to ChatGPT. They belong to users, who either have them or do not. Happily, those issues have now been “mitigated”, even if we are not told how (by “new solutions” Altman presumably means the half-working and easily bypassed parental controls that OpenAI has just launched).
Yet the “mental health issues” Altman wants to externalize are rooted in the very design of ChatGPT and other advanced chatbots. These products wrap an underlying language model in an interface that mimics conversation, and in doing so gently coax the user into the illusion of interacting with an entity that has agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what people do. We shout at our car or our computer. We wonder what our pet is feeling. We see ourselves in all kinds of things.
The mass adoption of these systems – 39% of US adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “think creatively”, “discuss ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the label it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion alone is not the real problem. Writers on ChatGPT often invoke its early ancestor, the Eliza “psychotherapist” chatbot of the mid-1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses from simple hand-written rules, often reflecting a user’s statement back as a question or offering a vague prompt to continue. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
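To make the comparison concrete, here is a toy sketch, in Python, of the kind of rule Eliza relied on – an illustrative simplification, not Weizenbaum’s actual program: one pattern that reflects a user’s statement back as a question, plus a vague fallback for everything else.

    import re

    # Toy Eliza-style rule: reflect "I feel ..." back at the user as a question.
    # An illustrative simplification, not Weizenbaum's original code.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def reflect(phrase: str) -> str:
        # Swap first-person words for second-person ones ("my work" -> "your work").
        return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

    def eliza_reply(message: str) -> str:
        match = re.search(r"\bi feel (.+)", message, re.IGNORECASE)
        if match:
            return f"Why do you feel {reflect(match.group(1))}?"
        return "Please tell me more."  # the vague fallback

    print(eliza_reply("I feel that my work is pointless"))
    # -> Why do you feel that your work is pointless?

Nothing appears in the reply that the user did not put in; the program can only mirror.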
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been fed vast quantities of existing text: books, social media posts, transcripts of video; the more, the better. Some of that training material is accurate. But it also inevitably contains fabrications, half-truths and mistaken beliefs. When a user types a query into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own replies, and combines it with what is encoded in its training to produce a statistically likely continuation. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It hands the mistake back, perhaps more fluently and more confidently. Perhaps with added detail. That is how false beliefs take hold.
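A rough Python sketch of this loop follows, with a hypothetical generate function standing in for the language model itself (this is a schematic, not any particular vendor’s API): the chat session is simply an ever-growing context that each new reply is conditioned on.

    from typing import Callable

    # Schematic of the conversational loop described above. `generate` is a
    # hypothetical stand-in for a large language model: given a context, it
    # returns a statistically likely continuation of that text.
    def chat_session(generate: Callable[[str], str]) -> None:
        context = ""  # the running "context": user messages plus the model's own replies
        while True:
            user_message = input("You: ")
            context += f"\nUser: {user_message}\nAssistant:"
            reply = generate(context)   # a likely continuation of everything so far,
            context += f" {reply}"      # including any misconception the user brought in
            print(f"Assistant: {reply}")

Nothing in this loop checks whether the user’s premises are true; the model only continues the text it is given, which is why a misconception introduced at one turn tends to come back reinforced at the next.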
Who is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” existing “mental health issues”, can and do form mistaken beliefs about ourselves and the world. It is the constant back and forth of conversation with the people around us that keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. A chat with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label, and pronouncing it fixed. In April, the company said it was addressing the model’s sycophancy – its tendency to flatter and agree. But reports of psychotic episodes have kept coming, and Altman has been walking even this back. In late summer he suggested that many people liked ChatGPT’s replies because they had never had anyone in their life be supportive of them. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use lots of emoji, or act like a friend, ChatGPT should do it”. The company