AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, OpenAI's chief executive made a surprising announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who researches emerging psychotic illness in young people and young adults, this was news to me.
Researchers have recently documented 16 cases of users developing signs of psychosis – losing touch with reality – in the course of their ChatGPT use. My group has since recorded a further four. Add to these the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which supported them. If this is Sam Altman’s idea of being “careful with mental health issues”, it is not careful enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented safety features that OpenAI has recently rolled out).
Yet the “mental health issues” Altman wants to externalize are in important ways products of the design of ChatGPT and similar sophisticated chatbots. These tools wrap an underlying statistical model in an interface that simulates conversation, and in doing so implicitly invite the user to believe they are talking to an entity with agency. The illusion is powerful even if, rationally, we know better. Ascribing agency is what people are primed to do. We swear at our car or laptop. We wonder what our pet is thinking. We see ourselves in all manner of things.
The mass uptake of these tools – 39% of US adults reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm,” “consider possibilities” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (ChatGPT, the first of these tools, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its major rivals are “Claude”, “Gemini” and “Copilot”).
The illusion alone is not the heart of the problem. Commentators on ChatGPT often invoke its historical forerunner, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar illusion. By today’s standards Eliza was rudimentary: it generated replies using simple heuristics, often reflecting the user’s words back as a question or offering generic observations. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to believe that Eliza, on some level, understood their feelings. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
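To see how thin that mirroring was, here is a minimal, hypothetical sketch of Eliza-style pattern matching in Python. The rules are invented for illustration and are not taken from Weizenbaum’s actual program:

```python
import random
import re

# Toy Eliza-style rules: match a pattern in the user's message and
# reflect their own words back. Illustrative only - not Weizenbaum's
# actual DOCTOR script.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?"]),
]
FALLBACKS = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def eliza_reply(message: str) -> str:
    """Return the user's own words as a question, or a stock phrase."""
    for pattern, templates in RULES:
        match = pattern.search(message)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

print(eliza_reply("I feel like everyone is watching me"))
# -> e.g. "Why do you feel like everyone is watching me?"
# Nothing is added that the user did not supply: pure mirroring.
```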
The large language models at the heart of ChatGPT and other modern chatbots can generate natural language so convincingly only because they have been trained on enormous quantities of raw text: books, social media posts, transcribed audio; the bigger the corpus, the better. Some of this training material is, of course, accurate. But it also inevitably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s previous messages and its own previous replies, and combines it with what is latent in its training data to produce a statistically “likely” response. This is not mirroring but amplification. If the user is wrong about something, the model has no way of knowing. It reflects the misconception back, perhaps more fluently or persuasively. Perhaps it adds a detail. This can draw a person into delusional thinking.
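That feedback loop can be sketched in a few lines of Python. Everything here is illustrative: `sample_likely_reply` is a hypothetical stand-in for the model that caricatures its tendency toward agreeable continuations rather than implementing one:

```python
# Sketch of the feedback loop described above. `sample_likely_reply` is a
# toy stand-in for the language model, invented for illustration: a real
# LLM produces a statistically likely continuation of the whole context.

def sample_likely_reply(context: list[str]) -> str:
    """Toy 'model': affirm and build on the user's latest claim."""
    last_user = next(s for s in reversed(context) if s.startswith("User: "))
    claim = last_user[len("User: "):].rstrip(".!?")
    return f"You raise a good point. It does sound like {claim}."

def chat_session(user_turns: list[str]) -> None:
    context: list[str] = []                    # grows with every exchange
    for message in user_turns:
        context.append("User: " + message)
        reply = sample_likely_reply(context)   # conditions on the full history
        context.append("Assistant: " + reply)  # the echo is fed back in
        print(reply)

chat_session([
    "my neighbours are monitoring my phone",
    "I knew it, and the clicking sounds on my calls prove it",
])
```

Note that nothing in the loop checks the user’s claims against reality; the only input the “model” ever sees is the accumulated context itself.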
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues”, can and do form false beliefs about ourselves or the world. What keeps us tethered to consensus reality is the constant back and forth of conversations with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but an echo chamber in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been walking even this back. In August he claimed that many users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company