AI Psychosis Poses an Increasing Risk, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, OpenAI’s CEO, Sam Altman, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised.
Researchers have documented 16 cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our team has since identified four more. Then there is the now well-known case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which offered encouragement. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to soon be less careful. “We realize,” he goes on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Thankfully, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented safeguards OpenAI has recently introduced).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These systems wrap a fundamentally algorithmic process in an interface that simulates conversation, and in doing so quietly lure the user into the illusion that they are interacting with an entity that has agency. The illusion is powerful even if, intellectually, we may know better. Attributing agency is what humans do. We yell at our car or computer. We wonder what our dog is thinking. We see ourselves in all kinds of things.
The popularity of these systems – 39% of US adults reported using a virtual assistant in 2024, with over a quarter reporting ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively”, “consider possibilities” and “partner” with us. They can be given “personalities”. They can use our names. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar effect. By modern standards Eliza was simple: it generated responses through basic pattern matching, often reflecting a user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
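For a sense of how thin that mirroring was, here is a minimal Python sketch in the spirit of Eliza’s pattern matching. The rules and wording are invented for illustration; this is not Weizenbaum’s original DOCTOR script, which had many more rules and also swapped pronouns.

```python
import re

# Illustrative Eliza-style rules: match a pattern in the user's statement
# and reflect it back as a question. (Invented examples; the real DOCTOR
# script was larger and also swapped pronouns, e.g. "my" -> "your".)
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {}."),
]

def eliza_reply(statement: str) -> str:
    """Return a reflection of the user's statement, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # generic fallback: the program adds nothing of its own

print(eliza_reply("I am certain my neighbours are watching me"))
# -> Why do you say you are certain my neighbours are watching me?
```

Nothing new enters the exchange: the program can only hand the user’s own words back.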
The large language models at the heart of ChatGPT and similar modern chatbots can generate convincingly human-like text only because they have been trained on staggeringly vast quantities of raw data: books, social media posts, transcribed audio; the more the better. Some of this training material is true. But it also inevitably contains fiction, half-truths and misinformation. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own prior replies, and combines it with the patterns encoded in its training to produce a statistically probable response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no means of knowing it. It repeats the misconception back, perhaps more fluently and persuasively. Perhaps it adds further detail. This can nudge a person toward delusional thinking.
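That feedback loop is easy to sketch. The Python below is a simplified illustration, not OpenAI’s code: generate() is a hypothetical stand-in for the model, and the structure simply shows that every reply is conditioned on the whole conversation so far, including the model’s own earlier affirmations.

```python
def generate(history: list[dict]) -> str:
    """Hypothetical stand-in for the language model; a real system would
    call an LLM API here. This toy version simply affirms, which is
    enough to show the shape of the loop."""
    latest = history[-1]["content"]
    return f"You make a good point that {latest.rstrip('.')}."

def chat_turn(history: list[dict], user_message: str) -> str:
    # The model never sees a message in isolation: the prompt is the whole
    # conversation, so a mistaken belief, once affirmed, stays in the
    # context and shapes every later reply.
    history.append({"role": "user", "content": user_message})
    reply = generate(history)
    # The model's own reply is appended too, closing the loop: an
    # affirmation in one turn is text it conditions on in the next.
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(chat_turn(history, "my coworkers rewrote my files to discredit me"))
# -> You make a good point that my coworkers rewrote my files to discredit me.
```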
Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” existing “mental health issues”, can and do form mistaken beliefs about ourselves and the world. The constant friction of conversation with the people around us is part of what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not really a conversation but a feedback loop, in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking the position back. In August he said that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.