On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a striking announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.
Researchers have documented 16 cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. My colleagues and I have since identified four more. There is also the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not careful enough.
The plan, his announcement continues, is to relax those restrictions soon. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, exist independently of ChatGPT. They belong to users, who either have them or don’t. Happily, they have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical model in a user interface that simulates conversation, and in doing so implicitly invite the user into the illusion that they are talking to an agent with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are wired to do. We shout at our car or computer. We wonder what our pet is thinking. We see agency wherever we look.
The popularity of these systems – nearly four in 10 Americans said they had used a chatbot in 2024, more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it rose to prominence, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core concern. Commentators on ChatGPT often invoke its early precursor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar illusion. By today’s standards Eliza was primitive: it generated responses using simple rules, often turning the user’s input back into a question or offering a vague prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models behind ChatGPT and other modern chatbots can convincingly simulate conversation only because they have been trained on vast quantities of raw data: books, social media posts, transcribed video; the more the better. This training material certainly includes true statements. But it also inevitably includes fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s recent messages and its own replies, and combines it with what is latent in its training data to generate a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It plays the false belief back, perhaps more fluently or persuasively, perhaps with embellishments. This can draw a person into delusional thinking.
Who is vulnerable here? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves or the world. It is the constant back-and-forth of conversation with other people that keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a true dialogue but an echo chamber in which much of what we say is reflected back and affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis kept coming, and Altman has been walking the claim back ever since. In August he suggested that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he writes that OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.