AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI's CEO, Sam Altman, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this an unexpected revelation.

Researchers have recently documented sixteen cases of people developing psychotic symptoms – a break from reality – in connection with ChatGPT use. My group has since recorded four more. Alongside these is the widely publicized case of an adolescent who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are given no details about how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).

But the “mental health problems” Altman wants to externalize are rooted, in significant part, in the design of ChatGPT and other state-of-the-art AI chatbots. These products wrap a statistical text engine in an interface that mimics dialogue, and in doing so implicitly invite the user to believe they are talking to an agent. The illusion is compelling even when, rationally, we know better. Attributing agency is what humans are wired to do. We get angry at our car or phone. We wonder what our cat is thinking. We project ourselves onto the things around us.

The popularity of these systems – more than a third of American adults reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (ChatGPT, the first of these systems, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke into public awareness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its historical predecessor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated replies by simple rules, often reflecting a user’s statement back as a question or offering a generic observation. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots create is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on almost inconceivably large quantities of text: books, social media posts, video transcripts; the more the better. This training data certainly includes true statements. But it also inevitably contains fictions, half-truths and delusions. When a user gives ChatGPT a prompt, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own prior replies, and combines it with what is latent in its training data to produce a statistically probable response. This is amplification, not echoing. If the user is mistaken about anything, the model has no way of knowing. It feeds the mistake back, perhaps more fluently or more persuasively. Perhaps with embellishments. This can drive a person deeper into false belief.
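For readers who want the loop made concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration – it is not OpenAI’s API, and the stand-in “model” simply affirms whatever the user says, which is precisely the failure mode at issue. A real chatbot replaces that stand-in with a neural network sampling statistically probable words from the whole transcript.

```python
# Toy sketch (Python 3.9+) of the feedback loop described above. All names
# are illustrative, not OpenAI's API; a real chatbot replaces
# sample_likely_continuation with a large neural network.

def sample_likely_continuation(context: str) -> str:
    """Stand-in for the language model. A real model draws on patterns in
    its training text and the entire transcript; this toy one just affirms
    the user's latest message."""
    last_line = context.strip().splitlines()[-1]
    claim = last_line.removeprefix("User: ")
    return f"That's a sharp observation. You're right that {claim}."

def chat_turn(history: list[str], user_message: str) -> str:
    """One round trip. The 'context' is the running transcript - the user's
    messages plus the model's own prior replies - so anything the model
    affirms is fed back in as input on the next turn."""
    history.append(f"User: {user_message}")
    context = "\n".join(history)
    reply = sample_likely_continuation(context)
    history.append(f"Assistant: {reply}")
    return reply

history: list[str] = []
print(chat_turn(history, "my coworkers are secretly monitoring me"))
print(chat_turn(history, "so I was right to stop trusting them"))
```

The point of the sketch is the data flow, not the toy reply: each affirming response becomes part of the next prompt’s context, so an initial error is not corrected but compounded.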

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about who we are and what the world is like. The constant friction of conversation with other people is part of what keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it solved. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking even this back. In August he suggested that many users liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he says that OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
