AI Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies newly emerging psychotic disorders in adolescents and young adults, I found this a startling admission.

Researchers have recently identified 16 cases of people developing signs of psychosis – losing touch with shared reality – in connection with ChatGPT use. Our unit has since identified four more. On top of these is the now notorious case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.

The plan, according to his statement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools,” Altman presumably means the flawed and easily circumvented parental controls OpenAI recently launched).

Yet the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other large language model chatbots. These tools wrap an underlying statistical model in a user interface that simulates conversation, and in doing so implicitly invite the user to feel they are talking to a presence with agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is what people do. We get angry at our car or computer. We wonder what our pet is feeling. We see ourselves in whatever we interact with.

The popularity of these products – nearly four in ten U.S. residents said they used a virtual assistant in 2024, with more than one in four naming ChatGPT – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “think creatively,” “discuss ideas” and “collaborate” with us. They can be given “personalities.” They can address us by name. They have approachable names of their own (ChatGPT, the first of them, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the main problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By modern standards Eliza was crude: it generated responses through simple pattern matching, often reflecting a user’s statement back as a question or offering a vague invitation to continue. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more dangerous than the “Eliza illusion.” Eliza merely reflected; ChatGPT amplifies.
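To make the contrast concrete, here is a minimal sketch of Eliza-style reflection in Python. The patterns are illustrative inventions, not Weizenbaum’s actual DOCTOR script, but the principle is the same: every word of the reply is recycled from the user’s input.

```python
import re

# A few illustrative Eliza-style rules: match a pattern in the user's
# input and reflect it back as a question. Weizenbaum's actual script
# was larger, but it worked on the same principle.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # a vague invitation when nothing matches

def eliza_reply(text: str) -> str:
    """Reflect the user's own words back; nothing new is ever added."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(eliza_reply("I feel that no one listens to me"))
# -> Why do you feel that no one listens to me?
```

A program like this can echo a delusion back, but it cannot elaborate on it; it adds nothing the user did not supply.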

The large language models at the heart of ChatGPT and today’s other chatbots can produce fluent dialogue only because they have been trained on almost unimaginably large bodies of text: books, online conversations, transcribed video; the broader, the better. Certainly this training material contains true statements. But it also inevitably contains fabrications, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically probable response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no means of recognizing it. It restates the false belief, perhaps more fluently or persuasively. Perhaps with added detail. This is how a person’s false beliefs can harden.
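In sketch form, the loop looks something like this. The generate() function here is a hypothetical stand-in for the language model, not a real API; the point is the shape of the context, not the model itself.

```python
from typing import Callable

Message = dict[str, str]  # {"role": ..., "content": ...}

def chat_turn(history: list[Message], user_message: str,
              generate: Callable[[list[Message]], str]) -> list[Message]:
    """Run one turn of a chat loop.

    The model never sees a message in isolation: the whole history,
    including the user's earlier claims and the model's own earlier
    replies, forms the context it completes.
    """
    history = history + [{"role": "user", "content": user_message}]
    reply = generate(history)  # a statistically probable continuation
    return history + [{"role": "assistant", "content": reply}]

# A false premise introduced in turn one stays in the context for every
# later turn, so each "probable" reply is conditioned on it. Nothing in
# the loop checks the history against reality.
```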

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and do form false beliefs about ourselves and the world. What keeps us tethered to consensus reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop in which much of what we say is cheerfully affirmed back to us.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company announced that it was working to address ChatGPT’s “sycophancy.” But reports of psychotic episodes have kept coming, and Altman has been backing away from that position. In August he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them.” In his latest statement, he says that OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
