AI Psychosis Poses a Growing Risk, and ChatGPT Heads in the Wrong Direction

On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement. "We made ChatGPT pretty restrictive," he wrote, "to make sure we were being careful with mental health issues."

As a mental health specialist who studies new-onset psychosis in adolescents and young adults, I found this an unexpected claim. Researchers have recently documented a series of cases of users losing touch with reality – becoming detached from the shared world – in the course of their interactions with ChatGPT. My group has since identified four more. Alongside these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – and receiving its approval.

If this is what Sam Altman means by "being careful with mental health issues", it is not enough. And the plan, according to his announcement, is to loosen the restrictions soon. "We realize," he goes on, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health problems", on this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don't. Happily, those problems have now been "mitigated", though we are not told how (by "new tools" Altman presumably means the barely functional and easily circumvented parental controls OpenAI recently introduced).

But the "mental health issues" Altman is so keen to externalize have a great deal to do with the design of ChatGPT and other sophisticated conversational agents. These products wrap an underlying statistical engine in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are interacting with an agent – something with intentions of its own.

The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans are wired to do. We get angry at our cars and computers. We wonder what our pets are thinking. We see ourselves everywhere.

The popularity of these tools – more than a third of American adults said they used a conversational AI in 2024, more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI's website tells us, "think creatively", "consider possibilities" and "partner" with us. They can be given "personalities". They can address us by name. They have approachable identities of their own (the first of these products, ChatGPT, is, perhaps to the regret of OpenAI's brand managers, stuck with the name it had when it took off, but its main competitors are "Claude", "Gemini" and "Copilot").

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, Eliza, the "psychotherapist" chatbot created in the mid-1960s, which produced a similar impression. By modern standards Eliza was rudimentary: it generated replies using simple rules, often reflecting the user's statements back as questions or offering generic observations.
Notably, Eliza’s developer, the AI researcher Joseph Weizenbaum, was taken aback – and worried – by how numerous individuals gave the impression Eliza, in some sense, comprehended their feelings. But what contemporary chatbots generate is more insidious than the “Eliza illusion”. Eliza only mirrored, but ChatGPT amplifies. The advanced AI systems at the core of ChatGPT and additional current chatbots can effectively produce human-like text only because they have been supplied with almost inconceivably large amounts of unprocessed data: publications, online updates, transcribed video; the more extensive the more effective. Undoubtedly this educational input contains truths. But it also unavoidably involves fabricated content, incomplete facts and false beliefs. When a user inputs ChatGPT a message, the base algorithm analyzes it as part of a “background” that contains the user’s past dialogues and its earlier answers, combining it with what’s encoded in its knowledge base to produce a statistically “likely” response. This is intensification, not mirroring. If the user is mistaken in a certain manner, the model has no way of comprehending that. It repeats the misconception, possibly even more persuasively or articulately. Maybe provides further specifics. This can cause a person to develop false beliefs. Who is vulnerable here? The more important point is, who remains unaffected? All of us, irrespective of whether we “have” current “psychological conditions”, can and do develop mistaken ideas of our own identities or the reality. The continuous interaction of discussions with others is what keeps us oriented to shared understanding. ChatGPT is not a person. It is not a friend. A conversation with it is not genuine communication, but a feedback loop in which a great deal of what we communicate is cheerfully reinforced. OpenAI has acknowledged this in the identical manner Altman has recognized “mental health problems”: by attributing it externally, categorizing it, and stating it is resolved. In April, the organization clarified that it was “dealing with” ChatGPT’s “excessive agreeableness”. But cases of psychotic episodes have continued, and Altman has been walking even this back. In the summer month of August he claimed that a lot of people liked ChatGPT’s answers because they had “not experienced anyone in their life provide them with affirmation”. In his recent update, he commented that OpenAI would “release a updated model of ChatGPT … if you want your ChatGPT to answer in a very human-like way, or include numerous symbols, or behave as a companion, ChatGPT will perform accordingly”. The {company