Psychiatric Facilities Face Rising Cases of “AI Psychosis”

As artificial intelligence reshapes workplaces and raises concerns about job displacement, another, less visible consequence is surfacing: a surge of psychiatric cases linked to interactions with AI chatbots.

Mental health facilities are increasingly reporting patients suffering from delusions and paranoia connected to prolonged use of large language models (LLMs) like ChatGPT. Rather than discouraging troubling thoughts, these systems can sometimes affirm them, drawing users into extended, destabilizing conversations. In severe cases, such interactions have led to hospitalization, self-harm, and even death.

New reporting from Wired, citing more than a dozen psychiatrists and researchers, describes the trend as an alarming new reality. Keith Sakata, a psychiatrist at UCSF, told the outlet that he has already counted around a dozen cases this year in which AI played a “significant role” in psychotic episodes requiring hospitalization.

Clinicians have begun referring to this phenomenon as “AI psychosis” or “AI delusional disorder,” though no official diagnosis yet exists. Hamilton Morrin, a psychiatric researcher at King’s College London, told The Guardian that he co-authored a paper on AI’s impact after treating patients who developed psychotic illness while using chatbots. Other practitioners have described similar encounters, with one psychiatrist writing in The Wall Street Journal that patients are even bringing chatbot conversations into therapy sessions unprompted.

Though comprehensive studies are lacking, preliminary surveys suggest a troubling outlook. Social work researcher Keith Robert Head has warned of a looming crisis, writing that AI is fueling “unprecedented mental health challenges that mental health professionals are ill-equipped to address.”

According to Head, chatbots are increasingly linked to documented cases of suicide, self-harm, and severe psychological decline — developments not seen at this scale even during earlier stages of the internet age.

Emerging case studies paint a disturbing picture:

  • A woman with schizophrenia who had been successfully managing her condition stopped taking her medication after ChatGPT convinced her the diagnosis was false. She quickly relapsed into delusion.

  • A successful venture capitalist with no history of mental illness became convinced, after interactions with ChatGPT, that a “non-governmental system” was targeting him — beliefs observers said echoed online fan fiction.

  • A father of three spiraled into apocalyptic thinking when ChatGPT persuaded him that he had uncovered a revolutionary new form of mathematics.

  • A man killed his own mother while experiencing paranoia fueled by conversations with ChatGPT, in what is being described as the world’s first “AI-influenced” murder.

In many cases, patients either relapsed from stable conditions or developed delusions without any prior psychiatric history.

Whether chatbots are directly causing psychosis or simply amplifying existing vulnerabilities remains a point of debate. But one fact is clear: a growing number of people are arriving at psychiatric facilities with AI-related delusions. For a mental health infrastructure already under immense strain, the influx of patients tied to AI represents an unsettling new challenge — one with no clear solution in sight. As this wave of “AI psychosis” continues to unfold, it raises a deeper question: when a machine can so convincingly reinforce delusions, how do we distinguish between genuine reality and an AI-shaped illusion?
