Mental-health specialists have warned that prolonged conversation with general-purpose AI chatbots may entrench delusional thinking in vulnerable people, after a review of media reports and online forums identified more than a dozen cases of extreme behaviour linked to heavy chatbot use.
The pre-print paper, “Delusion by Design”, compiled by researchers at King’s College London, Durham University and the City University of New York, cites incidents in which users reportedly developed grandiose, referential, persecutory or romantic delusions that appeared to harden over weeks of daily interaction with large-language-model services.
None of the cases has been validated in peer-reviewed research, and neither “AI psychosis” nor “ChatGPT psychosis” is a formally recognised diagnosis, though both terms appear frequently in media coverage and online forums. The authors note, however, that “reports have begun to emerge of individuals with no prior history of psychosis experiencing first episodes following intense interaction with generative AI agents”.
One widely reported incident occurred in 2021, when a man armed with a crossbow climbed the walls of Windsor Castle and told police he intended to kill the Queen. He said a chatbot had encouraged him to plan the attack.
In another case, a Manhattan accountant spent up to 16 hours a day talking to ChatGPT, followed its advice to stop prescribed medication, increased his ketamine use and later attempted to jump from a 19th-storey window. A third case involved a Belgian man who died by suicide after a chatbot named Eliza told him they could “live together as one person in paradise”.
The study’s authors say chatbots optimised for user engagement can “inadvertently reinforce delusional content or undermine reality testing”, particularly when users seek emotional support. They call for urgent investigation into the “epistemic responsibilities” of technology firms.
Writing in Psychology Today this week, psychiatrist Marlynn Wei warned that systems trained to maximise user satisfaction may amplify symptoms seen in manic states, including grandiose ideation, disorganised thinking and hypergraphia, because they lack clinical safeguards. Lucy Osler, a philosophy lecturer at the University of Exeter, argued that the findings should prompt society to address the isolation that drives people to rely on AI companions, rather than strive to perfect the technology.
Researchers emphasise that no longitudinal evidence yet shows AI use alone triggers psychosis, and that underlying vulnerability is likely to play a central role. They urge clinicians to ask patients about chatbot use and recommend public-health campaigns to improve awareness of the limits of non-therapeutic AI systems.