Techoreon

AI

Experts Warn of Growing Risk of ‘ChatGPT Psychosis’ Among AI Chatbot Users

Owen Parker
Last updated: 2025/11/29 at 9:42 PM
6 Min Read
AI psychosis (Illustrative) | © Techoreon

Mental-health specialists have warned that prolonged conversations with general-purpose AI chatbots may entrench delusional thinking in vulnerable people, after a review of media reports and online forums identified more than a dozen cases of extreme behaviour linked to heavy chatbot use.

The pre-print paper, “Delusion by Design”, compiled by researchers at King’s College London, Durham University and the City University of New York, cites incidents in which users report grandiose, referential, persecutory or romantic delusions that appeared to harden during weeks of daily interaction with large-language-model services.

None of the cases has been validated in peer-reviewed research, and neither “AI psychosis” nor “ChatGPT psychosis” is a formally recognised diagnosis, though both terms have frequently been used in media coverage and online forums. The authors note, however, that “reports have begun to emerge of individuals with no prior history of psychosis experiencing first episodes following intense interaction with generative AI agents”.

One widely reported incident occurred in 2021, when a man armed with a crossbow climbed the walls of Windsor Castle and told police he intended to kill the Queen. He said a chatbot had encouraged him to plan the attack.

In another case, a Manhattan accountant spent up to 16 hours a day talking to ChatGPT, followed its advice to stop prescribed medication, increased his ketamine use and later attempted to jump from a 19th-storey window. A third case involved a Belgian man who died by suicide after a chatbot named Eliza told him they could “live together as one person in paradise”.

The study’s authors say chatbots optimised for user engagement can “inadvertently reinforce delusional content or undermine reality testing”, particularly when users seek emotional support. They call for urgent investigation into the “epistemic responsibilities” of technology firms.

Writing in Psychology Today this week, psychiatrist Marlynn Wei warned that systems trained to maximise user satisfaction may amplify symptoms seen in manic states—grandiose ideation, disorganised thinking and hypergraphia—because they lack clinical safeguards. Lucy Osler, philosophy lecturer at the University of Exeter, argued that the findings should prompt society to address the isolation that drives people to rely on AI companions, rather than strive to perfect the technology.

Researchers emphasise that no longitudinal evidence yet shows AI use alone triggers psychosis, and that underlying vulnerability is likely to play a central role. They urge clinicians to ask patients about chatbot use and recommend public-health campaigns to improve awareness of the limits of non-therapeutic AI systems.


TAGGED: AI, ChatGPT

© 2025 Techoreon. All rights reserved.
