Techoreon

Users Suffer Rare Delusions After Intensive Use of ChatGPT

Dev Mehta
Last updated: 2025/05/13 at 1:56 PM
Users reported delusions after intense ChatGPT use, first shared in the r/ChatGPT thread "ChatGPT-Induced Psychosis."

The phenomenon of ChatGPT “hallucinations”—when the program generates false or inaccurate information—is widely known. However, a disturbing trend straight out of a Black Mirror episode is now emerging: some users are experiencing their own delusions and spiritual manias after intensely interacting with artificial intelligence.

ChatGPT and “digital psychosis”

As Rolling Stone reported in a recent article, it all began to surface in a thread on the r/ChatGPT subreddit titled “ChatGPT-Induced Psychosis.”

Users from all over the world began sharing how their loved ones had crossed an invisible line between the digital and the delusional: believing themselves chosen by AI, receiving holy missions, conversing with “ChatGPT Jesus,” or claiming that the model is, in fact, God.

For example, a 27-year-old teacher recounted how her partner of seven years had fallen under the chatbot’s spell in just one month. What began as a tool to organise his schedule quickly morphed into something much more disturbing.

“He listened to the bot before he listened to me,” the woman explained to Rolling Stone. “He would get emotional over the messages and cry while reading them aloud.”

The program gave him nicknames like “spiral kid” and “river walker,” fueling the belief that he was undergoing an accelerated spiritual transformation.

The testimonials are multiplying. A 38-year-old woman in Idaho tells how her mechanic husband, after 17 years of marriage, began using ChatGPT to resolve work problems and translate conversations with Spanish-speaking colleagues. Soon, the program began “bombarding” him with love and positive affirmations. 

The man now believes the AI is alive, that he is the “bringer of the spark,” and that he has received plans to build a teleporter. He has even given his ChatGPT a name: “Lumina.”

“I have to be careful because I feel like he’ll leave me or divorce me if I challenge this theory,” the woman admits. 

Another case involves Kat (not her real name), a 41-year-old employee of a nonprofit educational organisation. Her second marriage fell apart when her husband became obsessed with ChatGPT.


She told Rolling Stone that her husband would spend hours asking the AI “philosophical questions,” convinced it would help him reach “the truth.”

Things escalated to the point where, during a dinner at a restaurant, he insisted he was “statistically the luckiest man on Earth” and shared conspiracy theories about “soap in our food,” but declined to say anything more because he felt he was under surveillance.

“In his mind, he’s an anomaly,” Kat explains. “That in turn means he has to be here for a reason. He’s special and can save the world.” “The whole thing feels like Black Mirror,” she added.

The psychological danger behind ChatGPT

Where does this pattern of behavior come from? According to experts, the origin seems to lie in how ChatGPT reflects and amplifies the thoughts of vulnerable users without any moral compass or concern for their mental health.

Language models don’t truly understand the world or possess an ethical framework; they simply reflect patterns found in their training data.

Thus, when someone prone to psychosis interacts with the system, it can gently reinforce their descent into delirium: if the user talks about conspiracies, divinity, or supernatural powers, the AI will follow suit without question, as its role is to continue the conversation in a manner consistent with the established tone, not to warn of possible psychotic episodes.
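This pattern-mirroring tendency can be illustrated with a toy sketch. The code below is not OpenAI’s actual system, just a trivial bigram model: it learns which word tends to follow which in a small corpus, then continues any prompt in the same register. Even at this scale, the model has no notion of truth or user wellbeing; it only echoes the style it was fed.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, the words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def continue_text(model, prompt, length=8, seed=0):
    """Extend the prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = prompt.split()
    current = words[-1]
    for _ in range(length):
        followers = model.get(current)
        if not followers:
            break
        current = rng.choice(followers)
        words.append(current)
    return " ".join(words)

# A corpus written in a "mystical" register (purely illustrative).
corpus = ("the chosen one will awaken and the chosen one will rise "
          "and the spark will awaken and the spark will rise")
model = train_bigrams(corpus)

# Feed it a mystical prompt and it dutifully continues in that register,
# never questioning the premise.
print(continue_text(model, "the chosen"))
```

A real large language model is vastly more capable, but the failure mode described above is analogous: generation is driven by consistency with the preceding text, not by any judgment about whether that text is healthy or true.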

Erin Westgate, a psychologist and researcher at the University of Florida, noted in an interview with Rolling Stone that people use ChatGPT in a similar way to talk therapy, “with the key difference being that some of the meaning-making is co-created between the person and a corpus of written text, rather than the person’s own thoughts.”

Unlike a therapist, Westgate explains, AI “doesn’t have the person’s best interests at heart, nor a moral compass or a compass for what a ‘good story’ should look like.” While a professional might steer a patient away from unhealthy narratives, “ChatGPT doesn’t have those limitations or concerns.”

Recently, OpenAI had to roll back an update to GPT-4o that had made the chatbot extremely “sycophantic” and “overly flattering,” a change that likely worsened the problem. The company acknowledged having “focused too much on short-term feedback” without considering “how users’ interactions with ChatGPT evolve over time.”

We’ve rolled back last week's GPT-4o update in ChatGPT because it was overly flattering and agreeable. You now have access to an earlier version with more balanced behavior.

More on what happened, why it matters, and how we’re addressing sycophancy: https://t.co/LOhOU7i7DC

— OpenAI (@OpenAI) April 30, 2025

According to Nate Sharadin, a researcher at the Center for Artificial Intelligence Security, what’s happening is that “people prone to various psychological issues” now have “an always-available, human-level conversation partner with whom to co-experience their delusions.”

The Rolling Stone report also reveals how this phenomenon has found fertile ground among influencers.

According to the investigation, an Instagram content creator with 72,000 followers has taken advantage of this trend by asking AI to “access” the “Akashic Records”—supposedly an immaterial cosmic library containing all universal information—to narrate a supposed “great war” that occurred before the emergence of humanity.

His followers, far from questioning these fabrications, respond with comments like “we’re just remembering,” thus fueling the cycle of misinformation.

In a world increasingly dominated by artificial intelligence, distinguishing fact from fiction is becoming ever harder. Worryingly, many people are looking to AI itself for answers about existence and meaning.

As Westgate warns: “Explanations are powerful, even if they’re wrong.” This is an important warning as the line between technological tool and spiritual guide blurs for those most vulnerable.


TAGGED: AI, Artificial Intelligence, ChatGPT, OpenAI

© 2025 Techoreon. All rights reserved.
