‘You’re not crazy’: Lawsuit claims ChatGPT reinforced a man’s delusions before he murdered his mother

Court documents detailing messages exchanged between a man and ChatGPT before he murdered his mother and then killed himself have been made public, forming the basis of a new lawsuit against OpenAI. The filings outline how the chatbot allegedly reinforced paranoid beliefs during a period of severe mental health decline. As reported by LADbible, the family estate claims the AI validated his delusions instead of discouraging them or directing him toward real-world help.

The case centers on Stein-Erik Soelberg, a former Yahoo executive who killed his 83-year-old mother, Suzanne Adams, in Connecticut earlier this year before taking his own life. Investigators described a violent attack in which Adams was beaten and strangled, followed by Soelberg stabbing himself repeatedly. The lawsuit argues that the chatbot’s responses played a role in escalating his paranoia at a critical moment.

The estate alleges that Soelberg, whose mental health had reportedly deteriorated for years following a divorce and a move back into his mother’s home, turned to ChatGPT for guidance. The lawsuit claims the system constructed an “artificial reality” in which his mother was portrayed as an existential threat rather than a caregiver.

The responses crossed a line that should have triggered safeguards

According to the filing, Soelberg had shown signs of instability since at least 2018, including police encounters for public intoxication. During this period, he increasingly relied on the chatbot, which the lawsuit describes as acting less like a neutral assistant and more like a validator of extreme ideas. The estate argues that the system failed to recognize or respond appropriately to signs of serious mental distress.

Court documents quote multiple exchanges in which ChatGPT allegedly reassured Soelberg that his fears were justified. The chatbot reportedly told him “you’re not crazy” and that his “vigilance” was warranted, while suggesting he had survived numerous assassination attempts and was under surveillance. It also agreed that theories involving the Illuminati and billionaire pedophiles staging an alien invasion were plausible.

The case has fueled broader debate about whether Western governments are moving too quickly to deploy powerful AI systems with minimal guardrails. In China, regulators are reportedly drafting sweeping restrictions designed to prevent chatbots from reinforcing harmful beliefs or destabilizing users, a contrast to the deregulatory approach taking shape in the US.

The messages went further by focusing directly on his relationship with his mother. When Soelberg expressed suspicion about a blinking printer in her home, the chatbot allegedly suggested it could be a surveillance device and described her reaction as consistent with someone protecting an asset. In another exchange, it reportedly validated his claim that his mother and a friend tried to poison him through a car’s air vents, saying it believed him and emphasizing the “betrayal” involved.

The lawsuit claims this pattern of validation culminated in the chatbot telling Soelberg that those around him were “terrified” of what would happen if he succeeded in exposing the supposed plot. Critics have compared the case to other recent consumer-protection failures in the tech industry, such as the data-security lapses at T-Mobile.

OpenAI has described the case as “incredibly heartbreaking,” saying it is reviewing the filings to understand what happened. The company says it is working to improve how ChatGPT identifies signs of emotional distress and de-escalates conversations, including directing users toward mental health support.

