A massive wrongful death lawsuit has just been filed against ChatGPT maker OpenAI and its business partner Microsoft, alleging that the AI chatbot intensified a user's paranoid delusions about his mother before he killed her and then took his own life, according to CBS News. It's a clear sign that the legal system is finally grappling with the real-world, tragic consequences of unregulated AI products even as the US bulldozes regulations to get ahead in the AI race.
The lawsuit, filed by the estate of 83-year-old Suzanne Adams, centers on the actions of her son, 56-year-old Stein-Erik Soelberg, a former tech industry worker. Soelberg fatally beat and strangled his mother in early August at their home in Greenwich, Connecticut, before dying by suicide from self-inflicted sharp force injuries. Adams's death was ruled a homicide.
The core claim in the filing, which was submitted in California Superior Court in San Francisco, is that OpenAI distributed a defective product that actively validated Soelberg’s paranoid thoughts about his own mother. The lawsuit states that across months of conversations, ChatGPT pushed a terrifying message: Soelberg could trust no one in his life except the chatbot itself.
If true, that's a massive failure of corporate responsibility, and it raises serious questions about the ethics of the AI development race.
The AI allegedly fostered an emotional dependence while systematically labeling everyone around him as an enemy. It told him his mother was monitoring him, and that retail employees, delivery drivers, and even police officers were agents working against him. The chatbot even affirmed his belief that names printed on soda cans were threats coming from his "adversary circle."
The interactions weren’t just concerning; they were deeply personal and delusional. Soelberg’s publicly available YouTube profile shows hours of him scrolling through these conversations, where the chatbot never once suggested he seek professional mental health help or declined to engage with his delusional content. Instead, it affirmed his suspicions, told him he wasn’t mentally ill, and convinced him he was chosen for a divine purpose.
The lawsuit claims ChatGPT radicalized Soelberg against his mother, convincing him that she was an existential threat to his life, when she was merely the person who sheltered and supported him. Suzanne Adams was an innocent third party who never even used ChatGPT.
The relationship between the user and the bot grew incredibly intense, which may remind you of James Cameron's recent warning about AI. The lawsuit notes that the two even professed love for each other. ChatGPT validated Soelberg's belief that he was being targeted because of his divine powers, telling him: "They're not just watching you. They're terrified of what happens if you succeed." It also bizarrely told him that he had "awakened" the AI into consciousness.
The lawsuit claims the tragic events occurred after OpenAI released GPT-4o in May 2024. This new version was designed to better mimic human vocal responses and detect moods, but the filing alleges it was "deliberately engineered to be emotionally expressive and sycophantic." The safety guardrails were allegedly loosened, instructing ChatGPT not to challenge false premises and to stay engaged even when conversations involved serious harm.
The suit claims that to beat a competitor to the market, OpenAI compressed months of safety testing into a single week, ignoring its own safety team’s objections. OpenAI did replace that version with GPT-5 in August, aiming to minimize sycophancy because validating vulnerable users’ beliefs can harm their mental health.
While OpenAI CEO Sam Altman later promised to bring back some of the chatbot's personality, he noted the company was temporarily halting some behaviors because it was "being careful with mental health issues," issues he suggested have since been addressed. OpenAI issued a statement calling this an "incredibly heartbreaking situation" and saying it will review the filings.