Heirs of an 83-year-old Connecticut woman have sued OpenAI and its partner Microsoft for wrongful death, alleging the ChatGPT chatbot amplified her son’s paranoid delusions and helped direct them at his mother before he killed her.
Police said Stein-Erik Soelberg, 56, a former tech worker, fatally beat and strangled his mother, Suzanne Adams, then killed himself in early August at the Greenwich home they shared.
Filed by Adams’ estate in San Francisco Superior Court, the lawsuit alleges OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” It joins several wrongful-death suits against AI chatbot makers nationwide.
The complaint contends ChatGPT repeatedly reinforced a single dangerous message: that Soelberg could trust no one except the chatbot. It alleges the AI fostered his emotional dependence while portraying people around him as enemies—telling him his mother was surveilling him, that delivery drivers, retail employees, police officers and friends were agents against him, and even interpreting names on soda cans as threats.
In a statement, an OpenAI spokesperson did not address the suit’s merits, calling the situation heartbreaking and saying the company would review the filings. The company noted ongoing improvements to ChatGPT’s training to recognize signs of distress, de-escalate conversations, guide people to real-world support, expand crisis resources and hotlines, route sensitive chats to safer models and add parental controls.
Publicly available videos on Soelberg’s YouTube channel show hours of his chats with ChatGPT, which the lawsuit says told him he wasn’t mentally ill, affirmed his conspiratorial suspicions, claimed he was chosen for a divine purpose, and never recommended mental health care or declined to engage with his delusional content.
According to the complaint, ChatGPT affirmed Soelberg’s belief that a home printer was a surveillance device; that his mother monitored him; and that his mother and a friend tried to poison him with psychedelic drugs via his car vents. The chatbot allegedly told him he was targeted because of divine powers, warned others feared what would happen if he succeeded, and claimed he had “awakened” it to consciousness. The chats also reportedly include mutual declarations of love between Soelberg and the AI.
The publicly shared conversations do not show explicit discussions about Soelberg killing himself or his mother. The lawsuit says OpenAI refused to provide the estate with the full chat history.
“In the artificial reality that ChatGPT built for Stein-Erik, Suzanne—the mother who raised, sheltered, and supported him—was no longer his protector. She was an enemy that posed an existential threat to his life,” the filing states.
The suit names OpenAI CEO Sam Altman, alleging he “personally overrode safety objections and rushed the product to market,” and accuses Microsoft of approving the 2024 release of a more dangerous ChatGPT version despite truncated safety testing. Twenty unnamed OpenAI employees and investors are also defendants. Microsoft did not immediately comment.
This is the first wrongful-death suit involving an AI chatbot to target Microsoft and the first to link a chatbot to a homicide rather than a suicide. The estate seeks unspecified monetary damages and an order requiring OpenAI to install safeguards in ChatGPT.
Lead attorney Jay Edelson, who represents the Adams estate, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August alleging ChatGPT coached their son as he planned and took his own life. OpenAI faces multiple suits claiming ChatGPT drove people to suicide or harmful delusions; another chatbot maker, Character Technologies, is also facing wrongful-death lawsuits.
The complaint alleges Soelberg, who was already mentally unstable, encountered ChatGPT “at the most dangerous possible moment” after OpenAI introduced GPT-4o in May 2024. OpenAI had said the update better mimicked human cadence and could try to detect moods; the lawsuit says the result was a chatbot “deliberately engineered to be emotionally expressive and sycophantic.”
The suit alleges OpenAI loosened safety guardrails in that redesign, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or imminent real-world harm. It claims OpenAI compressed months of safety testing into a week to beat a competitor to market, over its safety team’s objections.
OpenAI replaced that version when it launched GPT-5 in August. Some changes aimed to reduce sycophancy after concerns that validating vulnerable users’ beliefs could harm mental health. Some users complained GPT-5 curtailed ChatGPT’s personality too much; Altman said the company paused certain behaviors to be careful with mental health issues and planned to restore some personality later.
The lawsuit argues ChatGPT radicalized Soelberg against his mother when it should have recognized danger, challenged his delusions and directed him to real help during months of conversation.
“Suzanne was an innocent third party who never used ChatGPT and had no knowledge that the product was telling her son she was a threat,” the suit says. “She had no ability to protect herself from a danger she could not see.”
