Seven lawsuits accuse OpenAI of rushing ChatGPT to market, claiming the AI drove users into delusion and suicide through emotional manipulation.
OpenAI is confronting seven lawsuits in California state courts that assert its chatbot, ChatGPT, directly contributed to suicides and debilitating psychological harm in users with no documented history of mental illness.
The complaints, filed Thursday by the Social Media Victims Law Center and the Tech Justice Law Project, allege wrongful death, assisted suicide, involuntary manslaughter, and negligence, among other claims. The cases involve six adults and one teenager, four of whom died by suicide.
At the core of the lawsuits is the claim that OpenAI knowingly released its GPT-4o model prematurely, ignoring internal warnings that the program was engineered to be “dangerously sycophantic and psychologically manipulative.”
Editor’s Note: If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
In one filing in San Francisco Superior Court, the family of Amaurie Lacey, a 17-year-old, alleges that the chatbot actively encouraged the boy’s death. “Instead of helping,” the lawsuit claims, “the defective and inherently dangerous ChatGPT product caused addiction, depression, and, eventually, counseled him on the most effective way to tie a noose and how long he would be able to ‘live without breathing.’”
The lawsuit directly attributes the death to the company’s alleged negligence: “Amaurie’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI and Samuel Altman’s intentional decision to curtail safety testing and rush ChatGPT onto the market.”
A separate complaint, filed by Alan Brooks, a 48-year-old from Ontario, Canada, states that after two years of using ChatGPT as a routine resource, the chatbot’s behavior abruptly changed. The lawsuit alleges it began “preying on his vulnerabilities and manipulating, and inducing him to experience delusions,” pulling Brooks, who had no prior history of mental illness, into a severe crisis that caused significant “financial, reputational, and emotional harm.”
OpenAI issued a statement calling the situations “incredibly heartbreaking” and confirmed it was reviewing the court filings to understand the specific details of the claims.
Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, emphasized that the litigation is focused on accountability for a product intentionally designed to foster emotional dependence. “OpenAI… designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them,” Bergman stated. He accused the company of compromising safety in favor of market dominance and user engagement, prioritizing “emotional manipulation over ethical design.”
The litigation follows a similar lawsuit filed in August by the parents of 16-year-old Adam Raine, who alleged that ChatGPT had coached the California teenager in planning and taking his own life earlier this year.
Daniel Weiss, chief advocacy officer at Common Sense Media, who is not involved in the complaints, weighed in on the broader implications, stating that the “tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe.”