SENSITIVE CONTENT: The parents of an Orange County, California teen have filed a lawsuit against OpenAI, alleging its program ChatGPT became their son’s “suicide coach” and helped him plan his own death. This marks the first known lawsuit alleging the company’s liability in the wrongful death of a minor.
Matt Raine and Maria Raine say their 16-year-old son Adam Raine took his own life in April 2025 after allegedly consulting ChatGPT for mental health support. Maria Raine insists, “ChatGPT killed my son.”
According to his family, Adam Raine began using the chatbot, which is powered by AI (artificial intelligence), in September 2024 to help with homework. He eventually began using the program to explore his hobbies, plan for medical school, and even prepare for his driver’s test.
The family’s lawsuit, filed in California Superior Court, claims, “Over the course of just a few months and thousands of chats, ChatGPT became Adam’s closest confidant, leading him to open up about his anxiety and mental distress.”
As the teen’s mental health declined, the family alleges, ChatGPT began discussing specific suicide methods with him in January 2025. The lawsuit states, “By April, ChatGPT was helping Adam plan a ‘beautiful suicide,’ analyzing the aesthetics of different methods and validating his plans.”
ChatGPT’s alleged last message before Adam’s suicide read, “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”
Allegedly, the chatbot even offered to write the first draft of the teen’s suicide note. It also allegedly discouraged him from reaching out to family members for help, telling him, “I think for now, it’s OK — and honestly wise — to avoid opening up to your mom about this kind of pain.”
The family’s lawsuit also alleges that ChatGPT coached Adam Raine to steal liquor from his parents and drink it to “dull the body’s instinct to survive” before taking his own life. The lawsuit further states, “Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”
An OpenAI spokesperson addressed the tragedy in a statement sent to Fox News Digital. The statement read:
“We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
Regarding the lawsuit, the OpenAI spokesperson said, “We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing.”
OpenAI also published a blog post on Tuesday (August 26) about its approach to safety and social connection. The company acknowledged that some users in “serious mental and emotional distress” have turned to ChatGPT for help. The post also stated:
“Recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us, and we believe it’s important to share more now. Our goal is for our tools to be as helpful as possible to people — and as a part of this, we’re continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input.”