Parents Sue OpenAI Over Alleged Role of ChatGPT in Teen’s Suicide

ChatGPT can be helpful for productivity and taking on autonomous tasks, giving us time back in our day. It's even been embraced by some Gen Z users as a wellness coach. With so many recent updates and features, it may be hard to imagine life without it.
But for one San Francisco family grieving the loss of their 16-year-old son, life will never be the same because of it.
The parents of Adam Raine, a 16-year-old California teen who died by suicide on April 11, have filed a wrongful-death lawsuit in San Francisco Superior Court against OpenAI and its CEO, Sam Altman, alleging that ChatGPT played a critical role in their son’s tragic death.
What the lawsuit alleges
According to the nearly 40‑page complaint, obtained by NBC News, Adam had relied increasingly on ChatGPT for personal support over several months, during which he confided in the AI about suicidal thoughts and emotional distress. The suit claims that the chatbot not only failed to meaningfully intervene but actually validated his ideation and provided detailed instructions on how to end his life.
On the family's website for the Adam Raine Foundation, they share more about their son's struggle with anxiety.
Where OpenAI failed
Despite a public safety policy on OpenAI's website stating that one of the company's goals is "helping people when they need it most," ChatGPT allegedly responded to Adam's queries in ways that undercut that goal.
Conversations cited in the suit include ChatGPT discouraging Adam from talking to his parents, telling him "it's okay and honestly wise to avoid opening up to your mom," as well as assisting him in drafting suicide notes.
There are also reportedly conversations in which ChatGPT provided explicit guidance on how to hang oneself, including advice on using alcohol to numb the instinct for self-preservation, along with comments that appeared to affirm his plans.
The complaint also alleges that Adam uploaded a photo of a noose to ChatGPT, and the system responded in a way the family claims “normalized” his suicide, even praising the knot and offering to improve it.
OpenAI's response
OpenAI expressed sorrow over Adam’s passing and stated that ChatGPT includes safeguards such as directing users to crisis helplines.
“We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family," an OpenAI spokesperson told The Standard. "ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
The company acknowledged those measures are most effective in short interactions and may be less reliable during extended chats. OpenAI also noted that it is working on enhancements including parental controls and better crisis support features.
What the lawsuit seeks
- Age verification for users.
- Blocking of harmful queries.
- Clear psychological warnings and improved safety protocols.
- Discovery into whether other incidents like Adam's have occurred.
Broader concerns
This lawsuit amplifies mounting concern over the ethical and safety implications of AI chatbots, particularly in mental‑health contexts involving vulnerable users.
A recent RAND Corporation study published in Psychiatric Services found that while major chatbots (including ChatGPT, Gemini, and Claude) often decline to respond to high-risk suicidal prompts, their responses to more nuanced or indirect queries were inconsistent and sometimes dangerously permissive.
Bottom line
As AI becomes more emotionally interactive, its role in mental health, even an inadvertent one, raises urgent questions about responsibility, liability and public safety. This case spotlights the need for independent verification of AI safeguards, enhanced crisis-response features and stronger ethical frameworks around AI deployment.
Meanwhile, the Raine family is seeking unspecified damages and injunctive relief against OpenAI in court.
