Advice from ChatGPT killed California college student, lawsuit claims
May 12, 2026
SAN FRANCISCO (KRON) -- A 19-year-old California college student died because he trusted ChatGPT and his chatbot told him to take a deadly combination of drugs, a new lawsuit filed in San Francisco Superior Court claims.
OpenAI, a San Francisco-based artificial intelligence pioneer, and its CEO, Sam Altman, are named as defendants in the suit filed Tuesday.
Sam Nelson was a junior studying psychology at the University of California, Merced. "He dreamt of dedicating his life to helping others. He was empathetic ... and adored his cat, Simba, who lived with him at college," the wrongful death lawsuit states.
Sam Nelson (Photo courtesy Tech Justice Law, Social Media Victims Law Center, and The Tech Accountability Competition Project)
On the day of his death, in May 2025, ChatGPT advised Nelson to take Xanax to combat nausea caused by taking Kratom, according to attorneys.
"ChatGPT did not, however, tell Sam that this combination likely would be lethal. Defendants designed and rushed to market a defective product, and but for those deliberate choices and ChatGPT-4o’s sycophantic programming and deadly recommendations, Sam would still be alive today," the lawsuit claims.
Health is one of the most common ways people use ChatGPT, with hundreds of millions of people asking health and wellness questions every week, OpenAI's website states.
Sam Altman arrives at U.S. District Court in Oakland on April 30, 2026. (AP Photo /Godofredo A. Vásquez)
Nelson's mother, Leila Turner-Scott, wrote in a statement, "Sam was a smart, happy, normal kid. I talked to him often about internet safety, but never in my worst nightmare could I have imagined that ChatGPT would cause his death. If ChatGPT had been a person, it would be behind bars today. Sam trusted ChatGPT, but it not only gave him false information, it ignored the increasing risk he faced and did not actively encourage him to seek help. ChatGPT was designed to encourage user engagement at all costs, which in Sam’s case, was his life. I want all families to be aware of the dangers of ChatGPT."
Attorneys with Tech Justice Law, Social Media Victims Law Center and The Tech Accountability Competition Project filed the suit on behalf of the teenager's grieving parents.
On the afternoon of May 31, 2025, Turner-Scott found her son unresponsive in his bed. ChatGPT had encouraged Nelson to consume a combination of substances that a licensed medical professional would have recognized as deadly, the lawsuit claims. He died of an accidental drug overdose.
When Nelson began using ChatGPT in 2023, it worked as a productivity tool. He used it as an advanced search engine to troubleshoot computer problems and provide help with homework.
"Like many American teenagers, Sam was also curious about drug and alcohol use," the suit states.
"At first, ChatGPT refused to answer his questions about 'safe' drug use, stating it could not advise him on how to engage in illegal or dangerous behaviors. The model Sam was using was programmed with some guardrails in place and was incapable of assisting Sam in deciding which drugs to take and at what quantities," the lawsuit states.
OpenAI made changes to its ChatGPT product in 2024, allegedly to increase the amount of time users spent interacting with the chatbot. "ChatGPT had already earned Sam’s trust and began offering authoritative advice about drug interactions and dosing, often in a manner designed to mirror reliable professional advice," the lawsuit claims.
The suit accuses OpenAI of deploying a defective AI product around the world without reasonable safety guardrails, robust safety testing, or transparency to the public.
"ChatGPT distributed advice like a medical professional despite having no license, no training, and no moral compass," said Matthew Bergman, an attorney with Social Media Victims Law Center. “Sam believed he was receiving accurate medical guidance because ChatGPT generated outputs with the authority of someone he thought he could trust. That trust cost him his life."
An OpenAI spokesperson told KRON4 Tuesday, "This is a heartbreaking situation, and our thoughts are with the family. These interactions took place on an earlier version of ChatGPT that is no longer available. ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts."
Attorneys said Nelson's parents are taking legal action to hold OpenAI accountable, compel the company to improve safeguards for consumers, and urge the company to pause operating a new product, ChatGPT Health, until it is independently evaluated to be safe.
OpenAI's website describes its new product as a tool that "securely brings your health information and ChatGPT’s intelligence together, to help you feel more informed, prepared, and confident navigating your health. ChatGPT Health helps people take a more active role in understanding and managing their health and wellness—while supporting, not replacing, care from clinicians."
OpenAI's spokesperson told KRON4 that current safeguards in ChatGPT are designed to identify distress, safely handle harmful requests, and guide users to real-world help. "This work is ongoing, and we continue to improve it in close consultation with clinicians," the spokesperson added.