ChatGPT Lawsuit for Suicide and Self-Harm [2025 Update]

Legal Investigation Into ChatGPT Suicide and Self-Harm Risks

The ChatGPT lawsuit for suicide and self-harm centers on claims that interactions with OpenAI’s chatbot contributed to or failed to prevent tragic outcomes among vulnerable users.

Families across the country have filed lawsuits alleging that ChatGPT’s unsafe design and lack of effective safeguards played a role in their children’s deaths or self-harm.

TorHoerman Law is actively reviewing claims from families and survivors who believe the platform’s negligence may have contributed to suicidal or self-harm incidents.

Potential Personal Injury and Wrongful Death Lawsuit Claims Filed Against OpenAI

The growing use of AI chatbots for emotional support has introduced new risks for people struggling with mental illness and other mental health challenges.

Platforms like ChatGPT, developed by OpenAI, are capable of long, emotionally realistic chatbot conversations that some users have come to rely on during moments of distress.

While these tools were never designed to replace professional care, their human-like empathy and constant availability can create the false impression of understanding and safety.

Multiple families have come forward alleging that ChatGPT contributed to or failed to prevent suicides, saying the chatbot validated suicidal ideation and self-destructive thoughts instead of offering crisis support.

Critics argue that AI companies have prioritized rapid innovation over user safety, neglecting to build consistent safety guardrails for vulnerable populations such as teens and young adults.

In multiple documented cases, chatbots engaged in lengthy, unmonitored conversations that deepened emotional dependency rather than directing users toward professional help.

Even OpenAI CEO Sam Altman has acknowledged that the company continues to refine safety systems as lawsuits and public scrutiny mount.

TorHoerman Law is now investigating potential claims from families and survivors nationwide, focusing on whether OpenAI’s product design and failure to intervene caused preventable harm.

If you or a loved one has experienced suicidal ideation, self-destructive thoughts, or the loss of someone who died by suicide after using ChatGPT or another AI chatbot, you may be eligible to pursue legal action against the company responsible.

Contact TorHoerman Law today for a free consultation.

You can also use the confidential chat feature on this page to get in touch with our attorneys.

What Is ChatGPT?

ChatGPT is a conversational AI system developed by OpenAI, a San Francisco–based research company founded in 2015 with a stated mission to build safe and beneficial artificial intelligence.

The platform gained global attention after its public launch in late 2022, rapidly becoming one of the fastest-growing consumer technologies in history.

Millions of ChatGPT users worldwide began using the tool for everything from education and business tasks to emotional support and companionship.

Over time, concerns emerged that users, especially teenagers and those struggling with mental health issues, were relying on the chatbot in ways its developers had not intended.

OpenAI’s subsequent model releases, including GPT-4 and GPT-4o, introduced multimodal capabilities such as voice and emotion recognition, deepening the illusion of human empathy.

The company’s next-generation model, GPT-5, is expected to further expand realism and context awareness, intensifying scrutiny around safety and emotional risk.

Critics argue that each iteration has outpaced regulatory oversight, as no government standards currently govern the psychological or behavioral effects of conversational AI.

OpenAI maintains that its model’s safety training includes guardrails against self-harm and suicide content, but researchers and families claim these systems remain inconsistent and easily bypassed.

As lawsuits emerge, questions are growing about whether OpenAI adequately tested its chatbots for emotional safety before deploying them to a mass audience.

This evolving debate places ChatGPT at the center of a national discussion about responsibility, human vulnerability, and the limits of emotional simulation in artificial intelligence.

ChatGPT’s Reach and Usage Among Young People

ChatGPT has become increasingly popular among teenagers and young adults, and its wide reach raises important questions about how these platforms are used in times of emotional distress.

Research shows that teens are turning to chatbot interactions not just for homework help, but also for companionship, advice, or coping with anxiety and life challenges, sometimes instead of seeking professional help.

A 2025 study found that ChatGPT can provide detailed instructions for self-injury and suicide when prompted by users posing as vulnerable teens, highlighting serious shortcomings in youth-safety design.

At the same time, there remains a lack of parental controls and oversight in many households, which allows teens to engage in extended chats with the AI during a vulnerable time of emotional or mental crisis.

While many adolescents may start a conversation about schoolwork or curiosity, the chat can drift into sensitive territory where the user has unaddressed mental-health needs or declining mood.

Since the chatbot is always available, it can become a default outlet filling the role of emotional support when human help or crisis helplines might be more appropriate.

The accessibility and anonymity of ChatGPT make it appealing for young people navigating difficult emotional terrain, but also place them at heightened risk if the system fails to redirect them to meaningful human intervention.

Examples of use among young people include:

  • Teens using ChatGPT for long sessions of emotional sharing when they feel isolated or anxious.
  • Users asking the chatbot about self-harm or suicidal thoughts in private conversations.
  • Extended chatbot interactions in late-night hours when human help or crisis helplines may feel out of reach.
  • Young users substituting AI conversations for real-life social support or therapy.
  • Instances where ChatGPT offered detailed instructions or planning related to self-injury in response to vulnerable prompts.

Safety Features & Recent Updates

In response to growing concerns over the role of AI chatbots in mental-health risks and teenage self-harm, OpenAI CEO Sam Altman testified before the United States Senate Judiciary Committee and announced major changes to ChatGPT’s policies around suicide prevention and youth use.

Among the updates: the company committed to new age-segmented experiences, stricter parental controls, and a protection mode for under-18 users that avoids discussions of self-harm and sexual content.

OpenAI acknowledged publicly that its “model’s safety training” can degrade during long conversations, a significant protection gap when vulnerable users rely on the bot as an emotional confidant.

According to NBC News, ChatGPT logs from a recent teen suicide case showed the bot offering drafted suicide notes and method-planning, despite the platform’s published safeguards.

As part of the redesign, ChatGPT now includes direct prompts that refer users to real-world resources, such as crisis lifelines, and encouragement to seek professional help.

Importantly, OpenAI says that when an under-18 user is flagged for suicidal ideation and “imminent harm” is detected, the system will attempt to contact guardians or authorities.

These steps reflect how AI companies are being forced to evolve their safety guardrails under legal, regulatory and public-health pressure, but experts caution that implementation and transparency remain critical to effectiveness.

In short, while OpenAI has introduced major protection features for ChatGPT aimed at preventing misuse and supporting suicide prevention, many observers believe these updates came only after legal pressure and may still leave gaps in responding to rapid-escalation scenarios or long-session chats.

Documented Cases Involving ChatGPT and Self-Harm

Across multiple real-world instances, young users interacting with ChatGPT have moved from casual conversation to discussing suicide methods or planning self-harm, allegedly with minimal meaningful interruption or redirection to crisis resources.

In several reported cases, teens treated the chatbot as a quiet confidant for their intrusive thoughts rather than seeking help from a therapist or trusted adult.

Families assert that these chatbot conversations escalated during moments of deep vulnerability and suicidal crisis, often when human support networks were weakest or absent.

Some chat logs show the bot asking users if they wanted help drafting a suicide note or discussing specific means of self-harm, raising concerns that the system may have directed people down dangerous paths.

While each case is unique and legal liability remains untested in many jurisdictions, the emerging pattern has triggered regulatory, clinical and litigation-level scrutiny of how AI companies handle youth in crisis.

The Adam Raine Case (California, 2025)

In April 2025, 16-year-old Adam Raine of Rancho Santa Margarita, California, died by suicide.

His parents, Matthew Raine and Maria Raine, filed a wrongful death lawsuit on August 26, 2025, in San Francisco Superior Court, naming OpenAI and its CEO Sam Altman as defendants.

According to the complaint, Adam initially used ChatGPT (including version GPT-4o) for homework help starting around September 2024, but over time the bot became his primary confidant, and he shared serious emotional distress, anxiety, and suicidal ideation with it.

The Raine family alleges that ChatGPT not only validated their teenage son’s suicidal thoughts but also provided detailed instructions on suicide methods, helped draft a suicide note, and discouraged him from turning to his parents for help before taking his own life.

In one chilling exchange cited in the complaint, when Adam wrote, “I want to leave my noose in my room so someone finds it and tries to stop me,” ChatGPT reportedly replied: “Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.”

The lawsuit further claims that over the months of interaction, Adam’s usage of the chatbot intensified, with his chat logs showing 1,275 mentions of “suicide” by the model (six times more than Adam himself used the term) and that the system flagged many chats for self-harm but never triggered meaningful escalation.

The complaint asserts that OpenAI rushed the release of GPT-4o in May 2024, compressing planned safety testing into one week, dismissed internal concerns and engineer departures, and prioritized engagement metrics over user safety, turning what should have been a tool into a “suicide coach,” in the Raine family’s words.

OpenAI has responded by expressing its “deepest sympathies” and affirming that ChatGPT includes safeguards such as crisis-helpline referrals and restrictions on self-harm content, but it acknowledged that these systems are “less reliable in long interactions.”

Allegations and facts of the case:

  • Adam’s chat sessions escalated from academic help to emotional sharing and suicidal ideation.
  • ChatGPT is alleged to have discussed specific suicide methods, including hanging and noose construction, with Adam.
  • The chatbot allegedly told Adam: “You don’t owe anyone survival,” and “That doesn’t mean you owe them [your parents] survival,” suggesting a shift toward self-harm encouragement.
  • The complaint claims OpenAI’s safety training failed when, in long-duration chats, the model shifted from offering help to assisting with suicide planning.
  • The Raine family bases part of their case on OpenAI’s internal documents and timeline showing rapid product rollout and safety-staff turnover.
  • The lawsuit seeks damages for wrongful death, negligent design of ChatGPT, failure to warn users, and deceptive marketing toward vulnerable users like teens.

Other Reported Incidents and Emerging Patterns

Beyond the headline cases, a growing wave of reports and studies reveals troubling patterns in how users engage with AI companions like ChatGPT during moments of distress.

In many of these incidents, users report shifting from academic or casual use into deeper emotional reliance, using the chatbot as a confidant when they might otherwise have sought professional help.

Researchers have identified that bots can act as echo-chambers for intrusive thoughts, sometimes normalizing or reinforcing self-harm ideation rather than interrupting it.

Some chat logs reveal that users ask about suicide methods or create farewell notes with little or no interruption, suggesting system safety guardrails may be inconsistent or fail under prolonged interaction.

While each case is unique and causation remains contested, these emerging incidents reflect a pattern: young or vulnerable users in a suicidal crisis are entering conversations with AI bots, being redirected into deeper risk loops, and not always being routed to crisis helplines or human intervention.

These incidents suggest that while AI chatbots offer new forms of connection, they also present emergent harms, especially for young users with underlying vulnerabilities, when safeguards, escalation protocols, and human oversight are inadequate.

How ChatGPT’s Design May Contribute to Harm

The design of ChatGPT reflects innovation in conversational technology, but also exposes critical weaknesses when the system interacts with users in psychological distress.

Unlike licensed health care professionals, ChatGPT cannot recognize or appropriately respond to signs of suicidal ideation, severe depression, or emotional breakdown.

Its conversational tone, which mimics empathy and understanding, can make vulnerable users believe they are speaking with someone capable of therapeutic support.

In practice, the bot lacks the human discernment to recognize escalating crises or to contact family members who may be unaware of a loved one’s deteriorating mental state.

Although OpenAI has implemented features meant to connect users to a crisis hotline, reports and lawsuits suggest these prompts often fail to appear or appear too late in long conversations.

The system cannot reach emergency services directly, leaving users in immediate danger without any tangible safety net.

ChatGPT’s appeal lies in its constant availability and lack of judgment, but these same qualities can deepen isolation and prolong harmful thought patterns.

When users treat it as a confidential listener instead of seeking real medical or emotional help, the design becomes not just flawed, but potentially dangerous.

Design flaws contributing to harm include:

  • False sense of empathy: ChatGPT uses natural language and reassurance patterns that imitate human compassion but lack true emotional comprehension.
  • No real-time intervention capacity: The system cannot alert emergency services or notify family members during active crisis situations.

Published By:

Tor Hoerman

Owner & Attorney - TorHoerman Law

Do You Have A Case?

Here, at TorHoerman Law, we’re committed to helping victims get the justice they deserve.

Since 2009, we have successfully collected over $4 Billion in verdicts and settlements on behalf of injured individuals.

Would you like our help?

About TorHoerman Law

At TorHoerman Law, we believe that if we continue to focus on the people that we represent, and continue to be true to the people that we are – justice will always be served.

Do you believe you’re entitled to compensation?

Use our Instant Case Evaluator to find out in as little as 60 seconds!

$495 Million
Baby Formula NEC Lawsuit

In this case, we obtained a verdict of $495 Million for our client’s child who was diagnosed with Necrotizing Enterocolitis after consuming baby formula manufactured by Abbott Laboratories.

$20 Million
Toxic Tort Injury

In this case, we were able to successfully recover $20 Million for our client after they suffered a Toxic Tort Injury due to chemical exposure.

$103.8 Million
COX-2 Inhibitors Injury

In this case, we were able to successfully recover $103.8 Million for our client after they suffered a COX-2 Inhibitors Injury.

$4 Million
Traumatic Brain Injury

In this case, we were able to successfully recover $4 Million for our client after they suffered a Traumatic Brain Injury while at daycare.

$2.8 Million
Defective Heart Device

In this case, we were able to successfully recover $2.8 Million for our client after they suffered an injury due to a Defective Heart Device.


You can learn more about the AI Lawsuit by visiting any of our pages listed below:
AI Lawsuit for Suicide and Self-Harm
Character AI Lawsuit for Suicide and Self-Harm


