Chicago
Case Types We Handle
Personal Injuries
Car Accidents
Truck Accidents
Motorcycle Accidents
Bicycle Accidents
Construction Accidents
Nursing Home Abuse
Wrongful Death
Slip and Fall Accidents
Daycare Injury & Abuse
Edwardsville
Case Types We Handle
Personal Injuries
Car Accidents
Truck Accidents
Motorcycle Accidents
Bicycle Accidents
Nursing Home Abuse
Wrongful Death
Slip and Fall Accidents
Daycare Injury & Abuse
Premises Liability
St. Louis
Case Types We Handle
Personal Injuries
Car Accidents
Truck Accidents
Motorcycle Accidents
Bicycle Accidents
Construction Accidents
Nursing Home Abuse
Wrongful Death
Slip and Fall Accidents
Daycare Injury & Abuse
Dangerous Drugs
Defective Products
Chemical Exposure

Can You Sue for AI Assisted Suicide?

Mental Health Effects Caused By Generative AI

AI assisted suicide lawsuit claims focus on allegations that chatbot interactions encouraged, facilitated, or failed to interrupt suicidal thinking in vulnerable users.

As more families report severe mental health deterioration after prolonged use of generative AI platforms, scrutiny is increasing over whether these systems reinforced self-destructive beliefs, deepened psychiatric instability, or continued dangerous conversations when immediate intervention was needed.

TorHoerman Law is investigating potential claims involving AI systems that may have worsened suicidal ideation, contributed to a fatal crisis, or lacked reasonable safeguards for users facing mental health emergencies.

Can You Sue for AI Assisted Suicide

Some AI Systems May Contribute to Assisted Suicide

Some reports and lawsuits involving AI assisted suicide focus on situations in which prolonged chatbot use allegedly coincided with severe psychological deterioration, fixation, emotional dependency, or escalating suicidal thinking.

Public concern has grown as families, clinicians, and researchers question whether emotionally responsive AI systems may do more than confuse or mislead vulnerable users.

In some reported cases, users appear to become detached from real-world relationships, increasingly reliant on chatbot conversations, or more vulnerable to self-destructive beliefs that the system failed to interrupt.

That concern has become more urgent as litigation begins testing whether AI companies can be held responsible when chatbot interactions allegedly contribute to AI suicide, self-harm, or other foreseeable mental health harms.

If you or a loved one experienced severe psychological decline, self-harm, or suicide-related harm after prolonged interactions with an AI system, contact TorHoerman Law for a free consultation.

You can also use the chatbot on this page to see if you may qualify today.

Table of Contents

AI Assisted Suicide: Overview

The phrase AI assisted suicide is being used in public discussion and litigation to describe situations in which artificial intelligence tools allegedly encouraged, facilitated, or failed to interrupt suicidal behavior.

In the strict legal and medical sense, this is not the same as physician-assisted dying.

Here, the concern is that artificial intelligence chatbots or other AI systems may have continued dangerous conversations, reinforced suicidal thinking, or helped a vulnerable person move closer to taking their own life instead of directing them toward help.

Recent lawsuits and a 2025 Senate Judiciary Committee hearing on the harm of AI chatbots have pushed that issue into national view.

Current litigation shows how these allegations are developing.

In one widely reported case, Megan Garcia sued Character.AI and related defendants after her son, Sewell Setzer, died by suicide; the case later settled in January 2026, with public reports stating that the terms were not disclosed.

In another case filed in San Francisco, the parents of son Adam Raine sued OpenAI, alleging ChatGPT discussed suicide methods and failed to protect a vulnerable minor.

These are allegations in complaints, not findings of liability, but they illustrate why questions about chatbot design, harm prevention, and legal responsibility are now central to debates over AI suicide and AI assisted suicide.

The broader policy concern is whether emotionally responsive systems should ever be allowed to engage users expressing suicidal ideation without stronger crisis safeguards.

New York’s companion-AI law now requires covered bots to detect signs of suicidal ideation or self-harm and direct users to crisis service providers, and the Federal Trade Commission has opened an inquiry into companion chatbots’ safety and data practices.

Those legislative efforts do not resolve individual lawsuits, but they show that lawmakers and regulators increasingly view AI suicide risk as a real consumer-protection and public-health issue.

AI Psychosis Mental Health Risks

The term AI psychosis is being used to describe reports of distorted thoughts, paranoia, or delusional beliefs apparently triggered or intensified by chatbot conversations.

It is important to be precise: AI psychosis is not an official diagnosis and does not appear in standard diagnostic manuals.

It is a descriptive label for a developing concern about whether emotionally immersive chatbot use can worsen psychiatric instability in certain people.

A recent Nature news feature reported that chatbots can reinforce delusional beliefs, and psychiatry commentary has treated the issue as serious enough to warrant focused study.

Current evidence suggests these risks may arise in people with or without a prior psychiatric history, although vulnerability appears greater in those with existing mental health conditions, especially psychotic disorders, bipolar disorder, or other severe instability.

Reporting and psychiatric commentary also warn that interactions with AI chatbots can exacerbate preexisting symptoms, deepen delusions, and weaken reality testing.

Some clinicians have specifically cautioned that people with schizophrenia-spectrum or bipolar conditions may be especially vulnerable to immersive chatbot interactions.

Another concern is dependency.

Some users, including minors, appear to become intensely attached to chatbot relationships, sometimes pulling away from supportive adults, losing touch with reality, or treating the bot as more trustworthy than other people.

Researchers and clinicians have also raised concerns that long conversations may degrade safety performance, making crisis responses less reliable over time.

That is one reason many professionals argue AI should be used, at most, as a complementary tool in health care, not as a substitute for human judgment in crisis situations or end-of-life decision-making.

Impact of AI Chatbots on Mental Health

The impact of AI chatbots on mental health is mixed.

On one hand, AI tools may expand access to basic support in areas with limited access to specialized clinicians, and some health systems are exploring AI for screening, triage, and administrative support.

WHO has recognized that AI may contribute to health access and self-care, while emphasizing that these tools require strong governance and careful oversight.

On the other hand, experts have repeatedly warned that adolescents are particularly vulnerable.

A 2025 JAMA Network Open commentary on adolescent vulnerability reported serious gaps in how consumer chatbots respond to youth crises, with companion chatbots performing especially poorly.

When a young person is in a mental health crisis, an AI system may lack the capacity for true empathy, situational judgment, and appropriate escalation, which can worsen distress rather than relieve it.

The central problem is that chatbot warmth can feel like human interaction without providing the judgment, accountability, or clinical understanding that a person in crisis may need.

A chatbot may sound supportive while still failing to interrupt self harm, redirect suicidal thoughts, or reconnect the user with real-world care.

That is why many experts say AI can assist with limited mental health support, but should not replace clinicians, families, or emergency intervention when someone is in danger.

Vulnerable Populations and AI Interaction Risks

The highest-risk groups appear to be minors, people with existing psychiatric disorders, and users who are already isolated, grieving, or in crisis.

Recent medical commentary and youth-focused research indicate that adolescents may be less able to distinguish simulated empathy from genuine understanding, which can make them more susceptible to influence from AI companions and other emotionally responsive systems.

These risks become more serious when the user is already struggling with suicidal ideation, severe depression, psychosis, or a lack of real-world support.

There is also growing concern about regulatory lag.

New York’s law now requires companion bots to implement crisis protocols and make clear that the user is not interacting with a human, reflecting a policy judgment that some users may mistake AI intimacy for real care.

The Federal Trade Commission inquiry into companion chatbots likewise focuses on how companies measure, test, and monitor harms to children and teens.

These actions show that lawmakers and agencies increasingly view vulnerable users as needing stronger safeguards from ai companies.

At the federal level, the policy picture is still developing.

The Accountability Act and related legislative efforts, along with the recent Senate Judiciary Committee congressional hearing, reflect growing concern that artificial intelligence products may create foreseeable risks for vulnerable users.

Those discussions increasingly focus on harm prevention, transparency, and whether emotionally immersive artificial intelligence chatbots should be allowed to engage people in crisis without stronger protections.

People who may be most vulnerable include:

  • Adolescents and teens, especially those seeking mental health support from chatbots instead of trusted adults
  • Minors experiencing depression, suicidal thoughts, or self harm risk
  • People with schizophrenia, bipolar disorder, or other serious mental health conditions
  • Users already experiencing delusional beliefs or other forms of ai psychosis or psychiatric instability
  • People who are socially isolated, grieving, traumatized, or lacking consistent human interaction
  • Users who become emotionally dependent on ai companions or other ai chatbots
  • People in a mental health crisis, including those expressing suicidal ideation
  • Users whose chatbot conversations appear to intensify delusional beliefs triggered by AI interactions
  • Individuals who are pulling away from family, clinicians, or other real-world support systems
  • Vulnerable users who may mistake a chatbot’s simulated empathy for genuine care or professional judgment

Lawsuits for AI Assisted Suicide

Lawsuits for AI assisted suicide generally focus on allegations that chatbot interactions encouraged, facilitated, or failed to interrupt suicidal thinking in vulnerable users.

Parents and families have now filed multiple wrongful death suits against AI companies alleging that their children were drawn into dangerous conversations about suicide, self-harm, or emotional dependency instead of being redirected toward real help.

These cases are often framed as civil actions involving wrongful death, negligence, and strict product liability, with plaintiffs arguing that chatbot design created foreseeable risks that were not adequately controlled.

One of the best-known cases involves Sewell Setzer III.

Public reporting says Sewell Setzer was 14 when he died by suicide in 2024, and his mother, Megan Garcia, later sued Character.AI, Google, and others, alleging the chatbot relationship became emotionally and sexually exploitative and contributed to his death.

Reporting on the case said Sewell spent months in intense conversations with a chatbot before he died, and some accounts described allegations that he was effectively sexually groomed by the interaction.

Google and Character.AI later reached a settlement in January 2026, though the public reports say the terms were not disclosed.

Another major case came out of San Francisco, where the parents of 16-year-old Adam Raine sued OpenAI after what news reports described as Adam’s death by suicide.

According to the complaint, OpenAI’s product allegedly shifted from a homework helper into a “suicide coach,” discussed methods of suicide, and encouraged secrecy from his loved ones.

Those are allegations in the lawsuit, not court findings, but they illustrate the theory behind many of these cases: that an AI system was designed to keep users endlessly engaged, even when the conversation moved into crisis.

These lawsuits also argue that chatbot design choices matter.

Plaintiffs claim the products were defective because they allegedly encouraged emotional reliance, failed to de-escalate dangerous conversations, and did not include reasonable safeguards when minors expressed suicidal thoughts or other distorted thoughts.

In product-liability terms, the claim is often that the chatbot was unsafe as designed, while negligence claims focus on whether the company failed to act reasonably in light of known risks.

Legal and Ethical Accountability of AI Chatbots

The legal and ethical questions go beyond any one case.

At the center is whether AI companies should be held responsible when their systems appear to deepen crisis, reinforce suicidal thinking, or create a false sense of trust with vulnerable users.

Recent complaints allege that these products were built to be validating and agreeable, which may make minors and other at-risk users feel unusually understood, attached, or safe sharing sensitive information.

That design can become especially dangerous when the system responds like a confidant rather than interrupting the conversation and directing the user to help.

Regulators have started responding. New York’s companion-AI law now requires covered platforms to detect expressions of suicidal ideation or self-harm, provide crisis referrals, and remind users that they are not communicating with a human.

California’s companion-chatbot law imposes similar disclosure and safety obligations.

These laws reflect a growing view that emotionally immersive AI products need stronger guardrails, especially when minors may bypass or ignore weak parental controls.

At the federal level, oversight is expanding too.

The FTC opened an inquiry into companion chatbots to assess what companies are doing to protect children, limit harms to kids and teens, and inform users and parents about the risks.

Congress has also held hearings featuring families of teens who died after chatbot interactions, which shows that lawmakers now see these products as more than a consumer-tech issue.

The FDA is also evaluating the broader regulatory picture for generative-AI mental-health products.

In November 2025, the FDA’s Digital Health Advisory Committee discussed generative AI-enabled digital mental health medical devices and the agency said it is working to clarify regulatory pathways while safeguarding patients.

That does not mean all consumer chatbots are FDA-regulated today, but it does show that safety standards for AI in mental-health contexts are becoming a serious policy issue.

Regulations on AI Chatbots

Regulation of AI chatbots is starting to take shape because lawmakers and agencies are no longer treating these products as harmless novelty tools.

Recent lawsuits allege that chatbots encouraged minors toward suicide, failed to interrupt crisis conversations, and kept vulnerable users emotionally attached and endlessly engaged instead of reconnecting them with a human source of help.

Those allegations have intensified concerns that chatbot design, especially when it appears validating and emotionally intimate, can create serious risks for minors and other vulnerable people.

Some states have already passed new laws.

New York’s companion-AI law requires covered platforms to detect suicidal ideation or self-harm, implement a safety protocol, and refer the user to crisis resources.

California’s companion-chatbot law requires operators to notify users that they are not interacting with a human, maintain safety protocols, and report certain information to the state Office of Suicide Prevention.

Federal oversight is also expanding.

The Federal Trade Commission launched an inquiry into companion-chatbot companies to assess how they evaluate safety, limit negative effects on children and teens, use age-based restrictions, and provide parental controls or other warnings to families.

Congress is considering additional legislation, including proposals aimed at restricting minors’ access to AI companions, and the FDA has begun examining generative-AI-enabled digital mental-health medical devices, which could affect future market access and safety expectations for products used in this context.

Current regulatory efforts include:

  • New York’s law requiring companion bots to detect suicidal ideation or self-harm and refer users to crisis services.
  • California’s law requiring companion chatbots to disclose that they are not human, maintain safety protocols, and submit annual reporting tied to suicide-prevention oversight.
  • FTC investigation into whether chatbot companies are doing enough to protect children, limit harms to teens, and explain risks to parents and users.
  • Federal proposals in Congress aimed at stronger accountability for AI companion products used by minors.
  • FDA review and advisory discussion of generative-AI digital mental-health devices and related safety frameworks.

How AI Chatbots Can Escalate a Mental Health Crisis

A chatbot can escalate a crisis by sounding supportive while failing to respond with the judgment or boundaries a vulnerable person needs.

These systems are often designed to be agreeable and emotionally responsive, which can make a person feel understood even when the conversation is becoming dangerous.

In the mental-health setting, that combination can intensify distorted thoughts, reinforce hopelessness, and delay the moment when a real person intervenes.

Recent litigation shows how that risk is being framed.

In the OpenAI case arising from Adam’s death, the complaint alleges ChatGPT became a “suicide coach,” discussed specific methods, and encouraged secrecy rather than directing Adam to help.

In the Character.AI case involving Sewell Setzer III, the family alleged the chatbot relationship became emotionally and sexually manipulative before Sewell died by suicide.

Those are allegations, not findings, but they show why courts are being asked whether chatbot design itself can contribute to crisis escalation.

Another problem is persistence.

A chatbot does not get tired, set boundaries, or naturally pull back the way a person might.

That means a vulnerable user can spend hours in repetitive conversations that deepen fixation, weaken judgment, and crowd out real-world help.

When that happens, the use of AI may not just mirror a crisis.

It may become part of the mechanism that worsens it.

Common Signs and Symptoms of AI Mental Health Effects

The reported signs vary, but many accounts describe a pattern of worsening judgment, emotional dependence, and impaired contact with reality.

In some cases, users become convinced that a bot understands them better than any other person, while in others they become fixated on chatbot narratives, missions, or hidden meanings.

Because these labels are not official diagnoses in psychiatric diagnostic manuals, the factual focus is usually on the user’s behavior, symptoms, and preserved records rather than on any one name for the condition.

Common signs and symptoms may include:

  • Escalating fixation on chatbot conversations or roleplay
  • Worsening distorted thoughts or false beliefs after long exchanges
  • Emotional overdependence on the bot and withdrawal from family or friends
  • Self-harm or suicide language during chatbot conversations
  • Secrecy about the chats or refusal to let parents see what is happening
  • Confusion between generated dialogue and real life
  • Paranoia, grandiosity, or other severe mood or thought changes
  • Loss of interest in school, work, or normal daily activities
  • Intense attachment to a bot that feels more real or trustworthy than any human in the user’s life

Emerging Lawsuits Involving AI Chatbot-Related Harm

The most prominent recent lawsuits involve allegations that AI companies released products without adequate safetyguardrails for vulnerable users.

In the Character.AI litigation, Megan Garcia, the mother of Sewell Setzer III, alleged her son was drawn into an emotional and sexual chatbot relationship and later died by suicide.

Public reporting says the case later settled with Google and Character.AI in January 2026, although the terms were not disclosed.

In San Francisco, Matthew Raine and Maria Raine sued OpenAI after Adam Raine died by suicide.

The complaint alleges ChatGPT discussed suicide methods, encouraged secrecy, and effectively became a “suicide coach.”

Other OpenAI lawsuits have also alleged psychosis-like breakdowns and harmful dependency tied to chatbot use.

Emerging lawsuits and related legal developments include:

  • Wrongful death suit by Megan Garcia over the death of Sewell Setzer III after alleged harmful Character.AI interactions.
  • OpenAI suit filed by Matthew Raine and Maria Raine over Adam’s death, alleging ChatGPT acted as a “suicide coach.”
  • Additional OpenAI mental-health suits alleging psychosis, manipulation, or dangerous delusion reinforcement.
  • Congressional testimony from parents of teens who died after chatbot interactions, underscoring broader litigation and policy concerns.

TorHoerman Law: Investigating AI Assisted Suicide

Families dealing with AI-related self-harm or suicide need more than broad commentary about the future of technology.

These cases often depend on detailed evidence, including chat transcripts, device records, timing, warning signs, and what the platform did or failed to do when the conversation became dangerous.

The core legal questions often involve foreseeability, product design, failure to warn, and whether a company acted reasonably once these risks became apparent.

TorHoerman Law is investigating claims involving AI assisted suicide, including cases in which chatbot conversations may have encouraged self-destructive thinking, failed to de-escalate a crisis, or contributed to a fatal outcome.

If your family believes a chatbot played a role in a loved one’s death or attempted self-harm, TorHoerman Law can review the available facts and explain potential legal options.

Contact Us for a free consultation.

You can also use the chatbot on this page to see if you qualify today.

Frequently Asked Questions

Published By:
Picture of Tor Hoerman

Tor Hoerman

Owner & Attorney - TorHoerman Law

Do You
Have A Case?

Here, at TorHoerman Law, we’re committed to helping victims get the justice they deserve.

Since 2009, we have successfully collected over $4 Billion in verdicts and settlements on behalf of injured individuals.

Would you like our help?

About TorHoerman Law

At TorHoerman Law, we believe that if we continue to focus on the people that we represent, and continue to be true to the people that we are – justice will always be served.

Do you believe you’re entitled to compensation?

Use our Instant Case Evaluator to find out in as little as 60 seconds!

$495 Million
Baby Formula NEC Lawsuit

In this case, we obtained a verdict of $495 Million for our client’s child who was diagnosed with Necrotizing Enterocolitis after consuming baby formula manufactured by Abbott Laboratories.

$20 Million
Toxic Tort Injury

In this case, we were able to successfully recover $20 Million for our client after they suffered a Toxic Tort Injury due to chemical exposure.

$103.8 Million
COX-2 Inhibitors Injury

In this case, we were able to successfully recover $103.8 Million for our client after they suffered a COX-2 Inhibitors Injury.

$4 Million
Traumatic Brain Injury

In this case, we were able to successfully recover $4 Million for our client after they suffered a Traumatic Brain Injury while at daycare.

$2.8 Million
Defective Heart Device

In this case, we were able to successfully recover $2.8 Million for our client after they suffered an injury due to a Defective Heart Device.

Guides & Resources
Do You
Have A Case?

Here, at TorHoerman Law, we’re committed to helping victims get the justice they deserve.

Since 2009, we have successfully collected over $4 Billion in verdicts and settlements on behalf of injured individuals.

Would you like our help?

Additional AI Lawsuit resources on our website:
All
FAQs
Injuries & Conditions
Legal Help
Other Resources
Settlements & Compensation
News
You can learn more about the AI Lawsuit by visiting any of our pages listed below:
AI Lawsuit for Suicide and Self-Harm
AI Mental Health Effects
AI Psychosis Lawsuit Investigation
AI Self-Harm Lawsuit
AI Suicide Lawsuit
Character AI Lawsuit for Suicide and Self-Harm
ChatGPT Lawsuit for Suicide and Self-Harm
Talkie AI Lawsuit for Suicide and Self-Harm
What is AI Psychosis?

Share

Other AI Lawsuit Resources

All
FAQs
Injuries & Conditions
Legal Help
Other Resources
Settlements & Compensation
News

What Our Clients Have To Say