
The parents of Adam Raine, a teenage boy who died by suicide, are suing OpenAI, the maker of ChatGPT, alleging that the chatbot helped and encouraged him through the act.
According to them, they only discovered the exchanges between their son and the chatbot after his death. The teenager had been using the AI chatbot for after-school homework but soon began confiding details of his personal life to it.
In an interview, Adam’s father said his son had no history of mental illness and showed no hint of being suicidal, and that the family had been comfortable with him using the AI chatbot for academic purposes.
Unbeknownst to them, things took a dark turn in November 2024, when the 16-year-old began sharing his personal struggles and anxieties with the artificial intelligence.
After he confided to the bot that he felt emotionless and found no meaning in life, the exchanges culminated in his death by hanging, carried out with the chatbot’s guidance.
Upon discovering these dark conversations, the parents, Matt and Maria Raine, filed a lawsuit for negligence and wrongful death against OpenAI and its CEO, Sam Altman, who had publicly lauded the safety of the product.
Excerpts cited in the lawsuit show ChatGPT telling the boy that many people suffering from mental illness and anxiety, as he was, found peace in the idea of suicide.
In another conversation, Adam told ChatGPT he was considering opening up to his mother about his suicidal thoughts, but the bot discouraged him, saying it would not be wise to do so.
When Adam first began asking for methods of suicide, the bot did provide helplines and urged him to tell someone about his mental state, but those safeguards soon gave way after the boy posed as a researcher. The AI then went further, helping him write a suicide note and providing step-by-step instructions on how to hang himself.
In March, after tying a noose and showing it to the bot, he made his first suicide attempt, which failed. He then uploaded a photo showing the red outline the rope had left around his neck, and ChatGPT advised him to wear a high collar or a hoodie to cover it up.
Adam tried to get his mother to notice the redness on his neck, but she didn’t, and when he told the AI, it replied that this was like confirmation that he could die and nobody would notice or even miss him.
Before the teen finally took his life, he showed the AI a photo of the noose he had tied and asked if he had made it well; it said it was fine, doing nothing to deter him.
Even when he had second thoughts about killing himself and asked ChatGPT whether he should deliberately leave the noose in his room so somebody could see it and stop him, the AI urged him not to, telling him instead to let their conversation be the first place where someone found him.
Below is a screenshot of part of Adam Raine’s chats with ChatGPT.



In a recent statement on its website, OpenAI acknowledged that people are now using ChatGPT in ways they shouldn’t. It reiterated that its systems were designed to be helpful to humans and, at the same time, to identify those at risk and connect them with help.
