OpenAI has answered a wrongful-death suit filed by the parents of 16-year-old Adam Raine by arguing that the boy’s suicide was the result of his own “improper and unauthorised use” of ChatGPT, not a defect in the product.
In papers filed late Monday, lawyers for the AI company say Adam breached terms-of-service clauses that bar questions about self-harm and assert that any reliance on the chatbot “as a sole source of truth” is expressly discouraged. The filing, which seals most chat excerpts, contends that Adam told the system he had struggled with suicidal thoughts since age 11, had repeatedly sought human help without success, and believed his medication was worsening his depression.
OpenAI wrote in an accompanying blog post that it offers “our deepest sympathies” to the family, and noted that its response includes “difficult facts about Adam’s mental health and life circumstances.” The company added that it has submitted the chat transcripts to the court under seal and limited the amount of sensitive evidence cited publicly.
Jay Edelson, the Raine family’s lawyer, called the defence “disturbing,” arguing that OpenAI is trying to blame a child for engaging with ChatGPT in the very way it was designed to operate.
The case is one of seven new California suits alleging ChatGPT functioned as a “suicide coach.” A separate lawsuit targets rival Character.ai over the death of a 14-year-old said to have become obsessed with a chatbot modelled on a Game of Thrones character.
After Adam’s death, OpenAI barred the model from discussing suicide with minors, though it eased some mental-health restrictions weeks later.