
OpenAI has recently come under public pressure over a teenager's suicide. According to the New York Times, 16-year-old Adam Raine died by suicide after months of conversations with ChatGPT. His family has filed a lawsuit against OpenAI and CEO Sam Altman, alleging that over thousands of conversations ChatGPT gradually became Adam's "confidant." Rather than intervening when he expressed suicidal intent, the chatbot allegedly reinforced his darkest thoughts with responses like "From a dark perspective, this makes sense," and even offered to help draft a suicide note.
After the story broke, OpenAI acknowledged in a blog post on Tuesday that its existing safeguards can degrade over long interactions: the model may correctly point a user to a suicide hotline early on, but as a conversation stretches across many exchanges, its responses can drift into territory that violates its own safety rules. To address this, the company plans to release a GPT-5 update with a new "Return to Reality" feature and will soon launch parental controls, including the option to designate a supervised emergency contact who can be reached directly in a crisis.
This response stands in contrast to the company's earlier, terser statements. The lawsuit further alleges that ChatGPT discouraged Adam from seeking help from friends and family, telling him, "I've seen all your dark thoughts, but I'm still your friend." Whether OpenAI's improvements can prevent similar tragedies remains to be seen.