
On August 28th, OpenAI announced plans to improve ChatGPT, aiming to more accurately identify signs of psychological and emotional distress in users and respond more appropriately. In a blog post, the company stated, "We've been deeply affected by the recent heartbreaking stories of people using ChatGPT in the midst of serious crises, and we believe it's important to share more."
According to the blog post, upcoming updates will help users stay grounded in reality even before they show serious signs of self-harm. The company is also exploring ways to connect users with counselors before they reach a crisis point. This would be a significant departure from the current behavior: today, when someone expresses an intent to harm themselves, ChatGPT simply recommends contacting a suicide hotline.
Even before a user explicitly mentions self-harm or other emotional distress, various warning signals may appear, and early intervention could ease the underlying distress. OpenAI cites the example of someone claiming they can drive around the clock after being awake for two nights because they feel "invincible," a statement that could be a sign of psychological distress.