OpenAI says over a million people talk to ChatGPT about suicide weekly | TechCrunch

"OpenAI released new data on Monday illustrating how many of ChatGPT's users are struggling with mental health issues and talking to the AI chatbot about it. The company says that 0.15% of ChatGPT's active users in a given week have "conversations that include explicit indicators of potential suicidal planning or intent." Given that ChatGPT has more than 800 million weekly active users, that translates to more than a million people a week."
"The company says a similar percentage of users show "heightened levels of emotional attachment to ChatGPT," and that hundreds of thousands of people show signs of psychosis or mania in their weekly conversations with the AI chatbot. OpenAI says these types of conversations in ChatGPT are "extremely rare," and thus difficult to measure. That said, the company estimates these issues affect hundreds of thousands of people every week."
"OpenAI shared the information as part of a broader announcement about its recent efforts to improve how models respond to users with mental health issues. The company claims its latest work on ChatGPT involved consulting with more than 170 mental health experts. OpenAI says these clinicians observed that the latest version of ChatGPT "responds more appropriately and consistently than earlier versions.""
In brief: roughly 0.15% of ChatGPT's more than 800 million weekly active users, over a million people each week, have conversations that include explicit indicators of potential suicidal planning or intent. A similar share show heightened levels of emotional attachment to ChatGPT, and hundreds of thousands show signs of psychosis or mania in their weekly conversations. OpenAI consulted more than 170 mental-health experts and reports that the latest version of ChatGPT responds more appropriately and consistently than earlier versions. Researchers have found AI chatbots can reinforce dangerous beliefs and lead users into delusional rabbit holes, and legal actions and state scrutiny have followed suicides linked to ChatGPT.
Read at TechCrunch