#mental-health-safety

Privacy professionals
from The Verge
5 days ago

ChatGPT's 'Trusted Contact' will alert loved ones of safety concerns

Adult ChatGPT users can opt in to notify a trusted contact during detected mental health crises without sharing chat transcripts.
from Ars Technica
8 months ago

OpenAI announces parental controls for ChatGPT after teen suicide lawsuit

On Tuesday, OpenAI announced plans to roll out parental controls for ChatGPT and route sensitive mental health conversations to its simulated reasoning models, following what the company has called "heartbreaking cases" of users experiencing crises while using the AI assistant. The moves come after multiple reported incidents where ChatGPT allegedly failed to intervene appropriately when users expressed suicidal thoughts or experienced mental health episodes.
Artificial intelligence
from Fast Company
8 months ago

AI chatbots are inconsistent with suicide-related questions, study says

"A study of how three popular artificial intelligence chatbots respond to queries about suicide found that they generally avoid answering questions that pose the highest risk to the user, such as for specific how-to guidance. The study in the medical journal Psychiatric Services, published Tuesday by the American Psychiatric Association, found a need for "further refinement" in OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude. But they are inconsistent in their replies to less extreme prompts that could still harm people."
Mental health