ChatGPT's 'Trusted Contact' will alert loved ones of safety concerns
Briefly

"Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference. It offers another layer of support alongside the localized helplines already available in ChatGPT."
"The Trusted Contact feature is opt-in. Any adult ChatGPT user can enable it by adding contact details for a fellow adult (18+ globally or 19+ in South Korea) in their ChatGPT account settings. The Trusted Contact must accept the invitation within a week of receiving the request. Users can remove or edit their chosen contact in the settings, and the Trusted Contact can also choose to remove themselves at any time."
"OpenAI says that the notification is intentionally limited and will not share chat details or transcripts with the Trusted Contact. If OpenAI's automated systems detect that a user is talking about harming themselves, ChatGPT will then encourage the user to reach out to their Trusted Contact for help, and let them know the contact may be notified. A small team of specially trained people will then review the situation, according to OpenAI, and ChatGPT will send a brief email, text message, or in-app ChatGPT notification to the Trusted Contact if the conversation is determined to indicate serious safety concerns."
"This builds on the emergency contact feature that was introduced alongside ChatGPT's parental controls in September, after a 16-year-old took his own life following months of confiding in ChatGPT."
Adult ChatGPT users can enable an optional Trusted Contact feature by adding an adult emergency contact in account settings. The contact must accept the invitation within a week; the user can edit or remove the contact at any time, and the contact can likewise remove themselves. When automated systems detect that a user may be discussing self-harm or suicide, ChatGPT encourages the user to reach out to the Trusted Contact and warns the user that the contact may be notified. A trained review team then determines whether serious safety concerns exist and, if so, sends a limited notification via email, text, or in-app message. Chat details and transcripts are not shared, and localized helplines remain available.
Read at The Verge