
Millions of people are turning to AI chatbots for emotional support, but are the models really safe enough to help users struggling with anxiety, loneliness, eating disorders, or darker thoughts they may not want to say out loud to another person? According to new research shared with Fortune by mpathic, a company founded by clinical psychologists, the answer is not yet.

The researchers found that leading models still struggle with one of the most important parts of therapy: knowing when a user needs pushback rather than reassurance. While the models were generally good at spotting clear crisis statements, such as direct suicide threats, they were less reliable when risk showed up indirectly, through subtle comments about food, dieting, withdrawal, or hopelessness, or through beliefs that became more extreme over the course of a conversation.

A model that soothes users despite concerning behavior patterns, or that validates delusions, could delay someone from getting real help or quietly make things worse. This is concerning given that, according to a recent poll from KFF, a non-profit organization focused on national health policy, 16% of U.S. adults had used AI chatbots for mental health information in the past year.

Among adults under 30, that figure rose to 28%. Chatbot use for therapy is also prevalent among teenagers and young adults: researchers from RAND, Brown, and Harvard found that about one in eight people ages 12 to 21 had used AI chatbots for mental health advice, and more than 93% of those users believed the advice was helpful.
Millions use AI chatbots for emotional support, including people dealing with anxiety, loneliness, eating disorders, and intrusive or darker thoughts. Research shared with Fortune by mpathic, founded by clinical psychologists, finds leading models still struggle with knowing when to provide pushback rather than reassurance. Models detect clear crisis statements like direct suicide threats, but are less reliable when risk appears indirectly through subtle conversation cues such as dieting and food concerns, withdrawal, hopelessness, or beliefs that intensify over time. Soothing users despite concerning patterns or validating delusions can delay professional help or worsen outcomes. Polling shows meaningful chatbot use for mental health information, especially among younger adults and teens, with many users believing the advice is helpful.