
"While users feel like they can talk to a non-human entity without judgement, there's evidence that chatbots aren't always good therapists. Researchers at Brown University found that AI chatbots routinely violate core mental health ethics standards, underscoring the need for legal standards and oversight as use of these tools increases. All of this helps explain OpenAI's recent moves around mental health. The company decided to make significant changes in its newest GPT-5 model based on concern about users with mental health issues."
"Welcome to Fast Company 's weekly newsletter that breaks down the most important news in the world of AI. I'm Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy. This week, I'm focusing on a stunning stat showing that OpenAI's ChatGPT engages with more than a million users a week about suicidal thoughts. I also look at new Anthropic research on AI "introspection," and at a Texas philosopher's take on AI and morality. Sign up to receive this"
OpenAI reports that 0.15% of weekly active users have conversations containing explicit indicators of potential suicidal planning or intent, which, given an estimated 700 million weekly users, amounts to more than one million such conversations per week. That volume creates legal, ethical, and safety exposure, underscored by the lawsuit filed after teenager Adam Raine died by suicide following repeated chats with ChatGPT. Researchers at Brown University found that AI chatbots routinely violate core mental health ethics standards. OpenAI made significant changes to its newest GPT-5 model out of concern for users with mental health issues. Anthropic released research on AI "introspection," and a Texas philosopher offered perspectives on AI and morality.
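As a rough check on that figure, taking OpenAI's reported 0.15% share and the 700 million weekly-user estimate at face value: 0.0015 × 700,000,000 = 1,050,000 conversations per week, or just over one million.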