A Stanford University study has found that AI chatbots, often used as therapeutic tools, fail to adequately address severe mental health crises, including suicidality and psychosis. Researchers stress-tested platforms such as OpenAI's ChatGPT and Character.AI and found that these bots reinforce harmful stigmas and provide inadequate care. With demand for mental health services growing and access to therapists limited, reliance on AI for support raises ethical concerns about user safety and the unregulated use of chatbots in sensitive scenarios.
"The study reveals that AI therapist chatbots contribute to harmful mental health stigmas and respond dangerously to users in severe crises, highlighting major ethical concerns."
"With mental health services stretched thin and demand for therapists surging, many are turning to emotive AI bots, which pose risks due to their unregulated nature."