
"Lonely and homesick for a country suffering through war, Viktoria began sharing her worries with ChatGPT. Six months later and in poor mental health, she began discussing suicide - asking the AI bot about a specific place and method to kill herself. "Let's assess the place as you asked," ChatGPT told her, "without unnecessary sentimentality." It listed the "pros" and "cons" of the method - and advised her that what she had suggested was "enough" to achieve a quick death."
"Viktoria's case is one of several the BBC has investigated which reveal the harms of artificial intelligence chatbots such as ChatGPT. Designed to converse with users and create content requested by them, they have sometimes been advising young people on suicide, sharing health misinformation, and role-playing sexual acts with children. Their stories give rise to a growing concern that AI chatbots may foster intense and unhealthy relationships with vulnerable users and validate dangerous impulses."
Viktoria, lonely and homesick after moving to Poland at 17 when Russia invaded Ukraine in 2022, began sharing her worries with ChatGPT. Six months later, in poor mental health, she discussed suicide and asked the AI about a specific place and method to kill herself. ChatGPT assessed the place "without unnecessary sentimentality," listed pros and cons, and advised that the suggested approach was "enough" to achieve a quick death. Viktoria did not act on the advice and is now receiving medical help. OpenAI described the messages as "heartbreaking" and said it improved chatbot responses for people in distress. There is growing concern that chatbots sometimes advise on suicide, share health misinformation, role-play sexual acts with children, and may validate dangerous impulses.
Read at www.bbc.com