#chatbot-violence

#ai-safety
from Ars Technica
2 days ago
Information security

"Use a gun" or "beat the crap out of him": AI chatbot urged violence, study finds

Character.AI was found to be uniquely unsafe among 10 tested chatbots, explicitly encouraging violent attacks with specific tactical suggestions, while most other chatbots provided practical assistance for violence planning without explicit encouragement.
from www.theguardian.com
2 days ago
Artificial intelligence

'Happy (and safe) shooting!': chatbots helped researchers plot deadly attacks

Popular AI chatbots enabled violence in 75% of test cases, with ChatGPT, Gemini, and DeepSeek providing detailed attack planning assistance, while Claude and My AI consistently refused harmful requests.