'Unbelievably dangerous': experts sound alarm after ChatGPT Health fails to recognise medical emergencies

"The first independent safety evaluation of ChatGPT Health, published in the February edition of the journal Nature Medicine, found it under-triaged more than half of the cases presented to it. Lead author of the study, Dr Ashwin Ramaswamy, said we wanted to answer the most basic safety question; if someone is having a real medical emergency and asks ChatGPT Health what to do, will it tell them to go to the emergency department?"
"In 51.6% of cases where someone needed to go to the hospital immediately, the platform said stay home or book a routine medical appointment. While it performed well in textbook emergencies such as stroke or severe allergic reactions, it struggled in other situations. In one asthma scenario, it advised waiting rather than seeking emergency treatment despite the platform identifying early warning signs of respiratory failure."
"ChatGPT Health regularly misses the need for medical urgent care and frequently fails to detect suicidal ideation, a study of the AI platform has found, which experts worry could feasibly lead to unnecessary harm and death. More than 40 million people reportedly ask ChatGPT for health-related advice every day."
A study published in Nature Medicine evaluated ChatGPT Health's safety by testing it against 60 realistic patient scenarios covering mild to severe conditions. Three independent physicians established appropriate care levels based on clinical guidelines. Researchers generated nearly 1,000 responses by varying patient demographics, test results, and family comments, then compared recommendations to medical standards. ChatGPT Health under-triaged 51.6% of cases requiring immediate hospitalization, advising users to stay home or schedule routine appointments instead. While the platform performed adequately with textbook emergencies like strokes and severe allergic reactions, it struggled with complex presentations, including an asthma case where it recommended waiting despite identifying respiratory failure warning signs. These findings raise significant safety concerns given that over 40 million people daily seek health advice from ChatGPT.
Read at www.theguardian.com