
"Asking a general-use chatbot for health help used to seem like a shot in the dark-just two years ago, a study found that ChatGPT could diagnose only 2 in 10 pediatric cases correctly. Among Google Gemini's early recommendations were eating one small rock a day and using glue to help cheese stick to pizza. Last year, a nutritionist ended up hospitalized after taking ChatGPT's advice to replace salt in his diet with sodium bromide."
""When talking about something designed specifically for health care, it should be trained on health care data,"says Torrey Creed, an associate professor of psychiatry researching A.I. at the University of Pennsylvania. This means that a chatbot shouldn't have the option to pull from unreliable sources like social media. The second difference, she says, is ensuring that users' private data isn't sold or used to train models. Chatbots created for the health care sector are required to be HIPAA compliant."
General-purpose chatbots have produced unsafe and inaccurate medical guidance, including low diagnostic accuracy and harmful recommendations. A.I. companies are releasing health-specific chatbots for consumers and clinicians, including services that can connect to medical records and are already in use in hospitals. Health-focused bots are intended to be trained on health care data, restricted from drawing on unreliable sources, and designed to keep patient data from being sold or used for model training. Regulatory and technical safeguards, such as HIPAA compliance and robust privacy settings, are emphasized as ways to improve accuracy and protect consumer information.
Read at Slate Magazine