
"When Judith Miller had routine blood work done in July, she got a phone alert the same day that her lab results were posted online. So, when her doctor messaged her the next day that overall her tests were fine, Miller wrote back to ask about the elevated carbon dioxide and something called "low anion gap" listed in the report. While the 76-year-old Milwaukee resident waited to hear back, Miller did something patients increasingly do when they can't reach their health care team. She put her test results into Claude and asked the AI assistant to evaluate the data."
"Patients have unprecedented access to their medical records, often through online patient portals such as MyChart, because federal law requires health organizations to immediately release electronic health information, such as notes on doctor visits and test results. And many patients are using large language models, or LLMs, like OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini, to interpret their records."
Patients increasingly access their medical records through online portals and use large language models such as ChatGPT, Claude, and Gemini to interpret test results and visit notes. One older patient entered lab values into an AI assistant and felt reassured when the model did not flag alarming findings. Federal law requires health organizations to release electronic health information to patients immediately, expanding direct access. AI help can reduce anxiety and improve understanding, but chatbots can produce incorrect answers and may not keep sensitive medical data private. Confidence remains low: in a 2024 KFF poll, 56% of AI users said they were not confident that chatbot answers are accurate. LLM outputs also vary greatly depending on how they are prompted.
Read at www.npr.org