
"Zack said Open Evidence, which is used by 400,000 doctors in the US to summarize patient histories and retrieve information, trained its models on medical journals, the US Food and Drug Administration's labels, health guidelines and expert reviews. Every AI output is also backed up with a citation to a source. Earlier this year, researchers at University College London and King's College London partnered with the UK's NHS to build a generative AI model, called Foresight."
"The model was trained on anonymized patient data from 57 million people on medical events such as hospital admissions and Covid-19 vaccinations. Foresight was designed to predict probable health outcomes, such as hospitalization or heart attacks. 'Working with national-scale data allows us to represent the full kind of kaleidoscopic state of England in terms of demographics and diseases,' said Chris Tomlinson, honorary senior research fellow at UCL, who is the lead researcher of the Foresight team."
The Foresight project has since been paused while a data protection complaint is reviewed. More broadly, researchers suggested reducing medical bias by identifying datasets that should not be used for training and by training on diverse, more representative health datasets. Google said it took model bias extremely seriously and was developing privacy techniques to sanitise sensitive datasets and create safeguards against bias and discrimination.