
"The use of artificial intelligence in healthcare could create a legally complex blame game when it comes to establishing liability for medical failings, experts have warned. The development of AI for clinical use has boomed, with researchers creating a host of tools, from algorithms to help interpret scans to systems that can aid with diagnoses. AI is also being developed to help manage hospitals, from optimising bed capacity to tackling supply chains."
"But while experts say the technology could bring myriad benefits for healthcare, they say there is also cause for concern, from a lack of testing of the effectiveness of AI tools to questions over who is responsible should a patient have a negative outcome. Prof Derek Angus, of the University of Pittsburgh, said: There's definitely going to be instances where there's the perception that something went wrong and people will look around to blame someone. The Jama summit on Artificial Intelligence, hosted last year by the Journal of the American Medical Association, brought together a panoply of experts including clinicians, technology companies, regulatory bodies, insurers, ethicists, lawyers and economists."
"The resulting report, of which Angus is first author, not only looks at the nature of AI tools and the areas of healthcare where they are being used, but also examines the challenges they present, including legal concerns. Prof Glenn Cohen from Harvard law school, a co-author of the report, said patients could face difficulties showing fault in the use or design of an artificial intelligence product. There could be barriers to gaining information about its inner workings, while it could also be challenging to propose a reasonable alternative design for the product or prove a poor outcome was caused by the AI system."
Read at www.theguardian.com