Gap between AI law and patient reality must be addressed

The gap between AI law and patients' real-world experience needs to be bridged, according to an article in the Journal of Medical Internet Research.

Author Anshu Ankolekar points out that although the European Union’s AI Act establishes a legal basis for transparency, the technical and clinical aspects of meaningful explanations remain largely undefined.

The report emphasises that as high-risk AI systems become common in medical imaging and diagnostics, patients are increasingly entitled to ask: ‘Why did the computer conclude this?’

However, the opaqueness of advanced algorithms often prevents even the most experienced clinicians from providing an answer that is both technically accurate and practically useful for the patient.

The analysis highlights several major obstacles that prevent current legal frameworks, such as the EU AI Act and GDPR, from improving patient care.

The most accurate AI models often operate with millions of parameters that are impossible for humans to fully trace. Forcing simpler, more explainable models could potentially sacrifice diagnostic accuracy, creating a direct conflict with patient safety.

Research indicates that incorrect AI suggestions can lead clinicians to misdiagnoses, regardless of their experience level. An explanation delivered by a clinician who has already deferred to an algorithm may not reflect an independent clinical assessment.

Between 22% and 58% of EU citizens report difficulty understanding health information. Providing technical detail on algorithmic logic often leads to cognitive overload rather than informed consent.

The article argues for moving away from a check-the-box compliance approach toward one focused on decision-relevant clarity.

Experts suggest that a truly useful patient-facing explanation must address what the system recommends, how confident it is, and what the known performance gaps are for specific populations.

To bridge this gap, the author calls for co-design partnerships, institutional support, and standards for comprehension.

She said: ‘The EU AI Act gives patients something they did not previously have: a legal basis for demanding transparency about AI systems influencing their care. But the right to explanation and the capacity to deliver one that patients can genuinely use are shaped by forces the law alone cannot govern: the opacity of high-performing models, the pressures of clinical practice, and the diversity of patient needs and literacy levels.

‘What the AI Act’s transparency requirements provide, beyond legal protection, is a shared standard to work toward. The right to explanation is an important starting point. What patients need now are answers they can use.’

Published: 30.04.2026