AI predicts beer consumption from knee X-rays

New research warns of ‘shortcut learning’ risks as it exposes a significant flaw in using artificial intelligence (AI) for medical imaging.

In a cautionary study, US researchers suggest that AI models may produce highly accurate yet fundamentally misleading predictions.

The findings, published in Scientific Reports, highlight how deep learning algorithms can exploit unintended data patterns to draw conclusions without medical basis.

The authors write: ‘This case study shows how easily shortcut learning happens, its danger, how complex it can be and how hard it is to counter.’

The study raises serious concerns about the deployment of such models in clinical settings.

Researchers analysed over 25,000 knee X-rays from the National Institutes of Health-funded Osteoarthritis Initiative.

They found that based solely on knee radiographs, AI models could infer seemingly unrelated traits, such as a patient’s preference for refried beans or beer consumption.

Although these predictions lacked medical relevance, they demonstrated remarkable accuracy by uncovering hidden patterns within the data.

Dr Peter Schilling, the study’s senior author and an orthopaedic surgeon at Dartmouth Hitchcock Medical Center, explained: ‘While AI has the potential to transform medical imaging, we must be cautious. These models can see patterns humans cannot, but not all patterns they identify are meaningful or reliable. It’s crucial to recognise these risks to prevent misleading conclusions and ensure scientific integrity.’

The researchers attribute this issue to ‘shortcut learning’, in which AI algorithms rely on confounding variables – such as differences in X-ray equipment or clinical site markers – instead of meaningful medical features.

This phenomenon renders the AI’s results potentially deceptive, as it ‘cheats’ by using irrelevant but easily detectable patterns.

Brandon Hill, co-author and machine learning scientist at Dartmouth Hitchcock, said: ‘This goes beyond bias from clues of race or gender. We found the algorithm could even learn to predict the year an X-ray was taken. When you prevent it from learning one of these elements, it will instead learn another previously ignored. This danger can lead to some really dodgy claims, and researchers need to be aware of how readily this happens when using this technique.’
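The mechanism is easy to reproduce on toy data. The sketch below is entirely synthetic and illustrative (it is not the study’s model or data): a plain least-squares linear classifier stands in for a deep network, and a made-up ‘site’ marker stands in for a confounder such as the X-ray equipment used. The classifier looks highly accurate while the marker happens to track the label, then collapses once that spurious correlation is broken.

```python
# Minimal synthetic sketch of shortcut learning (invented data, not the
# study's): a linear classifier looks accurate because a spurious "site"
# marker tracks the label, then collapses once that correlation is broken.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n).astype(float)

# Weak "medical" signal: barely separates the two classes.
signal = y + rng.normal(0.0, 2.0, n)

# Confounder: the imaging site agrees with the label 90% of the time
# (e.g. one clinic happened to scan mostly one patient group).
site = np.where(rng.random(n) < 0.9, y, 1.0 - y)

def fit(X, y):
    # Least-squares linear classifier with an intercept column.
    A = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def accuracy(w, X, y):
    A = np.column_stack([X, np.ones(len(X))])
    return float((((A @ w) > 0.5) == (y > 0.5)).mean())

X = np.column_stack([signal, site])
w = fit(X[:1000], y[:1000])

# High accuracy while the shortcut is intact...
acc_shortcut = accuracy(w, X[1000:], y[1000:])

# ...much lower once the site marker no longer tracks the label.
X_broken = np.column_stack([signal[1000:], rng.permutation(site[1000:])])
acc_broken = accuracy(w, X_broken, y[1000:])

print(f"accuracy, shortcut intact: {acc_shortcut:.2f}")
print(f"accuracy, shortcut broken: {acc_broken:.2f}")
```

The point of the sketch is the evaluation, not the model: accuracy measured on data that shares the confound overstates what the classifier has actually learned, which is exactly the trap the researchers describe.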

The findings have broader implications for medicine and underscore the need for rigorous evaluation of AI models in medical research – shortcut learning undermines the credibility of AI-driven discoveries and poses risks of erroneous diagnoses and inappropriate treatment pathways.

Hill warned: ‘The burden of proof just goes way up when it comes to using models to discover new patterns in medicine. Part of the problem is our own bias. It is incredibly easy to fall into the trap of presuming that the model “sees” the same way we do. In the end, it doesn’t.’

He likens AI to an ‘alien intelligence’.

‘You want to say the model is “cheating”, but that anthropomorphises the technology. It learned a way to solve the task given to it, but not necessarily how a person would. It doesn’t have logic or reasoning as we typically understand it.’

The study underscores that traditional preprocessing and data augmentation methods cannot eliminate shortcut learning. Even when researchers blinded the AI to certain variables, it simply adapted by exploiting other subtle patterns.

Schilling and his colleagues caution that AI’s black-box nature demands extraordinary scrutiny when used for scientific discovery.

‘Deep learning was designed for prediction, not hypothesis testing. This means that discovery through a black-box tool like convolutional neural networks demands far greater proof than simply showing a model found correlations in the sea of data within an image,’ the study concluded.

Published: 29.01.2025