Researchers have unveiled a ground-breaking application of machine learning set to transform pelvic fracture surgery.
The innovation promises to enhance the treatment of injuries frequently incurred in car accidents.
Thanks to a collaborative effort between Johns Hopkins University's Whiting School of Engineering and School of Medicine in the US, researchers have introduced a cutting-edge approach called Pelphix, which utilises surgical phase recognition (SPR) powered by machine learning.
SPR identifies distinct stages within a surgical procedure, facilitating insights into workflow efficiency, surgical team proficiency, error rates and more.
Presented at the 26th International Conference on Medical Image Computing and Computer-Assisted Intervention in Vancouver, Pelphix aims to optimise pelvic fracture surgeries.
Benjamin Killeen, a member of the research team from the Department of Computer Science, highlighted the potential for reducing radiation exposure and shortening procedure durations via this novel approach.
The novelty of Pelphix, he says, lies in its integration of SPR into X-ray-guided procedures.
Typically, SPR analyses full-colour endoscopic videos, neglecting X-ray imaging despite its prevalence in procedures such as orthopaedic surgery, interventional radiology and angiography.
To bridge this gap, the team built a novel training dataset, using synthetic data and deep neural networks to simulate surgical workflows and X-ray sequences.
Killeen emphasised their simulation’s fidelity to actual surgical scenarios, enabling the training of a machine learning-powered SPR algorithm specifically for X-ray sequences.
He said: ‘We simulated not only the visual appearance of images but also the dynamics of surgical workflows in X-ray to provide a viable alternative to real image sources – and then we set out to show that this approach transfers to the real world.’
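To give a flavour of the idea, surgical phase recognition over X-ray sequences can be framed as classifying each frame into a workflow phase. The sketch below is purely illustrative and is not the authors' Pelphix pipeline: the phase names, feature dimensions and the simple nearest-centroid classifier are all invented assumptions, standing in for features extracted from simulated X-ray frames.

```python
import numpy as np

# Hypothetical illustration of SPR as per-frame classification.
# Phase names and feature dimensionality are assumptions for this sketch.
rng = np.random.default_rng(0)
PHASES = ["wire_insertion", "screw_placement", "verification"]
DIM = 16  # assumed dimensionality of a per-frame feature vector

def simulate_sequence(phase_idx, n_frames=50):
    """Simulate feature vectors for one phase: a phase-specific mean
    plus noise, standing in for features of rendered X-ray frames."""
    mean = np.zeros(DIM)
    mean[phase_idx] = 5.0  # well-separated phase signature (assumed)
    return mean + rng.normal(scale=0.5, size=(n_frames, DIM))

# "Train": estimate one centroid per phase from simulated sequences.
centroids = np.stack(
    [simulate_sequence(i).mean(axis=0) for i in range(len(PHASES))]
)

def predict_phase(frame_features):
    """Nearest-centroid classification of a single frame."""
    dists = np.linalg.norm(centroids - frame_features, axis=1)
    return PHASES[int(np.argmin(dists))]

# Classify frames from a fresh simulated 'screw_placement' sequence.
test_seq = simulate_sequence(1)
preds = [predict_phase(f) for f in test_seq]
accuracy = float(np.mean([p == "screw_placement" for p in preds]))
print(f"accuracy on simulated sequence: {accuracy:.2f}")
```

Because the simulated phases here are cleanly separated, the toy classifier labels every frame correctly; real X-ray sequences are far noisier, which is why the researchers turned to deep networks and high-fidelity simulation.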
Validated through cadaver experiments, Pelphix demonstrated its viability for real-world application, paving the way for future algorithms to leverage these simulations for pretraining.
Moving forward, the team is gathering patient data for extensive validation.
Killeen added: ‘The next step in this research is to refine the workflow structure based on our initial results and deploy more advanced algorithms on large-scale datasets of X-ray images collected from patient procedures. In the long term, this work is a first step toward obtaining insights into the science of orthopaedic surgery from a big data perspective.’
The collaborative effort includes key figures such as Russell Taylor, Greg Osgood, Mehran Armand, Jan Mangulabnan and Han Zhang from various departments and labs within Johns Hopkins University.
This breakthrough promises to revolutionise surgical data science, emphasising the routine collection and interpretation of X-ray data.


