AI helps surgeons navigate robotic breast cancer surgery

A multidisciplinary Korean research team has created an AI model aimed at assisting surgeons in performing robot-assisted nipple-sparing mastectomy (RNSM). The model provides expert-level guidance on skin flap dissection planes in real time.

The study, led by Professor Lee Jee-a of Uijeongbu Eulji Medical Centre and supervised by Professor Park Hyung-seok of Yonsei University College of Medicine, marks the first known clinical use of deep learning as a real-time intraoperative navigation tool in robotic breast surgery.

The findings were published in Breast Cancer Research.

Researchers said: ‘Because of the lack of haptic function of the robotic surgical system, a surgical guide for the dissection of the skin flap could improve the postoperative complications and local recurrence rates of breast cancer.’

Robot-assisted mastectomy, performed with robotic arms as small as 8 mm, enables more precise incisions and better cosmetic results than open procedures.

However, limited tactile feedback and restricted visualisation pose significant challenges, especially for trainees and less-experienced surgeons.

Lee’s team trained a convolutional neural network (CNN) to tackle these issues, using 8,834 annotated video frames extracted from 10 RNSM procedures performed with the da Vinci Xi system between 2016 and 2020.

The deep learning model uses object detection algorithms, including EfficientDet, YOLO v5, and RetinaNet, to identify the optimal dissection boundaries.

Model performance was assessed using the Dice similarity coefficient (DSC) and the Hausdorff distance (HD).

The AI achieved a DSC of up to 0.828 and a median HD of under 10 mm, demonstrating high spatial accuracy compared to surgeon-drawn references. There were no intraoperative complications or local recurrences in the training cohort.
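The two metrics used for evaluation are standard in segmentation research. As an illustration only (this is not the authors’ evaluation code, and the mask and point-set representations here are assumptions), DSC and the symmetric Hausdorff distance can be sketched in a few lines of NumPy:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks.

    2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * intersection / total if total else 1.0

def hausdorff_distance(a_pts, b_pts):
    """Symmetric Hausdorff distance between two (N, 2) point sets.

    The largest distance from any point in one set to its nearest
    neighbour in the other set, taken in both directions.
    """
    # Pairwise Euclidean distances via broadcasting: shape (len(a), len(b))
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

A DSC of 0.828 therefore indicates substantial overlap between the AI-predicted dissection plane and the surgeon-drawn reference, while a median Hausdorff distance under 10 mm bounds the worst-case boundary deviation.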

Professor Park said: ‘Because breast surgery often impacts a woman’s self-esteem, demand for minimally invasive robotic procedures will continue to grow. We developed this technology to help young surgeons train more effectively.’

The system is designed to operate during the console phase of robotic mastectomy, where the lack of tactile input makes estimating skin flap thickness especially challenging.

Considering the clinical consequences of inadequate flap dissection – including flap necrosis and residual breast tissue – the AI tool could provide significant value in both surgical guidance and training.

Real-time video integration highlights the practicality of intraoperative AI support. Importantly, this marks the first documented case of DL-based visual guidance in robotic mastectomy.

The authors emphasised the potential impact on surgical education, especially in robotic and endoscopic breast surgery training, where the learning curve can be prohibitively steep.

They write: ‘The surgical guide can provide consistent and accurate training for RNSM and endoscopic surgery. It will be possible to apply the results of this study to the education and clinical practice of endoscopic breast surgery, which has been difficult for many surgeons to access easily due to the difficulty in achieving proficiency.’

The study acknowledged limitations related to dataset diversity.

The AI model was developed from surgeries performed by a single operator and labelled by only two experts, which may constrain its broader applicability. Nevertheless, the team is expanding its dataset via a multicentre prospective cohort, incorporating data from additional expert surgeons.

From a technical standpoint, future iterations will incorporate advanced object detection networks like CenterNet, YOLOv7, and Cascade R-CNN, along with plans for external validation through randomised controlled trials.

Published: 29.04.2025