Surgical robots now learn by watching surgeons

An ‘almost magical’ innovation has nudged surgical robotics closer to autonomy.

Hailed as a ‘significant leap’, a robot has successfully performed surgical tasks by learning from videos of human surgeons and mirroring their skills precisely.

In collaboration with Stanford University, Johns Hopkins University researchers demonstrated the success of imitation learning for surgical robots.

For the first time, the robot executed three critical tasks – needle manipulation, tissue lifting and suturing – on par with seasoned surgeons.

These findings were unveiled at the Conference on Robot Learning in Munich.

Axel Krieger, assistant professor of mechanical engineering at Johns Hopkins, said: ‘This is a significant leap toward autonomous surgical robots. The model processes camera input and predicts the robotic actions required for surgery. It’s almost magical.’

The widely used da Vinci Surgical System served as the training ground.

Researchers leveraged hundreds of wrist-camera videos from the robots used in surgeries around the globe.

These videos, recorded by surgeons worldwide, are used for post-operative analysis and then archived.

With 7,000 such robots and over 50,000 trained surgeons worldwide, this archive provided a treasure trove of training data.

While the da Vinci system is widely used, researchers say it’s notoriously imprecise. However, the team found a way to make the flawed input work.

The key was training the model to perform relative movements rather than absolute actions, which the robot records inaccurately.

Unlike conventional methods, which require precise hand-coding of robotic movements, the team’s model combined imitation learning with the same machine learning architecture that underpins ChatGPT.

However, where ChatGPT works with words and text, this model speaks in kinematics, breaking down the angles of robotic motion into maths.
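As a rough illustration of that idea – not the team’s actual code – a policy of this kind can be sketched in PyTorch: a vision encoder turns each camera frame into tokens, and a transformer turns those tokens into a short chunk of kinematic actions. Every name and dimension below is an illustrative assumption.

```python
# Illustrative sketch only -- not the researchers' actual model.
# A transformer policy mapping camera frames to chunks of kinematic actions.
import torch
import torch.nn as nn

class SurgicalPolicy(nn.Module):
    def __init__(self, action_dim=7, d_model=256, chunk=16):
        super().__init__()
        # Toy CNN encoder: turns a 224x224 RGB frame into 7x7 visual tokens.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=8), nn.ReLU(),
            nn.Conv2d(32, d_model, kernel_size=4, stride=4), nn.ReLU(),
        )
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=4,
        )
        # One learned query token per predicted action step.
        self.queries = nn.Parameter(torch.randn(chunk, d_model))
        # Head emits a kinematic action per step, e.g. a 6-DoF wrist-pose
        # delta plus a gripper value (hence action_dim=7).
        self.head = nn.Linear(d_model, action_dim)

    def forward(self, image):                      # image: (B, 3, 224, 224)
        feats = self.encoder(image)                # (B, d_model, 7, 7)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, 49, d_model)
        q = self.queries.expand(image.size(0), -1, -1)
        out = self.transformer(torch.cat([tokens, q], dim=1))
        return self.head(out[:, -q.size(1):])      # (B, chunk, action_dim)
```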

Crucially, the robot learned relative motions – small steps from wherever its tools currently are – which remain accurate even when the da Vinci’s absolute positioning drifts.
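To make the distinction concrete, here is a minimal sketch with made-up numbers, showing how a constant error in the robot’s reported pose corrupts absolute training labels but cancels out of relative ones:

```python
# Illustrative sketch only. A constant error in the robot's reported pose
# corrupts absolute labels, but subtracts away in relative (delta) labels.
import numpy as np

poses = np.array([[10.0, 5.0, 2.0],   # true gripper positions (mm)
                  [10.5, 5.2, 2.0],
                  [11.0, 5.5, 2.1]])
bias = np.array([3.0, -2.0, 1.0])     # hypothetical constant position error

absolute_labels = poses + bias                    # wrong at every step
relative_labels = np.diff(poses + bias, axis=0)   # bias cancels out

print(relative_labels)   # identical to np.diff(poses, axis=0):
# [[0.5 0.2 0. ]
#  [0.5 0.3 0.1]]
```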

Lead author Ji Woong ‘Brian’ Kim, a postdoctoral researcher at Johns Hopkins, said: ‘All we need is image input, and then this AI system finds the right action. We find that even with a few hundred demos, the model can learn the procedure and generalise to new environments it hasn’t encountered.’

Incredibly, the system displayed adaptability beyond its training. If it dropped a needle, the robot could retrieve it autonomously without being explicitly programmed for such scenarios.
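Under the hood, imitation learning of this kind typically reduces to supervised learning: regress the surgeon’s recorded action from the camera image. A minimal behavioural-cloning loop – reusing the hypothetical SurgicalPolicy sketched above, with synthetic stand-in data – might look like this:

```python
# Illustrative behavioural-cloning loop, reusing the hypothetical
# SurgicalPolicy sketched above. Real training would draw (frame, action)
# pairs from surgeons' recorded videos and kinematics.
import torch

frames = torch.randn(8, 3, 224, 224)     # synthetic stand-in camera frames
expert_actions = torch.randn(8, 16, 7)   # 16-step chunks of 7-D actions

policy = SurgicalPolicy()
optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-4)

for step in range(100):
    pred = policy(frames)
    # Regress predicted action chunks onto the surgeon's demonstrated ones.
    loss = torch.nn.functional.l1_loss(pred, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```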

Previously, developing robotic procedures required extensive manual coding, often spanning years.

Imitation learning accelerates this process to days, enabling robots to learn diverse surgeries and reducing medical errors.

The researchers aim to extend this approach to complete surgeries autonomously.

The study’s co-authors include Johns Hopkins researchers Samuel Schmidgall, Anton Deguet and Marin Kobilarov, as well as Stanford’s Tony Z Zhao and Chelsea Finn.

Axel Krieger added: ‘What is new here is we only have to collect imitation learning of different procedures, and we can train a robot to learn it in a couple of days. It allows us to accelerate to the goal of autonomy while reducing medical errors and achieving more accurate surgery.’

Published: 11.12.2024