AI comes with significant ethical risks in paediatric surgery

Applying artificial intelligence (AI) to paediatric surgery comes with 'uniquely high' ethical stakes.

This is according to a new perspective article published in the World Journal of Pediatric Surgery, authored by researchers from the Division of Paediatric Surgery at Johns Hopkins All Children's Hospital, which examined the ethical challenges of AI in paediatric surgical care.

Among these challenges are small sample sizes, developmental variability and underrepresentation in large datasets, all of which increase the risk of bias and inaccurate predictions.

Concerns about privacy, cybersecurity, and the opaque ‘black box’ nature of deep learning systems further hinder clinical adoption.

The article concluded that the core principles of medical ethics must guide the responsible integration of AI into paediatric surgery, emphasising the importance of human dignity, transparency, accountability, fairness, and the preservation of trust in an era in which algorithms increasingly influence critical clinical decisions.

Given the challenges, comprehensive research is urgently required to establish robust ethical and governance frameworks for paediatric surgical AI.

The article evaluated applications ranging from AI-assisted informed consent tools to varying levels of autonomy in surgical robotics.

The authors proposed that technological advancement should be in line with established ethical standards to ensure that patient safety, transparency, and human-centred care remain the core principles of innovation.

The article structures its analysis around the four foundational principles of medical ethics: autonomy, beneficence, non-maleficence, and justice.

Autonomy
Families must be informed when AI aids in diagnosis, risk assessment, or surgical planning. AI tools can clarify medical terms during the consent process, improving understanding, but they should support, not replace, direct surgeon-family communication.

Beneficence and non-maleficence
AI should improve outcomes without causing harm. For example, intraoperative diagnostics can boost efficiency and reduce surgery time, but over-reliance without clinical oversight risks misdiagnosis or errors. When AI fails, accountability is crucial for clinicians, institutions, and developers alike.

Justice
Bias in paediatric datasets can worsen health inequalities. The authors highlight cybersecurity risks, the digital divide, and the need for explainable AI to maintain trust. AI should serve as 'augmented intelligence', not replace clinical judgment, and human oversight remains essential in paediatric surgery.

Without ethical grounding, advanced technology may erode trust between healthcare teams and families, the authors said.

Published: 28.04.2026