Surgeons Question AI Tools as Allegations of Patient Harm Rise

AI-powered surgical tools are becoming an increasingly common fixture in modern operating rooms, but recent scrutiny has raised concerns about their safety and efficacy. Investigations and multiple lawsuits are prompting medical experts to rethink the role of artificial intelligence in surgical procedures. While these tools are designed to assist human surgeons, a growing number of reports suggest they may be injuring patients instead.

According to a report by Reuters, at least 1,357 AI-integrated medical devices are currently authorized by the U.S. Food and Drug Administration (FDA), a number that has doubled since 2022. Among them is the TruDi Navigation System, manufactured by Johnson & Johnson, which uses a machine-learning algorithm to guide ear, nose, and throat specialists during surgery. Other AI-assisted devices focus on enhancing vision, addressing longstanding challenges of traditional laparoscopic surgery: smoke that obscures the surgical field and two-dimensional images that complicate depth perception. These tools aim to provide “crystal-clear views of the operative field,” improving surgical precision.

Despite these advancements, an alarming number of allegations have surfaced claiming that various AI surgical tools have actively harmed patients. The FDA has reportedly received at least 100 unverified reports of malfunctions and adverse events related to the TruDi device, many involving the AI misreporting the location of surgical instruments. In one case, a patient was left with cerebrospinal fluid leaking from their nose; in another, a surgeon mistakenly punctured the base of a patient’s skull.

The risks extend beyond isolated incidents. Errors by TruDi’s AI have allegedly contributed to serious complications, including strokes resulting from injuries to major arteries. In at least one case, a surgeon reportedly injured a carotid artery due to misleading information from the AI, leading to a blood clot and a subsequent stroke, as detailed by Futurism.

Although the FDA’s malfunction reports do not establish the underlying causes of these mishaps, they suggest a potential pattern of risk associated with AI technology. And the TruDi is not alone in facing scrutiny: the Sonio Detect, which analyzes prenatal ultrasound images, has been criticized for faulty algorithms that misidentify fetal structures, and Medtronic has faced allegations that its AI-assisted heart monitors failed to detect abnormal rhythms or pauses in patients.

According to research published in JAMA Health Forum, at least 60 AI-assisted medical devices have been linked to 182 FDA product recalls. Notably, 43% of those recalls occurred within the first year after the devices received FDA approval, raising concerns that the approval process may be missing early performance failures of AI technologies.

Despite these challenges, there is a glimmer of hope. Experts suggest that stronger premarket clinical testing requirements and better postmarket surveillance could help catch and reduce device errors earlier. As the medical community grapples with the implications of AI in healthcare, the focus will likely shift toward ensuring patient safety while still harnessing the technology’s potential benefits in the operating room.