The misuse of AI-powered chatbots in clinical settings has been ranked the most significant health technology hazard of 2026, according to an annual report by ECRI, the Pennsylvania-based nonprofit patient safety organization.
Tools such as ChatGPT, Claude, and other applications built on large language models are increasingly used by healthcare professionals, staff, and patients alike. But ECRI warns that these systems are not validated for medical use and can produce misleading information that may contribute to patient harm.
“While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals,” said Dr. Marcus Schabacker, ECRI’s president and CEO. “Realizing AI’s promise while protecting people requires disciplined oversight, detailed guidelines, and a clear-eyed understanding of AI’s limitations.”
ECRI notes that although chatbots are often used for convenience, they are not designed to reason through medical context and can confidently generate incorrect or fabricated answers. In one test, a chatbot provided unsafe advice about the placement of an electrosurgical return electrode — an error that could result in serious burns if followed.
The report also highlights concerns about how chatbots may perpetuate bias and worsen health disparities, depending on the data used in their training. “If healthcare stakeholders are not careful, AI could further entrench the disparities that many have worked for decades to eliminate from health systems,” Schabacker said.
The Top 10 Health Technology Hazards for 2026, in ranked order, are:
– Misuse of AI chatbots in healthcare
– Unpreparedness for a “digital darkness” event, or a sudden loss of access to electronic systems and patient information
– Substandard and falsified medical products
– Recall communication failures for home diabetes management technologies
– Misconnections of syringes or tubing to patient lines, particularly amid slow ENFit and NRFit adoption
– Underutilizing medication safety technologies in perioperative settings
– Inadequate device cleaning instructions
– Cybersecurity risks from legacy medical devices
– Health technology implementations that prompt unsafe clinical workflows
– Poor water quality during instrument sterilization
ECRI encourages health systems to adopt AI oversight policies, train clinicians in responsible AI use, and regularly review chatbot performance. It also recommends that patients and providers verify chatbot responses with trusted medical sources.