The Health Sector Coordinating Council (HSCC), through its Cybersecurity Working Group (CWG), has released early previews of its upcoming 2026 guidance on managing artificial intelligence cybersecurity risks. Recognizing the complexity of AI and the need to balance its opportunities against its challenges, the HSCC plans a phased rollout of resources focused on sound policies and best practices for responsible adoption.
The council announced a series of one-page summaries outlining five HSCC Cybersecurity Working Group workstreams on AI, offering an early look at the best practices and white papers the group plans to release in 2026. The forthcoming publications will address education and enablement, cyber operations and defense, governance, secure-by-design principles, and third-party risk and supply chain transparency.
The initial foundational publication, ‘AI in Healthcare: 10 Terms You Need to Know,’ is addressed in the current announcement.
Given the cybersecurity challenges and opportunities presented by AI, the HSCC CWG formed an AI Cybersecurity Task Group last October, composed of 115 healthcare organizations from across the sector, to consider how to prepare the industry with operational and organizational guidance. The Task Group recognized the complexity and associated risk of AI technology used in clinical, administrative, and financial health sector applications, and accordingly divided those issues into manageable workstreams, each covering a discrete functional area, while staying mindful of the interrelationships and interdependencies among those functions.
The effort is divided into five subgroups. The Education and Enablement subgroup develops common terminology for AI cybersecurity guidance, along with education and training programs that help diverse healthcare user communities build awareness and use AI appropriately within their operational environments. Its key focus areas include clear definitions and terminology, a stronger grounding in AI and machine learning fundamentals, and specific AI-related risks and mitigation strategies, so that organizations can better understand those risks and apply appropriate control measures.
The subgroup’s deliverables include a list of the top ten AI definitions, AI-assisted learning materials such as videos and infographics, and recommendations for relevant education and training courses. Expected outcomes include improved awareness of AI and machine learning terminology, a deeper understanding of how AI is used in healthcare, better comprehension of the risks involved in AI deployment and operations, and broader application of appropriate control practices.
The Cyber Operations and Defense subgroup is creating practical playbooks to help healthcare organizations prepare for, detect, respond to, and recover from AI-related cyber incidents. It focuses on identifying the requirements needed to conduct optimized AI-specific cybersecurity operations in healthcare environments. Its objectives include developing effective approaches for incident response and recovery, defining AI-driven threat intelligence processes with safeguards that support both clinical workflows and broader health operations, and establishing risk factors and operational guardrails for AI technologies beyond large language models, such as predictive machine learning systems and embedded device AI.
The subgroup is also working on developing a draft structure for a series of playbooks, populating initial sections with discussion-based content, and creating recommendations for stakeholder roles and processes to refine and mature these playbooks over time.
Key areas of focus include providing a structured framework covering the full lifecycle of AI-related cyber incidents, from preparation through recovery, while maintaining best practices for business continuity and regulatory compliance. The subgroup also seeks to establish clear governance and accountability mechanisms by integrating AI-specific risk assessments into existing cybersecurity frameworks and defining tailored response procedures for threats such as model poisoning, data corruption, and adversarial attacks.
Additional priorities include enabling continuous monitoring of AI systems, ensuring rapid containment and recovery of compromised models, and maintaining secure and verifiable model backups. The work also emphasizes the importance of resilience testing, cross-functional coordination between cybersecurity and data science teams, and continuous learning from incidents to strengthen future preparedness.
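To make the idea of secure and verifiable model backups concrete, the sketch below shows one minimal way an organization might record cryptographic digests of model artifacts and later verify a restored copy against them. The helper names and manifest format are illustrative assumptions, not part of the forthcoming HSCC playbooks.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute a SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_backup(model_path: Path, manifest_path: Path) -> None:
    """Append the model's digest to a backup manifest so a restored
    copy can later be checked against the recorded hash."""
    entry = {
        "file": model_path.name,
        "sha256": sha256_of(model_path),
        "backed_up_at": datetime.now(timezone.utc).isoformat(),
    }
    manifest = []
    if manifest_path.exists():
        manifest = json.loads(manifest_path.read_text())
    manifest.append(entry)
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_backup(model_path: Path, manifest_path: Path) -> bool:
    """Return True if the file's current digest matches a manifest
    entry recorded for that file name."""
    manifest = json.loads(manifest_path.read_text())
    current = sha256_of(model_path)
    return any(
        e["file"] == model_path.name and e["sha256"] == current
        for e in manifest
    )
```

In practice the manifest itself would need tamper protection (for example, a signature or write-once storage), which this sketch omits for brevity.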
The intended deliverables of the Cyber Operations and Defense subgroup include the AI Cyber Resilience and Incident Recovery Playbook, the AI-Driven Clinical Workflow Threat Intelligence Playbook, and the Cybersecurity Operations for AI Systems Playbook.
The Governance subgroup is developing a comprehensive framework for managing AI cybersecurity risks across the health sector enterprise, covering governance processes for the AI lifecycle, regulatory alignment, and AI-specific security and data management. The framework is intended to help healthcare organizations of all sizes manage the unique cybersecurity risks that AI introduces in clinical environments, ensuring that AI is governed securely and responsibly throughout its lifecycle.
The group’s work involves establishing formal governance processes that clearly define roles, responsibilities, and clinical oversight across the AI lifecycle. It also focuses on aligning AI governance controls with applicable legal and regulatory requirements, such as HIPAA and FDA regulations. Another key priority is identifying relevant standards and implementing AI-specific security and data controls, referencing established frameworks like the NIST AI Risk Management Framework.
The framework encourages healthcare organizations to maintain a complete inventory of all AI systems to better understand their functions, data dependencies, and security implications. It also introduces a five-level autonomy scale to classify AI tools and align the degree of human oversight with system risk.
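As a rough illustration of how an inventory entry and autonomy classification might be represented in practice, the Python sketch below pairs a hypothetical inventory record with a five-level autonomy enum. The level names, fields, and oversight mapping are assumptions for illustration; the HSCC framework's actual scale may define the levels differently.

```python
from dataclasses import dataclass, field
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative five-level autonomy scale (hypothetical labels)."""
    INFORMATIONAL = 1   # AI surfaces information only
    ASSISTIVE = 2       # AI suggests, a human decides
    CONDITIONAL = 3     # AI acts, a human approves each action
    SUPERVISED = 4      # AI acts, a human monitors and can override
    AUTONOMOUS = 5      # AI acts without routine human review

@dataclass
class AISystemRecord:
    """One entry in an enterprise inventory of AI systems."""
    name: str
    owner: str                      # accountable business or clinical owner
    purpose: str                    # clinical, administrative, or financial use
    data_dependencies: list[str] = field(default_factory=list)
    autonomy: AutonomyLevel = AutonomyLevel.ASSISTIVE

    def required_oversight(self) -> str:
        """Map autonomy level to a coarse oversight tier: higher
        autonomy warrants tighter human review and monitoring."""
        if self.autonomy >= AutonomyLevel.SUPERVISED:
            return "enhanced review"
        return "standard review"

record = AISystemRecord(
    name="triage-chatbot",
    owner="Emergency Dept. IT",
    purpose="administrative",
    data_dependencies=["appointment system", "patient portal"],
    autonomy=AutonomyLevel.SUPERVISED,
)
print(record.required_oversight())  # -> "enhanced review"
```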
The primary deliverable is a comprehensive guide featuring an AI Governance Maturity Model, designed to help organizations assess their current capabilities, identify gaps, and prioritize areas for improvement. The intended outcome is to support the safe, ethical, and effective deployment of AI technologies in healthcare.
The Secure by Design Medical Device subgroup defines and develops secure-by-design principles specifically for AI-enabled medical devices, fostering collaboration across engineering, cybersecurity, regulatory, and clinical teams. Its objective is to provide practical guidance and tools that empower medical device manufacturers and stakeholders to embed cybersecurity throughout the entire product lifecycle.
The work addresses unique AI security risks, including emerging threats such as data poisoning, model manipulation, and drift exploitation, aligning mitigation strategies with leading regulatory frameworks such as U.S. FDA guidance, the NIST AI Risk Management Framework, and CISA recommendations. The subgroup also fosters cross-functional collaboration across engineering, cybersecurity, regulatory affairs, quality assurance, and clinical teams to ensure AI-specific risks are integrated into medical device risk management and development processes. Additionally, it advances transparency and supply chain security by supporting the integration of AI Bill of Materials (AIBOM) and Trusted AI BOM (TAIBOM), enhancing visibility, traceability, scalability, and trust across the AI supply chain.
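To illustrate the kind of transparency an AIBOM is meant to provide, the sketch below shows a hypothetical AIBOM entry for a single AI component of a medical device. The field names are illustrative assumptions only; AIBOM and TAIBOM schemas are still emerging, and the formats the subgroup ultimately supports may differ.

```python
import json

# Hypothetical AIBOM entry for one AI component of a medical device.
# Field names are illustrative, not a standardized AIBOM/TAIBOM schema.
aibom_entry = {
    "component": "sepsis-risk-model",
    "version": "2.3.1",
    "model_type": "gradient-boosted trees",
    "training_data": [
        {"dataset": "icu-vitals-2019", "provenance": "internal", "phi": True},
    ],
    "dependencies": [
        {"name": "scikit-learn", "version": "1.4.2"},
    ],
    "artifact_hash": "sha256:...",  # integrity anchor for the shipped model
    "known_limitations": ["drift under changed vitals-monitor firmware"],
}

print(json.dumps(aibom_entry, indent=2))
```

The value of such an entry is that downstream integrators and hospital security teams can trace what data and dependencies sit behind a model, much as a conventional SBOM does for software components.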
The intended deliverables include a set of AI-specific security practices spanning the entire AI lifecycle, designed to help organizations of all sizes securely integrate AI into medical device design. Key outputs are AI Secure by Design guidance, providing actionable recommendations for embedding security from the outset; an AI Security Risk Taxonomy to identify, categorize, and assess AI-related security risks; functional role implementation briefs offering tailored guidance for stakeholders such as developers, product managers, and security teams to operationalize AI security practices; and educational and awareness materials to build organizational understanding and promote a security-first mindset in AI development.
Lastly, the Third-Party AI Risk and Supply Chain Transparency subgroup aims to strengthen security, trust, and resilience in healthcare supply chains by improving visibility into third-party AI tools, establishing governance and oversight policies, and standardizing procurement, vendor vetting, and lifecycle management. It focuses on enhancing visibility and transparency, establishing governance and oversight, managing cyber and data risks, implementing contractual and legal safeguards, overseeing lifecycle risk management, promoting shared responsibility, and encouraging ethical and responsible AI use.
Its key activities include identifying, tracking, and monitoring third-party AI tools and supply chains; defining accountable policies, governance boards, and approval pathways; and evaluating third-party AI systems for security, privacy, and bias risks in alignment with frameworks such as the NIST AI Risk Management Framework, HICP, and HIPAA. The subgroup also standardizes procurement, vendor vetting, monitoring, and end-of-life planning; provides model contract and business associate agreement clauses for data use, PHI handling, and breach reporting; fosters collaboration between healthcare organizations and AI vendors; and encourages bias testing, fairness, and human oversight in AI adoption.
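As a loose sketch of how a vetting outcome might be captured, the example below models a hypothetical third-party AI assessment record covering a few of the criteria named above. The field names and gap logic are assumptions for illustration, not a published HSCC checklist.

```python
from dataclasses import dataclass

@dataclass
class VendorAIAssessment:
    """Hypothetical record of a third-party AI vetting review."""
    vendor: str
    tool: str
    handles_phi: bool               # does the tool touch protected health info?
    baa_in_place: bool              # business associate agreement signed
    breach_reporting_clause: bool   # contractual breach-notification terms
    bias_testing_evidence: bool     # vendor supplied bias/fairness test results
    eol_plan_documented: bool       # end-of-life / decommissioning plan

    def open_gaps(self) -> list[str]:
        """List unmet criteria; PHI handling without a BAA is the
        most severe gap under HIPAA."""
        gaps = []
        if self.handles_phi and not self.baa_in_place:
            gaps.append("PHI handled without a signed BAA")
        if not self.breach_reporting_clause:
            gaps.append("no contractual breach-reporting clause")
        if not self.bias_testing_evidence:
            gaps.append("no evidence of bias testing")
        if not self.eol_plan_documented:
            gaps.append("no end-of-life plan")
        return gaps
```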
The intended deliverables and outcomes include guidance that reduces systemic exposure to hidden AI risks across layered vendor supply chains; provides scalable tools and best practices for governance, risk, and compliance teams; elevates patient safety, data privacy, and trust through accountability and transparency; and ensures alignment with evolving regulatory requirements and global AI standards such as NIST, FDA, ISO, and IMDRF.
The HSCC CWG calls upon healthcare organizations to adopt these best practices, share guidance across teams, and engage with the council to shape the future of sector-wide AI governance and cybersecurity. The goal is to ensure that healthcare innovation is matched by a steadfast commitment to patient safety, data privacy, and operational resilience.
These workstreams have made substantial progress over the past several months and, beginning in January, will each publish their guidance documents in succession through the first quarter of next year.
