The Reason Foundation submitted a public comment letter to the U.S. Department of Health and Human Services (HHS) on February 23, 2026, addressing significant barriers to the adoption of artificial intelligence (AI) in clinical care. In response to a request for information on accelerating AI's integration into healthcare, the non-profit organization identified regulatory uncertainty as a primary obstacle. This uncertainty complicates the distinction between regulated medical devices and unregulated Clinical Decision Support (CDS) software, leading developers to limit the functionality of their tools to avoid stringent regulations. The foundation argued that this cautious approach diminishes AI's potential to improve patient outcomes.
Regulatory frameworks surrounding CDS software, designed to assist clinicians by providing alerts and risk assessments based on patient data, have been ambiguous and inconsistent. The FDA’s evolving guidance has redefined the boundary between what constitutes a medical device and what remains a supportive tool, causing developers to strip features that could significantly improve clinical decision-making. For instance, a CDS tool intended to detect early signs of sepsis may offer multiple treatment options without ranking them, even when some are far more critical than others. This cautious design leads to less actionable insights, thereby undermining the very purpose of AI technology in healthcare.
The Reason Foundation's letter references the 21st Century Cures Act, which aimed to facilitate the development of non-device CDS tools that enhance rather than replace professional judgment. However, the FDA's recent interpretations have tightened this boundary, creating an unpredictable regulatory environment. The foundation points out that the FDA's shifting positions from 2022 to early 2026 have left developers facing increased scrutiny without corresponding clarity, which could stifle innovation. Former FDA Commissioner Scott Gottlieb and Senator Bill Cassidy have publicly questioned these interpretations, calling for evidence-based justifications that the FDA has yet to provide.
Despite the FDA’s January 2026 guidance, which attempts to clarify certain distinctions, barriers remain. The regulations still rely heavily on the agency’s subjective interpretation, leaving developers uncertain about compliance. This ambiguity disproportionately affects smaller firms that often lack the resources to navigate the regulatory landscape effectively. As the FDA continues to tighten the reins on the types of features that qualify as non-device CDS, many crucial AI applications, such as those for early deterioration detection, run the risk of being classified as medical devices, further deterring investment.
In addition to regulatory hurdles, the Reason Foundation highlights an accountability gap in the healthcare ecosystem that complicates the implementation of AI solutions. Questions remain about who is responsible for clinician training, incident reporting, and ongoing tool validation. This vacuum hampers deployment, as hospitals may hesitate to adopt innovative AI tools even when evidence suggests substantial improvements in clinical efficiency and accuracy. The foundation urges HHS to establish a clear framework delineating these responsibilities, which could improve the landscape for AI integration.
The foundation urges HHS to direct the FDA to codify a comprehensive safe harbor that aligns with the Cures Act’s definitions to clarify the regulatory path for AI in healthcare. It also recommends revising existing CMS regulations to create a voluntary framework that clarifies responsibilities among developers, hospitals, and clinicians. Such steps aim to foster a more predictable and competitive environment that prioritizes patient-centered solutions over regulatory caution.
As the healthcare sector grapples with these challenges, the Reason Foundation’s comments underscore the critical need for regulatory reform to unlock the full potential of AI technologies. Their proposals aim not only to facilitate innovation but also to ensure that AI tools can be effectively integrated into clinical workflows, ultimately improving patient care and outcomes. The evolution of AI in healthcare will depend significantly on the government’s ability to navigate these complex regulatory landscapes while promoting a culture of innovation.