Last month, researchers successfully manipulated an AI-driven drug prescription service, tricking it into tripling an opioid dose and labeling methamphetamine as safe. This alarming incident prompted New York lawmakers to introduce legislation that would treat clinical AI as the unauthorized practice of medicine, potentially making it illegal for AI to offer even basic medical guidance. In contrast, California has adopted a more measured approach, enacting a law that mandates informing patients when AI is used in their care.
As states grapple with varying regulatory frameworks for AI in healthcare, millions of Americans are not waiting for consensus. Recent data reveals that one in three Americans now rely on AI chatbots for symptom diagnosis and care direction, a figure that has doubled over the past year. In essence, AI is already playing a role in clinical decision-making.
As an emergency medicine physician who has worked across a range of healthcare settings, I am struck by the persistent problem of unmet medical need. Patients frequently face challenges such as running out of essential medications or being unable to schedule timely appointments with specialists. A diabetic patient may go months without seeing an endocrinologist, and a urinary tract infection can escalate into a kidney infection because treatment was delayed. This turns emergency rooms into the default option for care that is otherwise inaccessible, at the cost of significant human suffering.
Artificial intelligence has the potential to transform this dire reality. It can streamline access to care; for example, women should be able to refill birth control prescriptions without needing an in-person appointment. Similarly, patients suffering from common conditions like cold sores or yeast infections should not have to wait days for a medical callback. In many parts of the globe, such care is readily available without prescriptions, and AI could facilitate similar access for American patients, provided appropriate safety standards are implemented.
In fact, the most ambitious initiatives in this realm are progressing more rapidly than many realize. The federal government is currently soliciting private sector proposals to develop AI systems designed to independently manage heart failure events, a condition for which only 1% of patients receive the recommended treatment and whose mortality rate exceeds 50% over five years.
AI’s potential to significantly broaden access to medical care is promising, if not revolutionary. Most Americans are not choosing between AI and their trusted family physician; rather, they are often forced to choose between AI and no care at all. Given barriers like cost and physician shortages, patients deserve better options, and AI represents a unique opportunity to provide scalable assistance.
This is why I recently joined a company focused on using AI to democratize healthcare access. This decision was not made lightly; there are valid concerns regarding the deployment of such powerful technology among vulnerable populations without adequate safeguards. However, the approach under consideration in New York is not the solution. Physicians and policymakers must not remain passive while patients turn to AI to fill significant gaps in our healthcare system. We need regulation that is robust, enforceable, and keeps pace with the rapid advancements in technology.
The federal government has already begun to shape this evolving landscape. Earlier this year, the Food and Drug Administration (FDA) updated its software guidance, allowing AI tools to operate with reduced oversight when they assist doctors. Under this revised framework, software that enables doctors to independently evaluate AI recommendations falls outside the FDA’s medical device regulations. A relevant example would be software that alerts a physician to dangerous drug interactions before writing a prescription.
However, this exemption only applies to AI systems that keep a physician in the loop. There is no similar carve-out for AI that interacts directly with patients or makes recommendations in urgent situations. Such technology is therefore presumed to be fully regulated, yet the government has not clarified how those rules apply. Establishing federal regulations for rapidly evolving technologies is inherently difficult, and the FDA's caution is understandable. Nonetheless, the result is a paradox: the most clinically autonomous AI operates with the least practical oversight.
In this regulatory vacuum, states have taken a variety of approaches. Some, like Utah, Arizona, and Texas, are developing frameworks to accelerate AI deployment in healthcare. Others, including New York and California, are working to limit its application. This scenario exemplifies the “laboratories of democracy” model, allowing for state-level experimentation to inform federal policy. Yet, 50 competing regulations cannot serve as a sustainable solution for such a consequential technology. Patients need fundamental protections when using clinical AI, regardless of where they reside, and companies developing these tools must adhere to uniform safety standards.
The regulatory framework we require mirrors what the FDA already implements: necessitating independent, third-party verification of safety and efficacy before deploying clinical AI systems; mandating adversarial security assessments as part of the approval process; and establishing a federal baseline that states can exceed but not fall below. Additionally, a clear path to accountability must exist when AI results in patient harm. Adaptations of long-standing medical malpractice principles could provide guidance in this area.
Many believe that regulation impedes technological progress, but historical examples suggest otherwise. Federal deposit insurance bolstered public trust in banks, while federal safety regulations made commercial aviation the safest mode of mass transit.
Clinical AI requires a similar foundation, and the urgency for action is immediate — it is already in the hands of patients and evolving faster than any technology we have attempted to govern. The individuals who stand to benefit most from AI are those who may also suffer the greatest losses if we fail to establish effective regulations.