
AI’s Role in Medicine Faces Urgent Regulation as 1 in 3 Americans Turn to Chatbots for Care

One in three Americans now turn to AI chatbots for healthcare guidance, prompting urgent calls for regulation as New York and California propose sharply different oversight measures.

Last month, researchers successfully manipulated an AI-driven drug-prescription service into tripling an opioid dose and labeling methamphetamine as safe. This alarming incident prompted New York lawmakers to introduce legislation treating clinical AI as the unauthorized practice of medicine, which could make it illegal for AI to offer even basic medical guidance. In contrast, California has adopted a more measured approach, enacting a law that requires informing patients when AI is used in their care.

As states grapple with varying regulatory frameworks for AI in healthcare, millions of Americans are not waiting for consensus. Recent data reveals that one in three Americans now rely on AI chatbots for symptom diagnosis and care direction, a figure that has doubled over the past year. In essence, AI is already playing a role in clinical decision-making.

From my perspective as an emergency medicine physician across various healthcare settings, the persistent issue of unmet medical needs is striking. Patients frequently face challenges such as running out of essential medications or being unable to schedule timely appointments with specialists. For instance, a diabetic patient may go months without seeing an endocrinologist, and a urinary tract infection could escalate to a kidney infection due to delayed treatment. This situation turns emergency rooms into the default option for care that is otherwise inaccessible, resulting in significant human suffering.

Artificial intelligence has the potential to transform this dire reality. It can streamline access to care; for example, women should be able to refill birth control prescriptions without needing an in-person appointment. Similarly, patients suffering from common conditions like cold sores or yeast infections should not have to wait days for a medical callback. In many parts of the globe, such care is readily available without prescriptions, and AI could facilitate similar access for American patients, provided appropriate safety standards are implemented.

In fact, the most ambitious initiatives in this realm are progressing more rapidly than many realize. The federal government is currently soliciting private-sector proposals to develop AI systems that independently manage heart failure, a condition for which only 1% of patients receive the recommended treatment and whose five-year mortality exceeds 50%.

AI’s potential to significantly broaden access to medical care is promising, if not revolutionary. Most Americans are not choosing between AI and their trusted family physician; rather, they are often forced to choose between AI and no care at all. Given barriers like cost and physician shortages, patients deserve better options, and AI represents a unique opportunity to provide scalable assistance.

This is why I recently joined a company focused on using AI to democratize healthcare access. This decision was not made lightly; there are valid concerns regarding the deployment of such powerful technology among vulnerable populations without adequate safeguards. However, the approach under consideration in New York is not the solution. Physicians and policymakers must not remain passive while patients turn to AI to fill significant gaps in our healthcare system. We need regulation that is robust, enforceable, and keeps pace with the rapid advancements in technology.

The federal government has already begun to shape this evolving landscape. Earlier this year, the Food and Drug Administration (FDA) updated its software guidance, allowing AI tools to operate with reduced oversight when they assist doctors. Under this revised framework, software that enables doctors to independently evaluate AI recommendations falls outside the FDA’s medical device regulations. A relevant example would be software that alerts a physician to dangerous drug interactions before writing a prescription.

However, this exemption only applies to AI systems that involve a physician. There is no similar carve-out for AI that interacts directly with patients or makes recommendations in urgent situations. Consequently, this technology is presumed to be fully regulated, although the government has yet to provide clarity on this matter. Establishing federal regulations for rapidly evolving technologies is inherently difficult, and the FDA’s caution is understandable. Nonetheless, it leads to a paradox where clinically autonomous AI is subject to the least regulation.

In this regulatory vacuum, states have taken a variety of approaches. Some, like Utah, Arizona, and Texas, are developing frameworks to accelerate AI deployment in healthcare. Others, including New York and California, are working to limit its application. This scenario exemplifies the “laboratories of democracy” model, allowing for state-level experimentation to inform federal policy. Yet, 50 competing regulations cannot serve as a sustainable solution for such a consequential technology. Patients need fundamental protections when using clinical AI, regardless of where they reside, and companies developing these tools must adhere to uniform safety standards.

The regulatory framework we require mirrors what the FDA already implements: necessitating independent, third-party verification of safety and efficacy before deploying clinical AI systems; mandating adversarial security assessments as part of the approval process; and establishing a federal baseline that states can exceed but not fall below. Additionally, a clear path to accountability must exist when AI results in patient harm. Adaptations of long-standing medical malpractice principles could provide guidance in this area.

Many believe that regulation impedes technological progress, but historical examples suggest otherwise. Federal deposit insurance bolstered public trust in banks, while federal safety regulations made commercial aviation the safest mode of mass transit.

Clinical AI requires a similar foundation, and the urgency for action is immediate — it is already in the hands of patients and evolving faster than any technology we have attempted to govern. The individuals who stand to benefit most from AI are those who may also suffer the greatest losses if we fail to establish effective regulations.

Written By
Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved.