A new wave of regulations governing the use of artificial intelligence (AI) in healthcare has taken effect as the new year unfolds, igniting discussions on the appropriate application and oversight of this rapidly advancing technology. With the healthcare sector emerging as one of the most promising arenas for AI, the implications for both patients and providers are significant.
AI tools are increasingly used for quick access to medical information and decision-making support, appealing to many patients frustrated with the traditional U.S. healthcare system. A recent Gallup poll indicates that 70% of Americans believe the healthcare system has major problems or is in crisis. “Medicine has been the top of my mind, and AI too, as the top of medicine’s mind, because I’ve been sick. And AI has given me more answers than anything because I’ve had to wait three months to see doctors,” said Kate Large, a patient who frequently uses AI for health research.
This experience is echoed by many; according to OpenAI, over 40 million users query ChatGPT daily for healthcare-related advice. The company recently introduced ChatGPT Health, a feature aimed explicitly at handling these inquiries. “I feel like it’s not very different from before AI, when people would be doing their own online searches anyway,” noted Dr. Lailey Oliva, an internal medicine physician at Sutter Health.
However, AI’s presentation of information often carries a deceptive aura of authority and empathy, leading patients to trust its outputs more than they might with traditional sources of medical advice. “There’s a big anthropomorphization of these language models,” explained Nitya Thakkar, a third-year Ph.D. student at Stanford studying AI applications in healthcare. “When the AI speaks to you with empathy and uses ‘I’ statements, you start to interface with it like it’s a real doctor.” This dynamic raises critical concerns over the potential consequences of misplaced trust.
Recognizing these risks, California and other states have enacted legislation aimed at enhancing transparency and accountability in AI applications. Assembly Bill (AB) 489, for instance, prohibits developers from suggesting that their AI systems offer professional medical advice, restricting the use of terms like “doctor” or “M.D.” that could mislead users regarding the qualifications of the technology. “It’s a good way to remind people that these are models; they’re all working on your computer, they’re not real people, and they’re not doctors,” Thakkar emphasized.
The debate surrounding AI’s role in healthcare parallels ongoing discussions about professional titles among medical practitioners. Large recounted her confusion during a virtual visit when she encountered a nurse with a Ph.D. who was presented as “Dr.” This incident highlights a broader issue in healthcare, where clarity about titles can shape patient trust and perceptions of authority. In California, courts have ruled that the use of “doctor” by non-M.D.s can constitute misleading commercial speech, further complicating the conversation around AI.
In addition to regulating how AI represents itself, California legislators have also mandated that developers disclose the data used to train their AI systems. This requirement aims to bolster accountability for AI’s clinical evaluations and recommendations. Michelle Mello, a professor at Stanford focused on responsible AI use in healthcare, noted, “It’s one thing to have AI give you bad investment advice, or you don’t get hired because of an AI hiring system, and that’s unfair, but we’re talking about uses of AI that could kill you.”
These state-level efforts, spurred in part by anecdotal reports of AI harming vulnerable users, now face federal pushback. President Donald Trump’s administration has criticized stringent state regulations, arguing they burden AI developers with a fragmented regulatory landscape. “AI developers really hate the idea of state regulation of their products because it subjects them to potentially 50 different regulatory regimes,” Mello added.
On January 6, the Food and Drug Administration (FDA) announced it would also ease oversight of digital health products, seeking to keep pace with Silicon Valley’s rapid innovation. However, advocates for stringent regulations argue that patient safety must remain a top priority. “As a patient and user of AI, it’s extremely important to get the facts right,” Large asserted. “For me, it won’t completely replace traditional medicine, but it’s a tool that guides me in advocating for my medical care.”
As AI continues to permeate the healthcare industry, the focus is shifting from its potential applications to how it should be regulated. This emerging landscape presents both opportunities for efficiency and serious concerns about accuracy and trust, demanding proactive measures to safeguard patient welfare.