A recent call for reform in medical artificial intelligence (AI) oversight comes from W. Nicholson Price II, a scholar at the University of Michigan Law School. Price urges policymakers to reconsider the heavy reliance on clinicians to oversee AI in health care, particularly given the rapid adoption of these technologies. According to a recent study by the American Medical Association, two-thirds of health care professionals have begun incorporating AI into their work, yet many express concerns about the adequacy of current oversight mechanisms.
Price argues that the existing framework, which depends on a “human in the loop” model, places an unrealistic burden on individual clinicians. Under this approach, health care professionals must review and validate each AI recommendation, a task many feel unprepared for because they lack the time and expertise. Price warns that this can lead to significant errors in patient care, especially when clinicians face overwhelming workloads.
Highlighting design flaws in medical AI, Price cites a study in which an algorithm designed to detect pneumonia from X-rays excelled at the hospital where it was developed but failed in other settings. The discrepancy arose because the algorithm had learned features specific to the original hospital’s physical environment rather than signs of pneumonia itself. Such issues are compounded by biases in the data used to train these systems: Price notes that fewer than five percent of AI systems approved by the Food and Drug Administration (FDA) between 2012 and 2020 disclosed the racial demographics of their datasets, raising concerns that unrepresentative data could harm minority patients.
Price outlines a dual-layer approach to medical AI governance: central and local. At the central level, federal agencies and health organizations create nationwide frameworks to monitor and assess AI systems. The FDA requires many AI products to meet specific safety and efficacy standards before they reach the market. However, the agency’s guidance often requires clinicians to evaluate AI recommendations based on “patient-specific information,” thereby shifting much of the oversight responsibility onto individual health care providers.
Local governance, in which hospitals test AI systems in their own clinical environments, introduces further challenges. Many smaller or underfunded hospitals lack the necessary expertise and resources and instead rely on clinicians for oversight, an arrangement that may not be feasible under current conditions. This shift in responsibility leaves clinicians as the last line of defense against algorithmic errors.
Research shows that clinicians often struggle to identify flawed or biased models, even when explanatory tools are available. Price attributes this difficulty to gaps in knowledge of AI principles, as many clinicians are not trained to evaluate the complexities of these technologies. He acknowledges that educational reforms in medical schools may improve familiarity with AI over time, but the tendency to trust AI recommendations, known as automation bias, will likely persist, further complicating oversight in high-pressure scenarios.
Workload constraints exacerbate these issues, particularly in less-funded health care settings, where the fast-paced nature of modern medicine leaves little room for thorough oversight. To address these challenges, Price offers a short-term solution: if clinicians are to remain involved in AI oversight, regulatory bodies and hospitals must clearly define clinicians’ oversight roles so that these duties do not compromise their other responsibilities. He also advocates for institutional support in the form of onboarding, training, and ongoing monitoring to promote the safe and effective use of AI.
Looking Ahead
Price also envisions a long-term solution in which medical AI functions independently, reducing the need for constant clinician oversight. He urges regulators and medical organizations to assess AI systems as standalone tools that can perform effectively even in low-expertise environments, thereby democratizing access to medical expertise and expanding the availability of care.
To facilitate this transition, Price recommends that the FDA and organizations such as the Coalition for Health AI adopt evaluation methods that assess AI performance under minimal clinician oversight. He suggests that developers should evaluate how their systems perform in real-world conditions of use as part of the approval process. Once AI products are deployed, periodic “spot-checking” or random audits could confirm that the systems operate as intended.
Ultimately, Price warns that for medical AI to genuinely improve health outcomes, oversight frameworks must recognize and account for the limitations of clinical practice rather than assuming flawless performance by health care providers.