Members of the public, healthcare professionals, and industry stakeholders are invited to contribute their perspectives on how artificial intelligence (AI) in healthcare should be regulated, following the “Call for Evidence” launched by the Medicines and Healthcare products Regulatory Agency (MHRA) on December 18, 2025. This initiative aims to gather insights to aid the newly established National Commission into the Regulation of AI in Healthcare, which comprises global AI experts, clinicians, regulators, and patient advocates. The Commission will provide recommendations to the MHRA regarding the future of health AI governance.
The MHRA emphasizes that the call for evidence gives a wide range of voices, including patients, healthcare practitioners, and innovators, the opportunity to help shape new standards and safeguards for AI use in medical settings. The insights collected will assist the agency in regulating emerging AI technologies within the National Health Service (NHS) and across broader healthcare contexts, ensuring these technologies foster innovation while meeting the needs of patients and their families.
The input sought is broad, inviting contributions from all individuals, regardless of their familiarity with AI applications in healthcare. Key discussion points include evaluating whether existing regulations adequately address the fast-paced evolution of AI technologies, ensuring patient safety as these systems advance, and clarifying the distribution of responsibilities among regulators, companies, and healthcare organizations.
Lawrence Tallon, Chief Executive of the MHRA, who has championed the formation of the Commission, stated, “AI is already revolutionising our lives, both its possibilities and its capabilities are ever-expanding, and as we continue into this new world, we must ensure that its use in healthcare is safe, risk-proportionate and engenders public trust and confidence.” He noted that the Commission brings together a diverse group of stakeholders, including patient groups, clinicians, and representatives from across government, and called for public participation in shaping a safe, advanced, AI-enabled healthcare system.
Professor Alastair Denniston, head of the UK’s Centre of Excellence in Regulatory Science in AI and Digital Health (CERSI-AI) and chair of the Commission, highlighted the potential benefits of AI health technologies, stating, “We are starting to see how AI health technologies could benefit patients, the wider NHS and the country as a whole.” He further emphasized the need to rethink regulatory safeguards, pointing out that effective regulation must consider not just the technology itself but its practical application within the complexities of the NHS.
The role of patient perspectives is underscored by Professor Henrietta Hughes, Patient Safety Commissioner for England and deputy chair of the Commission. She expressed that “Patients bear the direct consequences of AI healthcare decisions, from diagnostic accuracy to privacy and treatment access.” Hughes urged the public to share their experiences and concerns, asserting that their input is crucial in identifying risks and opportunities that may not be apparent to technologists and clinicians. “Your views matter and each of us has the opportunity to shape the role AI will play in our lifetime, and for the generations to come,” she added.
The call for evidence will remain open from December 18, 2025, to February 2, 2026, allowing a wide range of participants—including the public, patients, medical professionals, technology companies, and healthcare providers—to submit their insights. The findings will inform the Commission’s work and its recommendations to the MHRA for 2026, aiming to ensure that AI technologies are safe, effective, and supportive of innovations that benefit patients and the NHS.
This initiative comes at a time when public sentiment is increasingly favorable toward AI in healthcare. A survey conducted by the Health Foundation in 2024 found that over half of the UK public and three-quarters of NHS staff support the application of AI in patient care. However, concerns about regulatory oversight persist, particularly among general practitioners (GPs), a significant proportion of whom worry about the reliability of AI outputs.
With the UK’s AI market projected to reach £1 trillion by 2035, and health and social care expected to see the most substantial job growth, the regulatory landscape is more critical than ever. The MHRA, an executive agency of the Department of Health and Social Care, is responsible for ensuring that all medicines and medical devices in the UK are effective and acceptably safe, on the basis of robust, fact-based evaluation.
As this call for evidence unfolds, the active involvement of diverse stakeholders will be vital in shaping a regulatory framework that not only safeguards public health but also encourages technological advancement in a sector poised for significant transformation.