
AI Research

MIT Study of 300 Participants Shows People Trust AI Over Doctors for Medical Advice

An MIT study of 300 participants finds that people rate AI-generated medical advice as more trustworthy than advice from human doctors, raising concerns about the accuracy and safety of such guidance.

A recent study by researchers at the Massachusetts Institute of Technology (MIT) has revealed that individuals are more inclined to trust medical advice provided by artificial intelligence (AI) than that offered by human doctors. This research, published in the *New England Journal of Medicine*, involved 300 participants who assessed medical responses generated by either a physician or an AI model, such as ChatGPT.

Participants, comprising both experts and non-experts in the medical field, rated the AI-generated responses as more accurate, valid, trustworthy, and complete. Notably, neither group demonstrated a reliable ability to differentiate between the AI-generated content and responses from human doctors. This raises concerns that participants may favor AI outputs, even when such information could be inaccurate.

The study also highlighted a troubling tendency among participants to accept low-accuracy AI-generated advice as valid and trustworthy. This inclination made participants significantly more likely to follow potentially harmful medical recommendations, which could lead to unnecessary medical interventions. Such findings underscore the risks associated with misinformation from AI systems in healthcare, which could adversely impact patients’ health and well-being.

Documented cases of AI providing harmful medical advice further illustrate these dangers. In one instance, a 35-year-old Moroccan man required emergency medical attention after a chatbot instructed him to wrap rubber bands around his hemorrhoid. In another case, a 60-year-old man suffered poisoning after ChatGPT suggested ingesting sodium bromide as a means to lower his salt intake. These episodes serve as stark reminders of the potential hazards inherent in relying on AI for medical guidance.

Dr. Darren Lebl, research service chief of spine surgery at the Hospital for Special Surgery in New York, has raised concerns regarding the quality of AI-generated medical recommendations. He noted that many suggestions from such systems lack credible scientific backing. “About a quarter of them were made up,” he stated, emphasizing the inaccuracies and risks associated with trusting AI for healthcare advice.

This research adds to the growing body of evidence suggesting that while AI technology has the potential to revolutionize various industries, its application in sensitive fields like healthcare must be approached with caution. The propensity of individuals to trust AI-generated medical advice raises questions about the reliability and accountability of these systems, particularly in high-stakes situations involving health and safety.

As AI continues to evolve and integrate into everyday life, the implications for healthcare are profound. The reliance on AI for medical advice may not only affect decision-making processes among patients but also complicate the traditional roles of healthcare professionals. Moving forward, it will be essential to establish guidelines and frameworks that ensure the responsible use of AI in medicine, safeguarding against misinformation while harnessing its potential benefits.
