
MIT Study of 300 Participants Finds People Trust AI Over Doctors for Medical Advice

An MIT study of 300 participants finds that people rate AI-generated medical advice as more trustworthy than advice from human doctors, raising concerns about the accuracy and safety of such guidance.

A recent study by researchers at the Massachusetts Institute of Technology (MIT) has revealed that individuals are more inclined to trust medical advice provided by artificial intelligence (AI) than that offered by human doctors. This research, published in the *New England Journal of Medicine*, involved 300 participants who assessed medical responses generated by either a physician or an AI model, such as ChatGPT.

Participants, including both medical experts and non-experts, rated the AI-generated responses as more accurate, valid, trustworthy, and complete than those written by physicians. Notably, neither group could reliably distinguish AI-generated content from physicians' responses, raising the concern that people may favor AI outputs even when the information is inaccurate.

The study also highlighted a troubling tendency among participants to accept low-accuracy AI-generated advice as valid and trustworthy. That inclination made them significantly more likely to follow potentially harmful medical recommendations and to pursue unnecessary medical interventions. These findings underscore the risks that misinformation from AI systems poses in healthcare, where it can directly harm patients' health and well-being.

Documented cases of AI providing harmful medical advice further illustrate these dangers. In one instance, a 35-year-old Moroccan man required emergency medical attention after a chatbot instructed him to wrap rubber bands around his hemorrhoid. In another case, a 60-year-old man suffered poisoning after ChatGPT suggested ingesting sodium bromide as a means to lower his salt intake. These episodes serve as stark reminders of the potential hazards inherent in relying on AI for medical guidance.

Dr. Darren Lebl, research service chief of spine surgery at the Hospital for Special Surgery in New York, has raised concerns regarding the quality of AI-generated medical recommendations. He noted that many suggestions from such systems lack credible scientific backing. “About a quarter of them were made up,” he stated, emphasizing the inaccuracies and risks associated with trusting AI for healthcare advice.

This research adds to the growing body of evidence suggesting that while AI technology has the potential to revolutionize various industries, its application in sensitive fields like healthcare must be approached with caution. The propensity of individuals to trust AI-generated medical advice raises questions about the reliability and accountability of these systems, particularly in high-stakes situations involving health and safety.

As AI continues to evolve and integrate into everyday life, the implications for healthcare are profound. The reliance on AI for medical advice may not only affect decision-making processes among patients but also complicate the traditional roles of healthcare professionals. Moving forward, it will be essential to establish guidelines and frameworks that ensure the responsible use of AI in medicine, safeguarding against misinformation while harnessing its potential benefits.

Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.