AI Chatbot News Experiment Reveals 18% Fabricated Sources and 47% Accuracy Rate

Generative AI tools, including Google’s Gemini, produced 18% fabricated sources and only 47% accuracy in summarizing Québec news, raising serious reliability concerns.

A recent experiment involving generative AI systems has raised questions about the reliability of AI-generated news content. A journalism professor specializing in computer science reported that AI tools, including Google’s Gemini, produced numerous inaccuracies and even fabricated sources while attempting to summarize current events in Québec. This analysis, which spanned a month, revealed that 18% of the AI-generated news items relied on non-existent sources, such as a fictional outlet named fake-example.ca.

In a bid to explore how well these tools could convey important news, the professor queried seven different generative AI systems daily, seeking the five most significant news events in Québec. The tools included both paid options, like ChatGPT and Claude, and free versions such as DeepSeek and Grok. Each response required a summary in three sentences, a title, and a source link, with the expectation that the AI would draw primarily from credible news sources.

However, the results were alarming. Although most responses cited news outlets, many included URLs that led to 404 errors or merely pointed to the homepage of the cited media. Only 37% of the responses provided complete and legitimate URLs, making it challenging to verify the accuracy of the information presented. Overall, the summaries were accurate only 47% of the time, with some instances of outright plagiarism.
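The URL check described above can be sketched as a small classifier that sorts each AI-cited link into the categories the experiment used: a complete article URL, a homepage-only link, or no usable link at all. This is an illustrative sketch, not the professor's actual method; the path-depth heuristic (treating a bare domain or root path as "homepage only") is an assumption, and the live fetch that would detect 404 errors is omitted.

```python
from urllib.parse import urlparse

def classify_citation(url):
    """Roughly sort an AI-cited source link into the categories used in
    the experiment: a complete article URL, a homepage-only link, or no
    usable link at all. The path-depth rule is an illustrative assumption."""
    if not url:
        return "missing"
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        return "missing"
    # A bare domain or root path points at the outlet, not the story.
    if parsed.path in ("", "/"):
        return "homepage_only"
    return "complete"

# A real verification pass would also fetch each URL and count 404
# responses; that network step is deliberately left out here.
```

Run against a day's worth of responses, a tally of the three categories would reproduce figures like the 37% "complete and legitimate URLs" rate reported in the experiment.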

The inaccuracies noted in the AI-generated content were particularly concerning. For instance, Grok, an AI tool from Elon Musk’s social network X, reported a fabricated narrative about asylum seekers being mistreated in Chibougamau. This mischaracterization was based on a legitimate article from La Presse that described a successful relocation of asylum seekers, most of whom received job offers. Such significant misinterpretations exemplified the potential dangers of relying on AI for news.

Other notable inaccuracies included claims about the circumstances of a toddler found alive after a four-day search, which Grok incorrectly attributed to the mother abandoning her child for a vacation. Moreover, Aria mistakenly reported that French cyclist Julian Alaphilippe had won a race in Montréal, when in fact, he won a different race in Québec City. These errors illustrate a broader trend where AI tools generate content that lacks foundational accuracy.

Further compounding the issue were grammatical errors in the French-language responses, which the professor speculated might have been less frequent had the queries been posed in English. Of the verified responses, roughly 45% were classified as only partially accurate: they contained misinterpretations without being wholly unreliable.

The conclusions the AI systems drew also raised eyebrows. In several instances, the tools made unsupported claims or invented debates around the stories they reported. ChatGPT, for example, concluded that an accident near Québec City “has reignited the debate on road safety in rural areas,” although no such discussion appeared in the reference article. This tendency to fabricate context or conclusions poses a significant misinformation risk.

The findings echoed a subsequent report by 22 public service media organizations, which noted that nearly half of all AI responses contained significant issues and a third showed serious sourcing problems. As the use of generative AI tools in news reporting grows, experts urge caution. The expectation for accuracy and reliability remains paramount, yet the current capabilities of these AI systems fall short of delivering factual information consistently.

As the landscape of news consumption evolves, the implications of these findings are profound. Stakeholders in journalism and technology must confront the challenges posed by generative AI tools to ensure that the integrity of information is maintained, safeguarding the public’s access to reliable news sources.

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.