
AI Chatbot News Experiment Reveals 18% Fabricated Sources and 47% Accuracy Rate

Generative AI tools, including Google’s Gemini, produced 18% fabricated sources and only 47% accuracy in summarizing Québec news, raising serious reliability concerns.

A recent experiment involving generative AI systems has raised questions about the reliability of AI-generated news content. A journalism professor specializing in computer science reported that AI tools, including Google’s Gemini, produced numerous inaccuracies and even fabricated sources while attempting to summarize current events in Québec. This analysis, which spanned a month, revealed that 18% of the AI-generated news items relied on non-existent sources, such as a fictional outlet named fake-example.ca.

In a bid to explore how well these tools could convey important news, the professor queried seven different generative AI systems daily, seeking the five most significant news events in Québec. The tools included both paid options, like ChatGPT and Claude, and free versions such as DeepSeek and Grok. Each response required a summary in three sentences, a title, and a source link, with the expectation that the AI would draw primarily from credible news sources.

However, the results were alarming. Although most responses cited news outlets, many included URLs that led to 404 errors or merely pointed to the homepage of the cited media. Only 37% of the responses provided complete and legitimate URLs, making it challenging to verify the accuracy of the information presented. Overall, the summaries were accurate only 47% of the time, with some instances of outright plagiarism.

The inaccuracies noted in the AI-generated content were particularly concerning. For instance, Grok, an AI tool from Elon Musk’s social network X, reported a fabricated narrative about asylum seekers being mistreated in Chibougamau. This mischaracterization was based on a legitimate article from La Presse that described a successful relocation of asylum seekers, most of whom received job offers. Such significant misinterpretations exemplified the potential dangers of relying on AI for news.

Other notable inaccuracies included claims about the circumstances of a toddler found alive after a four-day search, which Grok incorrectly attributed to the mother abandoning her child for a vacation. Moreover, Aria mistakenly reported that French cyclist Julian Alaphilippe had won a race in Montréal, when in fact, he won a different race in Québec City. These errors illustrate a broader trend where AI tools generate content that lacks foundational accuracy.

Further compounding the issue were grammatical errors in the French-language responses, which the professor speculated might have been fewer had the queries been posed in English. Of the verified responses, approximately 45% were classified as only partially accurate, typically because of misinterpretations that distorted the story without rendering the entire response unreliable.

The conclusions the AI systems drew also raised eyebrows. In several instances, the tools made unsupported claims or invented debates related to the stories they reported. For example, ChatGPT concluded that an accident near Québec City "has reignited the debate on road safety in rural areas," although no such discussion appeared in the reference article. This tendency to fabricate context or conclusions presents a significant risk of misinformation.

The findings echoed a subsequent report by 22 public service media organizations, which noted that nearly half of all AI responses contained significant issues and a third showed serious sourcing problems. As the use of generative AI tools in news reporting grows, experts urge caution. The expectation for accuracy and reliability remains paramount, yet the current capabilities of these AI systems fall short of delivering factual information consistently.

As the landscape of news consumption evolves, the implications of these findings are profound. Stakeholders in journalism and technology must confront the challenges posed by generative AI tools to ensure that the integrity of information is maintained, safeguarding the public’s access to reliable news sources.

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.