
ChatGPT 5.2 Cites Elon Musk’s Grokipedia, Raising Misinformation Concerns

OpenAI’s GPT-5.2 cites Elon Musk’s Grokipedia multiple times, raising alarms about misinformation as it references unverified claims across diverse topics.

The latest iteration of OpenAI’s ChatGPT, known as GPT-5.2, has begun referencing Elon Musk’s Grokipedia in responses across various topics, sparking concerns over potential misinformation. Tests conducted by The Guardian revealed that the model cited Grokipedia nine times while addressing over a dozen distinct queries, including issues related to Iranian organizations and the biography of British historian Sir Richard Evans, who has been an outspoken critic of Holocaust denial.

Launched in October, Grokipedia is an AI-generated encyclopedia aiming to rival Wikipedia, though it has faced criticism for promoting right-wing narratives on contentious subjects, such as gay marriage and the January 6 insurrection in the U.S. Unlike Wikipedia, Grokipedia lacks direct human editing; instead, it relies on AI to generate content and accommodate change requests.

Notably, when specifically asked to repeat disinformation about the January 6 insurrection or about media bias against former President Donald Trump, ChatGPT did not reference Grokipedia. The encyclopedia's content surfaced instead in responses to more obscure inquiries. For example, citing Grokipedia, ChatGPT made more assertive claims about the Iranian government's connections to MTN-Irancell, asserting ties to Iran's supreme leader that do not appear in Wikipedia's coverage.

The model also echoed Grokipedia's debunked assertions about Sir Richard Evans' role as an expert witness in David Irving's libel trial. GPT-5.2 is not unique in this regard: reports indicate that Anthropic's Claude has also drawn on Grokipedia for diverse topics, including petroleum production and Scottish ales.

In response to the growing concerns, a spokesperson for OpenAI noted that the model’s web search aims to incorporate a broad spectrum of publicly available sources and viewpoints. “We apply safety filters to reduce the risk of surfacing links associated with high-severity harms,” the spokesperson stated, emphasizing that ChatGPT indicates sources that informed its responses. OpenAI also mentioned ongoing efforts to screen out low-credibility information and influence campaigns.

Despite these assurances, the infiltration of Grokipedia’s material into LLM responses raises alarms for disinformation researchers. Security experts had previously warned that malicious actors, including Russian propaganda networks, were creating vast quantities of misinformation to “groom” AI models, a process that can compromise the integrity of AI-generated responses. In June, U.S. Congressional concerns were raised when Google’s Gemini allegedly echoed the Chinese government’s views on human rights abuses in Xinjiang and its COVID-19 policies.

Nina Jankowicz, a disinformation researcher focused on LLM grooming, expressed apprehensions regarding ChatGPT’s use of Grokipedia as a citation. Even if Musk did not intend to influence LLMs, Jankowicz pointed out that Grokipedia entries often rely on sources that are “untrustworthy at best, poorly sourced and deliberate disinformation at worst.” She cautioned that AI models citing Grokipedia could inadvertently enhance its perceived credibility among users.

Jankowicz noted that users may mistakenly believe that a citation from a trusted AI model implies vetting of the information, potentially leading them to rely on sources like Grokipedia for news. The challenge of removing erroneous information once it has been incorporated into an AI chatbot remains significant. Jankowicz recounted an experience where a news outlet incorrectly quoted her in an article on disinformation. Although the outlet removed the quote after her request, AI models continued to reference it as genuine, highlighting the difficulty of correcting misinformation.

Addressing the situation, a spokesperson for xAI, which owns Grokipedia, stated, “Legacy media lies.” This response underscores the contentious environment surrounding the dissemination of information in the digital age, where the boundaries of truth and credibility are increasingly blurred.

Written By AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.