
ChatGPT 5.2 Cites Elon Musk’s Grokipedia, Raising Misinformation Concerns

OpenAI’s GPT-5.2 cites Elon Musk’s Grokipedia multiple times, raising misinformation concerns as the model surfaces unverified claims across a range of topics.

The latest iteration of OpenAI’s ChatGPT, known as GPT-5.2, has begun referencing Elon Musk’s Grokipedia in responses across various topics, sparking concerns over potential misinformation. Tests conducted by The Guardian revealed that the model cited Grokipedia nine times while addressing over a dozen distinct queries, including issues related to Iranian organizations and the biography of British historian Sir Richard Evans, who has been an outspoken critic of Holocaust denial.

Launched in October, Grokipedia is an AI-generated encyclopedia that aims to rival Wikipedia, though it has been criticized for promoting right-wing narratives on contentious subjects such as gay marriage and the January 6 insurrection in the U.S. Unlike Wikipedia, Grokipedia has no direct human editing; its content is generated by AI, which also handles requests for changes.

Interestingly, when specifically asked to repeat disinformation related to the January 6 insurrection or media bias against former President Donald Trump, ChatGPT did not reference Grokipedia. The encyclopedia’s content surfaced instead in responses to more obscure inquiries. For example, citing Grokipedia, ChatGPT made more assertive claims about the Iranian government’s connections to MTN-Irancell, suggesting ties to Iran’s supreme leader that do not appear in the corresponding Wikipedia entry.

Moreover, the model echoed Grokipedia’s debunked assertions about Sir Richard Evans’s work as an expert witness in David Irving’s libel trial. GPT-5.2 is not unique in this regard: reports indicate that Anthropic’s Claude has also cited Grokipedia on a range of topics, including petroleum production and Scottish ales.

In response to the growing concerns, a spokesperson for OpenAI noted that the model’s web search aims to incorporate a broad spectrum of publicly available sources and viewpoints. “We apply safety filters to reduce the risk of surfacing links associated with high-severity harms,” the spokesperson stated, emphasizing that ChatGPT indicates sources that informed its responses. OpenAI also mentioned ongoing efforts to screen out low-credibility information and influence campaigns.

Despite these assurances, the infiltration of Grokipedia’s material into LLM responses alarms disinformation researchers. Security experts had previously warned that malicious actors, including Russian propaganda networks, were producing vast quantities of misinformation to “groom” AI models, a process that can compromise the integrity of AI-generated responses. In June, members of the U.S. Congress raised concerns after Google’s Gemini allegedly echoed the Chinese government’s position on human rights abuses in Xinjiang and on its COVID-19 policies.

Nina Jankowicz, a disinformation researcher who studies LLM grooming, expressed apprehension about ChatGPT’s use of Grokipedia as a citation. She pointed out that, even if Musk did not set out to influence LLMs, Grokipedia entries often rely on sources that are “untrustworthy at best, poorly sourced and deliberate disinformation at worst.” She cautioned that AI models citing Grokipedia could inadvertently enhance its perceived credibility among users.

Jankowicz noted that users may mistakenly believe that a citation from a trusted AI model implies vetting of the information, potentially leading them to rely on sources like Grokipedia for news. The challenge of removing erroneous information once it has been incorporated into an AI chatbot remains significant. Jankowicz recounted an experience where a news outlet incorrectly quoted her in an article on disinformation. Although the outlet removed the quote after her request, AI models continued to reference it as genuine, highlighting the difficulty of correcting misinformation.

Addressing the situation, a spokesperson for xAI, which owns Grokipedia, stated, “Legacy media lies.” This response underscores the contentious environment surrounding the dissemination of information in the digital age, where the boundaries of truth and credibility are increasingly blurred.


