
ChatGPT 5.2 Cites Elon Musk’s Grokipedia, Raising Misinformation Concerns

OpenAI’s GPT-5.2 cites Elon Musk’s Grokipedia multiple times, raising alarms about misinformation as it references unverified claims across diverse topics.

The latest iteration of OpenAI’s ChatGPT, known as GPT-5.2, has begun referencing Elon Musk’s Grokipedia in responses across various topics, sparking concerns over potential misinformation. Tests conducted by The Guardian revealed that the model cited Grokipedia nine times while addressing over a dozen distinct queries, including issues related to Iranian organizations and the biography of British historian Sir Richard Evans, who has been an outspoken critic of Holocaust denial.

Launched in October, Grokipedia is an AI-generated encyclopedia aiming to rival Wikipedia, though it has faced criticism for promoting right-wing narratives on contentious subjects, such as gay marriage and the January 6 insurrection in the U.S. Unlike Wikipedia, Grokipedia lacks direct human editing; instead, it relies on AI to generate content and accommodate change requests.

Interestingly, when specifically asked to repeat disinformation related to the January 6 insurrection or media bias against former President Donald Trump, ChatGPT did not reference Grokipedia. However, the encyclopedia’s content surfaced in responses to more obscure inquiries. For example, citing Grokipedia, ChatGPT made more assertive claims about the Iranian government’s ties to MTN-Irancell, asserting connections to Iran’s supreme leader that Wikipedia’s coverage does not support.

Moreover, the model echoed Grokipedia’s debunked assertions about Sir Richard Evans’ contributions as an expert witness in David Irving’s libel trial. Notably, GPT-5.2 is not unique in this regard; reports indicate that Anthropic’s Claude has also referred to Grokipedia for diverse topics, including petroleum production and Scottish ales.

In response to the growing concerns, a spokesperson for OpenAI noted that the model’s web search aims to incorporate a broad spectrum of publicly available sources and viewpoints. “We apply safety filters to reduce the risk of surfacing links associated with high-severity harms,” the spokesperson stated, emphasizing that ChatGPT indicates sources that informed its responses. OpenAI also mentioned ongoing efforts to screen out low-credibility information and influence campaigns.

Despite these assurances, the infiltration of Grokipedia’s material into LLM responses alarms disinformation researchers. Security experts had previously warned that malicious actors, including Russian propaganda networks, were generating vast quantities of misinformation to “groom” AI models, a process that can compromise the integrity of AI-generated responses. In June, members of the U.S. Congress raised concerns after Google’s Gemini allegedly echoed the Chinese government’s positions on human rights abuses in Xinjiang and on its COVID-19 policies.

Nina Jankowicz, a disinformation researcher focused on LLM grooming, expressed concern about ChatGPT’s use of Grokipedia as a citation. Even if Musk did not intend to influence LLMs, Jankowicz pointed out that Grokipedia entries often rely on sources that are “untrustworthy at best, poorly sourced and deliberate disinformation at worst.” She cautioned that AI models citing Grokipedia could inadvertently enhance its perceived credibility among users.

Jankowicz noted that users may mistakenly believe that a citation from a trusted AI model implies vetting of the information, potentially leading them to rely on sources like Grokipedia for news. The challenge of removing erroneous information once it has been incorporated into an AI chatbot remains significant. Jankowicz recounted an experience where a news outlet incorrectly quoted her in an article on disinformation. Although the outlet removed the quote after her request, AI models continued to reference it as genuine, highlighting the difficulty of correcting misinformation.

Addressing the situation, a spokesperson for xAI, which owns Grokipedia, stated, “Legacy media lies.” This response underscores the contentious environment surrounding the dissemination of information in the digital age, where the boundaries of truth and credibility are increasingly blurred.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.