
AI Technology

AI Chatbots Direct 300,000 Users to Kremlin Propaganda Sites, Research Reveals

AI chatbots like ChatGPT and Perplexity have directed 300,000 users to Kremlin propaganda sites, raising urgent concerns over misinformation control.

New research highlights a troubling trend in the use of AI chatbots, revealing that platforms such as ChatGPT, Perplexity, Claude, and Mistral are inadvertently directing significant user traffic to Russian state-aligned propaganda websites. The finding points to a growing challenge in moderating AI-generated content and enforcing information controls, particularly since many of these outlets are banned or restricted under European sanctions.

According to an analysis of SimilarWeb referral data for the fourth quarter of 2025, at least 300,000 visits to eight Kremlin-linked news platforms can be traced back to these AI assistants. Prominent among the referred domains are RT, Sputnik, RIA Novosti, and Lenta.ru, all of which are blacklisted in the European Union for disseminating disinformation and supporting Russia’s military efforts.

During the analyzed period, ChatGPT alone referred 88,300 visits to RT, while Perplexity contributed another 10,100. Additionally, RIA Novosti received over 70,000 visits through AI platforms, and Lenta.ru logged more than 60,000. While these figures may seem minor compared to total traffic, they signal a growing reliance on AI systems as referral channels.

Smaller, region-specific pro-Kremlin outlets appear even more dependent on AI referrals. For instance, the banned site sputnikglobe.com registered 176,000 total visits, with a significant portion stemming from AI sources. For some of these domains, AI chatbots accounted for up to 10% of referral traffic, according to the findings.
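To put the cited figures side by side, the minimal Python sketch below tallies the per-outlet AI referrals named in this article against the roughly 300,000 aggregate, and applies the reported 10% ceiling to sputnikglobe.com's total visits as a rough, purely illustrative upper bound (the research gives the 10% figure for referral traffic, not total traffic).

    # Tally of AI-referred visits cited in this article (Q4 2025, SimilarWeb referral data)
    cited_referrals = {
        "RT (via ChatGPT)": 88_300,
        "RT (via Perplexity)": 10_100,
        "RIA Novosti": 70_000,   # reported as "over 70,000"
        "Lenta.ru": 60_000,      # reported as "more than 60,000"
    }
    subtotal = sum(cited_referrals.values())
    print(f"AI referrals cited for individual outlets: {subtotal:,}")
    print("Aggregate across eight outlets per the research: at least 300,000")

    # Illustrative upper bound for sputnikglobe.com: applying the reported
    # "up to 10% of referrals" ceiling to its 176,000 total visits.
    total_visits = 176_000
    print(f"sputnikglobe.com upper bound: ~{int(total_visits * 0.10):,} AI-referred visits")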

Alarmingly, a considerable share of this traffic came from the European Union and the United States, despite access restrictions in those regions. This raises concerns that conversational AI systems may inadvertently present sanctioned sources as credible, thereby circumventing existing content restrictions and normalizing user engagement with these restricted outlets.

Unlike traditional search engines or social media platforms, AI chatbots embed links directly within their responses without visible labels or reliability warnings. This dynamic could fundamentally alter how users encounter state-aligned narratives through AI interfaces, as reported by Insight News Media.

The findings emphasize the urgent need for enhanced oversight of AI systems. Experts are calling for routine audits of AI outputs, stronger transparency requirements, and coordinated lists of restricted websites to prevent their inclusion in AI-generated responses. This is particularly crucial in contexts where state-linked misinformation poses significant security risks.
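To illustrate what a coordinated restricted-site list could mean in practice, the sketch below checks URLs cited in a chatbot response against a sanctions-style domain blocklist. The domain set, function name, and example response are hypothetical, and this is only a minimal sketch of the idea; production systems would need maintained regulatory lists, handling of mirrors and redirects, and integration with the provider's own moderation pipeline.

    import re
    from urllib.parse import urlparse

    # Hypothetical blocklist; in practice this would be a maintained,
    # regulator-coordinated list of sanctioned or restricted domains.
    RESTRICTED_DOMAINS = {"rt.com", "sputnikglobe.com", "ria.ru", "lenta.ru"}

    URL_PATTERN = re.compile(r"https?://[^\s)>\]]+")

    def flag_restricted_sources(response_text: str) -> list[str]:
        """Return any cited URLs whose host is on the restricted list."""
        flagged = []
        for url in URL_PATTERN.findall(response_text):
            host = urlparse(url).netloc.lower().split(":")[0]
            # Match the domain itself and its subdomains (e.g. www.rt.com)
            if any(host == d or host.endswith("." + d) for d in RESTRICTED_DOMAINS):
                flagged.append(url)
        return flagged

    # Hypothetical chatbot answer citing one restricted and one unrestricted source
    answer = "See https://www.rt.com/news/example and https://example.org/report for details."
    print(flag_restricted_sources(answer))   # ['https://www.rt.com/news/example']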

As the landscape of AI technology continues to evolve, the implications of these findings extend beyond user engagement with content. They highlight a critical juncture in the responsible development and deployment of AI tools, underscoring the necessity for regulatory frameworks that ensure the integrity of information and protect against the spread of harmful narratives.


