AI Technology

AI Chatbots Direct 300,000 Users to Kremlin Propaganda Sites, Research Reveals

AI chatbots like ChatGPT and Perplexity have directed 300,000 users to Kremlin propaganda sites, raising urgent concerns over misinformation control.

New research highlights a troubling trend in the use of AI chatbots, revealing that platforms like ChatGPT, Perplexity, Claude, and Mistral are inadvertently directing significant user traffic to Russian state-aligned propaganda websites. The finding underscores a growing challenge in moderating AI-generated content and enforcing information controls, particularly since these outlets are banned or restricted under European sanctions.

According to an analysis of SimilarWeb referral data for the fourth quarter of 2025, at least 300,000 visits to eight Kremlin-linked news platforms can be traced back to these AI assistants. Prominent among the referred domains are RT, Sputnik, RIA Novosti, and Lenta.ru, all of which are blacklisted in the European Union for disseminating disinformation and supporting Russia’s military efforts.

During the analyzed period, ChatGPT alone was responsible for sending 88,300 users to RT, while Perplexity contributed another 10,100 visits. Additionally, RIA Novosti received over 70,000 visits through AI platforms, and Lenta.ru logged more than 60,000. While these figures may seem minor compared to total traffic, they signal a growing reliance on AI systems as referral channels.

Smaller, region-specific pro-Kremlin outlets appear even more dependent on AI referrals. For instance, the banned site sputnikglobe.com registered 176,000 total visits, with a significant portion stemming from AI sources. For some domains, AI chatbots accounted for up to 10% of referral traffic, according to the findings.

Alarmingly, a considerable share of this traffic came from the European Union and the United States, despite access restrictions in those regions. This raises concerns that conversational AI systems may inadvertently present sanctioned sources as credible, thereby circumventing existing content restrictions and normalizing user engagement with these restricted outlets.

Unlike traditional search engines or social media platforms, AI chatbots embed links directly within their responses without visible labels or reliability warnings. This dynamic could fundamentally alter how users encounter state-aligned narratives through AI interfaces, as reported by Insight News Media.

The findings emphasize the urgent need for enhanced oversight of AI systems. Experts are calling for routine audits of AI outputs, stronger transparency requirements, and coordinated lists of restricted websites to prevent their inclusion in AI-generated responses. This is particularly crucial in contexts where state-linked misinformation poses significant security risks.

As the landscape of AI technology continues to evolve, the implications of these findings extend beyond user engagement with content. They highlight a critical juncture in the responsible development and deployment of AI tools, underscoring the necessity for regulatory frameworks that ensure the integrity of information and protect against the spread of harmful narratives.

Written by AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved.