AI Technology

AI Chatbots Direct 300,000 Users to Kremlin Propaganda Sites, Research Reveals

AI chatbots like ChatGPT and Perplexity have directed 300,000 users to Kremlin propaganda sites, raising urgent concerns over misinformation control.

New research reveals a troubling trend: AI chatbots including ChatGPT, Perplexity, Claude, and Mistral are inadvertently directing significant user traffic to Russian state-aligned propaganda websites. The finding poses a growing challenge for moderating AI-generated content and enforcing information controls, particularly since many of these outlets are banned or restricted under European sanctions.

According to an analysis of SimilarWeb referral data for the fourth quarter of 2025, at least 300,000 visits to eight Kremlin-linked news platforms can be traced back to these AI assistants. Prominent among the referred domains are RT, Sputnik, RIA Novosti, and Lenta.ru, all of which are blacklisted in the European Union for disseminating disinformation and supporting Russia’s military efforts.

During the analyzed period, ChatGPT alone was responsible for sending 88,300 users to RT, while Perplexity contributed another 10,100 visits. Additionally, RIA Novosti received over 70,000 visits through AI platforms, and Lenta.ru logged more than 60,000. While these figures may seem minor compared to total traffic, they signal a growing reliance on AI systems as referral channels.

Smaller, region-specific pro-Kremlin outlets appear even more dependent on AI referrals. For instance, the banned site sputnikglobe.com registered 176,000 total visits, with a significant portion stemming from AI sources. In specific domains, up to 10% of referrals originated from AI chatbots, according to the findings.

Alarmingly, a considerable share of this traffic came from the European Union and the United States, despite access restrictions in those regions. This raises concerns that conversational AI systems may inadvertently present sanctioned sources as credible, thereby circumventing existing content restrictions and normalizing user engagement with these restricted outlets.

Unlike traditional search engines or social media platforms, AI chatbots embed links directly within their responses without visible labels or reliability warnings. This dynamic could fundamentally alter how users encounter state-aligned narratives through AI interfaces, as reported by Insight News Media.

The findings emphasize the urgent need for enhanced oversight of AI systems. Experts are calling for routine audits of AI outputs, stronger transparency requirements, and coordinated lists of restricted websites to prevent their inclusion in AI-generated responses. This is particularly crucial in contexts where state-linked misinformation poses significant security risks.

As the landscape of AI technology continues to evolve, the implications of these findings extend beyond user engagement with content. They highlight a critical juncture in the responsible development and deployment of AI tools, underscoring the necessity for regulatory frameworks that ensure the integrity of information and protect against the spread of harmful narratives.

Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.