New research highlights a troubling trend in the use of AI chatbots, revealing that platforms like ChatGPT, Perplexity, Claude, and Mistral are inadvertently directing significant user traffic to Russian state-aligned propaganda websites. The finding poses a growing challenge for moderating AI-generated content and enforcing information controls, particularly because many of these outlets are banned or restricted under European sanctions.
According to an analysis of SimilarWeb referral data for the fourth quarter of 2025, at least 300,000 visits to eight Kremlin-linked news platforms can be traced back to these AI assistants. Prominent among the referred domains are RT, Sputnik, RIA Novosti, and Lenta.ru, all of which are blacklisted in the European Union for disseminating disinformation and supporting Russia’s military efforts.
During the analyzed period, ChatGPT alone sent 88,300 visitors to RT, while Perplexity contributed another 10,100. RIA Novosti received over 70,000 visits through AI platforms, and Lenta.ru logged more than 60,000. While these figures may seem minor compared to the sites' total traffic, they signal a growing role for AI systems as referral channels.
Smaller, region-specific pro-Kremlin outlets appear even more dependent on AI referrals. For instance, the banned site sputnikglobe.com registered 176,000 total visits, a significant portion of which stemmed from AI sources. For some domains, AI chatbots accounted for up to 10% of referral traffic, according to the findings.
Alarmingly, a considerable share of this traffic came from the European Union and the United States, despite access restrictions in those regions. This raises concerns that conversational AI systems may inadvertently present sanctioned sources as credible, thereby circumventing existing content restrictions and normalizing user engagement with these restricted outlets.
Unlike traditional search engines or social media platforms, AI chatbots embed links directly within their responses without visible labels or reliability warnings. This dynamic could fundamentally alter how users encounter state-aligned narratives through AI interfaces, as reported by Insight News Media.
The findings emphasize the urgent need for enhanced oversight of AI systems. Experts are calling for routine audits of AI outputs, stronger transparency requirements, and coordinated lists of restricted websites to prevent their inclusion in AI-generated responses. This is particularly crucial in contexts where state-linked misinformation poses significant security risks.
As the landscape of AI technology continues to evolve, the implications of these findings extend beyond user engagement with content. They highlight a critical juncture in the responsible development and deployment of AI tools, underscoring the necessity for regulatory frameworks that ensure the integrity of information and protect against the spread of harmful narratives.