Videos generated with artificial intelligence (AI) featuring young girls in revealing clothing have attracted millions of likes and shares on TikTok, despite the platform’s policies prohibiting such content, according to a report by the Spanish fact-checking organization Maldita.
Maldita identified more than 20 accounts that collectively published over 5,200 videos depicting young girls in bikinis, school uniforms, and tight clothing. These accounts have amassed more than 550,000 followers and nearly 6 million likes. The report raises concerns about the spread of harmful content: comments on the videos contained links to external sites, including Telegram communities known to sell child sexual abuse material. Maldita reported the 12 Telegram groups uncovered in its analysis to the Spanish police.
The accounts exploit TikTok’s subscription service to profit from selling AI-generated videos and images. Under TikTok’s agreement with creators, the platform retains approximately 50 percent of the profits generated through this model.
The report arrives amid a global debate over the safety of young users on social media platforms. Australia, Denmark, and a number of other countries, including European Union member states, are either implementing or considering restrictions on social media use for people under 16.
TikTok requires creators to disclose when AI has been used in their videos, and its community guidelines allow the removal of content deemed “harmful to individuals.” Maldita found, however, that most of the analyzed videos carried no watermark or other indication that they were AI-generated. Some did bear a “TikTok AI Alive” watermark, which the platform applies automatically to images converted into videos with its own tools.
Both Telegram and TikTok say they are committed to combating child sexual abuse material on their platforms. Telegram states that it scans all uploaded media against previously removed child sexual abuse material to prevent its redistribution. In a statement, Telegram argued that criminals’ reliance on private groups and external algorithms to grow is evidence that its moderation systems are effective.
In 2025, Telegram reported removing more than 909,000 groups and channels containing child sexual abuse material. TikTok says it automatically removes 99 percent of content harmful to minors and proactively removes 97 percent of violating AI-generated content. The platform also says it immediately suspends or bans accounts that disseminate sexually explicit content involving children and reports such incidents to the US National Center for Missing and Exploited Children (NCMEC).
In a recent statement to CNN, TikTok disclosed that, between April and June 2025, it removed more than 189 million videos and banned upwards of 108 million accounts for violating its policies.
Maldita’s findings point to an urgent need for stronger oversight and regulation of content on social media platforms, particularly where the protection of children from exploitation is concerned. As governments weigh stricter measures, the effectiveness of existing community guidelines and platform moderation practices will face increasing scrutiny.