
AI-Driven Propaganda Rises as X Policies Target Undisclosed Synthetic War Videos

X revises creator policy to combat AI-generated misinformation in war videos, risking monetization and bans for creators who fail to disclose synthetic content.

Social media platform X has announced a revision to its creator policy aimed at combating the spread of AI-generated misinformation, specifically targeting war-related videos that are not clearly labeled as artificial. This move comes amid growing concerns among experts about the sophistication of AI-generated propaganda, which has shifted from broad broadcasts to more targeted psychological manipulation.

As the capabilities of artificial intelligence evolve, creators who fail to disclose the artificial nature of their content may face severe consequences, including the loss of monetization privileges and potential bans from X’s creator revenue sharing program. “Historically, propaganda was amplified through television debates, newspapers, or mass forwards on messaging platforms. AI has transformed propaganda from a loud broadcast into a personalised whisper,” noted Kartik Gupta, an instructor in AI and Machine Learning at the Newton School of Technology.

Modern AI systems can analyze an individual’s behavioral data, linguistic patterns, and social engagement signals, allowing them to generate tailored narratives that resonate with a person’s cultural background, religious identity, or political leanings. This capability raises significant concerns, particularly as a study conducted in late 2025 revealed that only a small fraction of participants could distinguish between real and AI-generated content. This difficulty underscores the potential for AI-driven propaganda to erode trust before manipulation is recognized.

Atul Rai, co-founder and CEO of Staqu Technologies, emphasizes the responsibility of social media platforms in curbing the distribution of AI-generated misinformation. “Social media platforms serve as the primary distribution infrastructure through which AI-generated misinformation spreads,” he stated. Given their technological capabilities, Rai argues that these platforms must deploy advanced AI systems to identify manipulated content, including deepfakes and synthetic media.

“Deepfakes and synthetic media gain traction because platform algorithms prioritize engagement, allowing manipulated content to reach large audiences,” Rai added. He stressed the need for stronger governance frameworks, including rapid escalation protocols during geopolitical crises and transparent labeling of AI-generated content, along with partnerships with fact-checking organizations.

Accountability does not rest solely with platforms, however. Kaushal Bheda, director at Pelorus Technology, contends that creators who produce disinformation or propaganda are directly responsible for any harm caused. Platform developers, in turn, must implement preventive measures and respond swiftly to law enforcement requests: when authorities identify an ongoing harmful campaign, delays in providing data or suspending accounts can exacerbate the damage. Immediate cooperation with investigations and proactive intelligence sharing are essential responsibilities for platforms operating at global scale.

Industry Response

Gupta further warned that society is entering an era where authenticity cannot be assumed. He advocates for systemic verification processes rather than relying on individual evaluations. Governments, educational institutions, and platforms must establish stronger early-warning systems and authentication protocols, especially during high-risk periods like elections or natural disasters. “There may be difficult debates ahead around temporary amplification controls during national emergencies. While controversial, such measures reflect a broader tension between open digital ecosystems and public safety,” he said.

Concerns over the rapid dissemination of misinformation extend to the mechanisms for handling complaints about harmful content. Garry Singh, president of IIRIS, pointed out that while many large platforms have implemented methods to identify AI-generated propaganda, the key issue remains the exploitation of this content by malicious actors. “The mechanism to address complaints and remove bad content is slow, causing concerns of spreading before the content can be taken down,” Singh explained. He added that the spectrum of risks from false emergencies is vast, ranging from threats to life and safety to financial loss, resource depletion, and the propagation of biased opinions.

As AI technology continues to advance, the interplay between digital platforms and misinformation will likely shape public discourse and societal trust. The revisions to X’s creator policy may serve as a crucial step in addressing these challenges, but the effectiveness of such measures will depend on the collective responsibility of creators, developers, and platforms to mitigate the risks associated with AI-driven content.

Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.