As military tensions escalate between the United States, Israel, and Iran, a surge of fabricated videos, recycled footage, and artificial intelligence (AI)-generated images has flooded social media platforms. This alarming trend, highlighted by fact-checkers, signifies a troubling new chapter in modern conflict driven by AI-enhanced misinformation.
The conflict began on February 28, 2026, under the joint campaign codenamed Operation Roaring Lion by Israel and Operation Epic Fury by the United States. The strikes hit targets in several Iranian cities, including Tehran, Isfahan, Qom, Karaj, and Kermanshah, among them military facilities, officials, and critical infrastructure. The strikes and Iran’s retaliatory responses have ignited an information crisis online, as demand for real-time footage has outstripped the capacity of platforms and journalists to verify authenticity.
Fact-checkers have uncovered numerous manipulated clips that falsely depict explosions in Tel Aviv. One notable video, which claimed to show a recent attack, was actually footage of a 2015 chemical warehouse fire in Tianjin, China. Another clip purporting to show Iranian missiles striking Israel actually dated from an October 2024 incident. The proliferation of AI-generated imagery has compounded the problem, with fabricated scenes of destroyed infrastructure and fictitious protests circulating widely on social media in Persian, Urdu, Arabic, and Western languages.
One such post, falsely asserting that Iranian forces had struck the Burj Khalifa in Dubai, amassed over 2.2 million views before being debunked. Analysts observed telltale AI distortions in the imagery, including peculiar limb shapes that hinted at its artificial origin. AI-generated images purporting to show rescuers recovering the body of Iranian Supreme Leader Ali Khamenei also gained traction, even among prominent public figures.
Nikita Bier, head of product at X, revealed that investigators have traced a single operator in Pakistan managing 31 hacked accounts, all renamed to variations of “Iran War Monitor” just a day before the conflict escalated. This operator used these accounts to disseminate AI-generated war videos. In response to this rising tide of misinformation, X is ramping up detection efforts and curtailing financial incentives for those sharing fabricated content. Under a new policy, users who repeatedly post AI-generated conflict footage without proper labeling will face suspension from X’s revenue-sharing program, with chronic violators facing permanent bans from earning.
A BBC Verify journalist remarked that the current U.S.-Israel-Iran confrontation could produce the most AI-generated viral videos of any conflict to date. Analysts warn that as the prevalence of AI-generated disinformation escalates, public trust is increasingly undermined. Authentic footage is often dismissed as fabricated when it contradicts viewers’ pre-existing beliefs. This trend underscores a significant challenge for journalists, policymakers, and the public, as the digital information landscape becomes as contested as the physical battlefield itself.