The rise of artificial intelligence (AI) has brought both innovation and new challenges, especially in the realm of misinformation. One recent case is a video circulating online that is falsely attributed to a retired senior Indian army officer, Lt. Gen. Rajiv Ghai (Retd), in which he appears to criticize Prime Minister Narendra Modi and his ruling Bharatiya Janata Party (BJP). The video, first shared on October 23, 2025, purportedly shows Ghai making serious allegations about the political climate in India, particularly around “saffronisation,” a term for the politicization of institutions in line with Hindu nationalist ideology. Investigations, however, have revealed that the clip is an AI-generated fabrication.
In the 35-second clip, Ghai, dressed in full military uniform, appears to express deep concern over the increasing influence of saffron politics on the Indian Armed Forces. “As a senior officer who has devoted decades to the service of this uniform, I say this with deep concern. The growing influence of saffron politics is corroding the core values of the Indian Army,” the fabricated voice says. The implications of this video are significant, especially given the ongoing debate about the intersection of military integrity and political ideology in India.
Fact-checkers, including Indian outlet Factly, have debunked the video, noting that Ghai never made such remarks. A detailed examination of the video reveals several indicators of artificial creation, including mismatched lip movements and anomalies in the graphic elements present in the original broadcast. Furthermore, an analysis using Hive’s AI detection tool estimated a 99.5 percent likelihood that the video was AI-generated. These findings underline the growing sophistication of AI technologies used in misinformation campaigns.
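Detectors of this kind typically score a video frame by frame and then combine those scores into a single "likelihood AI-generated" figure like the 99.5 percent cited above. A minimal sketch of that aggregation step is below; the per-frame scores are hypothetical placeholders, and this does not model Hive's actual API.

```python
# Sketch: combining per-frame deepfake-classifier probabilities into one
# video-level "likely AI-generated" score. The scores here are invented
# placeholders, not output from any real detection service.

def aggregate_likelihood(frame_scores):
    """Return the mean of per-frame 'AI-generated' probabilities.

    A plain mean is used for illustration; production detectors generally
    apply more sophisticated temporal models across frames.
    """
    if not frame_scores:
        raise ValueError("no frame scores to aggregate")
    return sum(frame_scores) / len(frame_scores)

def is_likely_ai_generated(frame_scores, threshold=0.9):
    """Flag a clip as likely AI-generated if its mean score clears a threshold."""
    return aggregate_likelihood(frame_scores) >= threshold

# Hypothetical per-frame scores for a short clip.
scores = [0.99, 0.98, 1.00, 0.99, 0.97]
print(f"likelihood: {aggregate_likelihood(scores):.3f}")
print("verdict:", "likely AI-generated" if is_likely_ai_generated(scores) else "inconclusive")
```

The threshold value is an assumption for the sketch; real services publish their own confidence scales and decision guidance.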
Context of Saffronisation and Political Climate in India
The term “saffronisation” has gained traction since Narendra Modi’s BJP came to power in 2014, reflecting a broader concern about the erosion of India’s secular values. Critics argue that various government policies, such as the Citizenship Amendment Act of 2019, discriminate against Muslim populations, thereby intensifying sectarian divides. As public discourse increasingly merges with digital platforms, the risk of misinformation spreading rapidly is a significant concern for both the military and the general populace.
Public comments on social media platforms further demonstrate how easily misinformation can take root. Many users expressed beliefs that the fabricated video reflected genuine concerns about Modi’s leadership, with comments suggesting that the military should remain apolitical. Such reactions exemplify the potential for AI-generated content to influence public opinion and contribute to misinformation narratives.
Understanding AI’s Role in Misinformation
AI tools, like those used to create the misleading video, are becoming more accessible and sophisticated. This evolution raises pressing questions about the ethical use of AI and the responsibilities of platforms hosting such content. Recognizing AI-generated misinformation is critical to maintaining the integrity of public discourse. As AI technologies continue to develop, the implications for individuals and institutions become increasingly complex.
In light of this case, vigilance is necessary among consumers of digital media. Fact-checking organizations are stepping up their efforts to identify and counteract misinformation, but the technology facilitating such fabrications is evolving at an equally rapid pace. The incident serves as a stark reminder of the challenges posed by emerging technologies in the fight against misinformation.
As AI continues to advance, understanding its capabilities and limits will be essential for both consumers and creators of content. This case exemplifies the need for robust frameworks to deal with AI-generated misinformation, as the consequences can extend beyond simple misinformation to undermine the very pillars of democratic discourse.
For those interested in delving deeper into identifying AI-generated content, resources from organizations like AFP provide guides and tools to help navigate this complex landscape.