A few days before Nigeria’s 2023 election, an audio recording purportedly capturing Atiku Abubakar, the former vice president; Aminu Tambuwal, former governor of Sokoto State; and Ifeanyi Okowa, former governor of Delta State, planning to rig the election made waves across social media. The recording prompted widespread outrage, with many Nigerians urging the Independent National Electoral Commission (INEC) to intervene. However, fact-checkers at TheCable later confirmed the audio was a deepfake. By then, the clip had already spread widely, underscoring the potential of synthetic media to disrupt political discourse during an election season.
As the next presidential election looms in January 2027, the threat of artificial intelligence (AI) in Nigeria’s electoral landscape appears more pronounced than it was in 2023. The previous election was marred by rampant misinformation, including the use of AI-generated content to promote favored candidates. In November 2022, just months before the election, a manipulated video featuring Hollywood celebrities endorsing Peter Obi went viral, followed by fabricated clips of notable figures like Elon Musk and Donald Trump allegedly supporting him.
The adoption of AI in Nigeria has surged, with a recent survey conducted by Google and Ipsos indicating that 88% of Nigerian adults have interacted with AI chatbots, and 39% use AI frequently in their daily lives or work. This rapid uptake can be attributed to the accessibility of AI tools, many of which are available for free or at minimal cost. With 142 million Nigerians having internet access and 85% of them owning smartphones, the potential for AI to shape political conversations is significant.
Experts caution that while AI tools can enhance communication in Nigeria’s diverse electoral landscape—home to over 250 ethnic groups—there are serious risks associated with their use. According to Kola Ijasan, Research Director at Research ICT Africa, AI can improve voter education and participation but also poses threats during elections if left unregulated. Synthetic media, including AI-generated videos and audio, can fabricate misleading content, such as false campaign messages, fake endorsements, and even faked polling results.
Mayowa Tijani, a journalist and fact-checker, notes that the key development since the 2023 elections is not merely the existence of AI-generated content but its enhanced quality and accessibility. While earlier iterations of deepfake technology were detectable, advancements mean that even seasoned fact-checkers may struggle to discern real from manipulated content. “We can expect the kind of AI use we experienced in 2023, but the sophistication has gotten a lot better,” Tijani says. “The real risk lies in the potential for AI-generated results that could be indistinguishable from legitimate documents.”
Despite the potential for misuse, AI researchers acknowledge that producing high-quality deepfakes still poses challenges. Ayomide Odumakinde, an AI researcher, emphasizes that many leading-edge tools require technical expertise and are often behind paywalls. “Most of the high-quality tools that can be used for video deepfakes are locked behind subscriptions,” Odumakinde explains. “Audio deepfake tools are cheaper, and the output is more difficult to detect.”
However, even if high-quality production remains a barrier for many, those with resources and technical skills can exploit AI to spread misinformation effectively. Nigeria’s media landscape is particularly vulnerable due to low media literacy and the polarization of political discourse. This creates an environment ripe for the rapid dissemination of AI-generated misinformation, especially through platforms like WhatsApp, where unverified content can circulate widely without accountability.
As Tijani notes, detecting AI-generated media poses unique challenges. While tools like X’s AI chatbot Grok offer some fact-checking capabilities, they cannot keep pace with the volume of misinformation that AI can generate. “The problem is that when this content is forwarded to platforms like WhatsApp, where no one can verify or track its movement, it becomes even more dangerous,” Tijani adds.
The global political landscape has already witnessed the disruptive potential of AI, with several countries using synthetic media in recent elections. In India, approximately $50 million was reportedly spent on AI-generated content during the general election, while in Pakistan, AI-generated speeches were used to simulate addresses from a detained political leader. In the U.S., AI was utilized in deceptive robocalls, mimicking officials’ voices to mislead voters.
Looking ahead, Ijasan emphasizes that the main concern is not merely the difficulty for voters to distinguish truth from fiction but the erosion of trust in the electoral process itself. As the countdown to the 2027 election begins, experts urge immediate measures to counter misinformation. Early funding for fact-checking initiatives, outreach beyond digital platforms, and proper labeling of AI-generated content in political campaigns are essential to mitigating risks. “Once citizens start questioning the validity of any information, including legitimate findings, democracy itself becomes fragile,” Ijasan warns.