As deepfake technology spreads, it is increasingly offered as a service by autonomous AI systems capable of executing sophisticated fraud schemes, from synthetic job candidates who successfully navigate live video interviews to romance scams that deplete victims’ retirement accounts. The rise of deepfakes presents significant challenges for businesses, not only in content moderation but also in vendor risk management, incident response, and insurance coverage.
Since 2022, 46 states have enacted deepfake legislation, culminating in the federal TAKE IT DOWN Act, signed into law in May 2025. In the EU, the AI Act’s transparency requirements for AI-generated content take effect in August 2026. The result is a fragmented legal landscape that forces companies to develop jurisdiction-specific compliance strategies.
The threat landscape has evolved markedly. Engineering firm Arup lost $25 million in January 2024 after an employee unknowingly joined a video call with a deepfaked CFO and other AI-generated colleagues and then authorized 15 wire transfers before the scam was detected. According to Experian’s 2026 Fraud Forecast, deepfakes that “outsmart HR” are a top emerging threat, with synthetic job candidates now able to pass interviews in real time. Notably, Pindrop Security reported that more than one-third of the 300 job applicant profiles it analyzed were entirely fabricated, pairing AI-generated resumes with deepfake video interviews.
These trends are borne out by the forecasts: Gartner projects that one in four job candidate profiles globally will be fake by 2028, and Deloitte estimates that generative AI could drive fraud losses in the United States to $40 billion by 2027.
In response to these threats, state legislatures have enacted 169 deepfake laws since 2022, with 146 new bills introduced in 2025 alone. Political deepfakes draw the strictest scrutiny: Texas’s Election Code criminalizes creating and publishing deceptive deepfake videos intended to influence an election within 30 days of that election, although some provisions have faced constitutional challenges. Minnesota extends coverage to 90 days before a political party convention and imposes escalating felony penalties for repeat offenses. Virginia and Tennessee have also introduced laws addressing the misuse of intimate imagery and of voice rights in the context of AI.
The EU AI Act’s transparency obligations, which apply from August 2, 2026, establish a comprehensive regulatory framework requiring providers to ensure that AI-generated content is marked and identifiable as such. Violations can draw penalties of up to €15 million or 3% of a company’s global annual turnover. Federally, the TAKE IT DOWN Act criminalizes the publication of non-consensual intimate deepfakes, imposing penalties of up to two years in prison and requiring platforms to remove such content within 48 hours of a valid takedown request.
Despite these regulatory advances, significant gaps remain in insurance coverage for deepfake-enabled fraud. The “voluntary parting” exclusion found in standard crime and fidelity policies is a major barrier: coverage typically does not apply when a deceived employee authorizes a transaction, even one induced by sophisticated impersonation. Coalition’s Deepfake Response Endorsement, introduced in December 2025, marks the first explicit coverage for deepfake incidents and includes legal support and crisis communications. Even so, many firms remain exposed, and Swiss Re has warned that deepfakes could increasingly facilitate sophisticated cyberattacks, driving up cyber insurance losses.
To mitigate these risks, businesses should obtain explicit social engineering fraud endorsements and negotiate higher limits, since typical sublimits of $100,000 to $250,000 are increasingly inadequate for losses from AI-scale fraud. Organizations should also negotiate coverage so that voluntary parting exclusions do not apply to payments induced by deepfake impersonation, and so that definitions of computer fraud explicitly encompass AI-generated synthetic media.
Emerging industry standards will likely shape legal expectations regarding governance. The Coalition for Content Provenance and Authenticity (C2PA) standard, supported by tech giants including Adobe, Microsoft, Google, and OpenAI, is advancing toward international standardization and aims to provide cryptographic provenance tracking of content. Google’s SynthID has already watermarked over 10 billion pieces of content, employing pixel-level signals designed to endure compression and editing. Organizations that fail to implement such authentication technologies may face increased negligence claims following deepfake-enabled fraud, particularly as standards become more widely adopted.
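To illustrate the principle these standards rely on, the Python sketch below signs a hash of a media file at creation time and verifies it later, so any subsequent edit invalidates the provenance claim. This is a conceptual sketch only, not the C2PA manifest format or SynthID’s watermarking scheme; the function names and key handling are assumptions for illustration, using the widely available cryptography package.

```python
# Conceptual sketch of cryptographic content provenance: sign a digest of the
# media bytes at creation time; any later edit breaks the signature check.
# Illustrative only -- real standards such as C2PA embed a richer signed
# manifest (creator, tool, edit history) in the file itself.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_content(media_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Sign a SHA-256 digest of the content; the signature travels with the file."""
    digest = hashlib.sha256(media_bytes).digest()
    return private_key.sign(digest)


def verify_content(media_bytes: bytes, signature: bytes, public_key) -> bool:
    """Return True only if the content still matches the creator's signature."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    original = b"raw video bytes..."
    sig = sign_content(original, key)
    print(verify_content(original, sig, key.public_key()))              # True
    print(verify_content(original + b"tamper", sig, key.public_key()))  # False
```

The same verification principle underlies C2PA’s embedded manifests, which is why missing or broken provenance can serve as a useful signal in fraud investigations.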
As organizations grapple with these evolving risks, immediate compliance actions are crucial. This includes conducting vendor due diligence on all AI tools capable of generating synthetic content, implementing multi-factor authentication for sensitive financial transactions, and developing deepfake-specific incident response plans. The growing regulatory and risk landscape surrounding deepfakes underscores the urgent need for businesses to adapt and protect themselves from the potential fallout of this technology.
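To make the transaction-verification recommendation concrete, the following hypothetical Python sketch gates high-value wire transfers behind an out-of-band callback and an independent second approver, so a convincing video call alone can never release funds. Every name, field, and threshold here is invented for illustration and is not drawn from any real banking or ERP system.

```python
# Hypothetical policy gate for sensitive transfers: above a threshold, a video
# call is never sufficient; require an out-of-band callback to a number on file
# plus an independent second approver. All names and thresholds are illustrative.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TransferRequest:
    requester: str                  # employee entering the payment
    beneficiary: str                # receiving account
    amount_usd: float
    video_call_verified: bool       # the channel deepfakes can convincingly fake
    callback_verified: bool         # callback to a known number, not one supplied in the request
    second_approver: Optional[str]  # independent human approver, if any


REVIEW_THRESHOLD_USD = 50_000       # illustrative policy threshold


def release_allowed(req: TransferRequest) -> bool:
    """Apply the dual-channel rule; a video call alone never clears a large transfer."""
    if req.amount_usd < REVIEW_THRESHOLD_USD:
        return req.callback_verified or req.video_call_verified
    return (
        req.callback_verified
        and req.second_approver is not None
        and req.second_approver != req.requester
    )


if __name__ == "__main__":
    # The Arup-style scenario: a convincing video call and nothing else.
    print(release_allowed(TransferRequest(
        requester="clerk", beneficiary="vendor-x", amount_usd=2_500_000,
        video_call_verified=True, callback_verified=False, second_approver=None,
    )))  # False -- the transfer is held for out-of-band verification
```

A control of this kind targets the failure mode in the Arup incident described above, where a convincing video call was the only verification channel before 15 transfers were released.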















































