A centuries-old industry known for enhancing safety in automobiles and construction is now turning its attention to the burgeoning field of artificial intelligence (AI). A select group of insurance companies, including startups and established firms, is pioneering specialized coverage aimed at mitigating the risks associated with AI agents: autonomous systems that are increasingly taking over roles traditionally held by human workers in sectors such as customer support, recruitment, and travel planning.
As these insurers explore a lucrative new market, they are betting that coverage itself can drive regulation and standardization in a technology that remains in its infancy but is widely viewed as the next step in business evolution. Michael von Gablenz, head of the AI insurance division at the multinational insurer Munich Re, noted, “When we think about car insurance, for example, the broad adoption of the safety belt was really something driven by the demands of insurance.” He argues that, as with past technologies, insurance can play a pivotal role in fostering safety in AI.
The recent failures of generative AI have led to troubling headlines and legal disputes, with claims from families who assert the technology has caused harm to their loved ones. With AI tools becoming increasingly ubiquitous, users are faced with the dilemma of relying on self-regulation by companies or awaiting comprehensive government oversight. Some insurance providers are eager to step into this uncharted territory. While many traditional insurers remain hesitant to cover AI-related risks, a few companies are already offering specialized insurance, providing significant payouts in case of failures.
These insurers believe that introducing coverage can serve as a market-driven incentive for AI developers to enhance the safety of their products. Rajiv Dattani, co-founder of the Artificial Intelligence Underwriting Company (AIUC), emphasized that voluntary commitments from companies may not adequately address the risks posed by AI. He sees insurance as a “neat middle-ground solution,” offering a form of third-party oversight that doesn’t solely rely on governmental measures. “Insurers will be incentivized to track accurately: What are the losses? How are they happening? How frequent are they? How severe are they?” Dattani elaborated. “We think insurers, because they’re paying, will end up leading a lot of this research or at least funding a lot of it.”
Businesses utilizing AI agents face a wide array of risks, including data breaches, biased decision-making, legal liabilities, and reputational damage. For instance, an AI chatbot could leak sensitive customer data or make discriminatory hiring choices, while recent lawsuits have highlighted the potential for AI to encourage self-harm among vulnerable individuals. According to a survey by the Geneva Association, over 90% of businesses seek insurance protection against generative AI risks. However, the absence of auditable standards for AI safety leaves many insurers uncertain about how to provide this coverage.
Establishing Standards for Emerging Risks
A recent report by Ernst & Young revealed that 99% of 975 surveyed businesses experienced financial losses due to AI-related risks, with nearly two-thirds reporting losses exceeding $1 million. Tech companies such as OpenAI, Anthropic, and Character.AI have faced significant legal challenges in recent years, highlighting the urgent need for industry-specific insurance solutions.
In July, AIUC launched the world’s first certification for AI agents, designed to establish an auditable benchmark for evaluating agent vulnerabilities. This certification, termed AIUC-1, encompasses six key areas: security, safety, reliability, data privacy, accountability, and societal risks. Companies can voluntarily undergo testing against this standard to enhance customer trust and provide insurers with a means to assess whether an AI product meets their insurability criteria.
“We’re in an era now where the losses are really happening; that’s one thing. The second thing is that insurers are now actually starting to exclude AI from their existing policies,” Dattani stated. “It feels pretty certain that we’re going to need some solution here, and we need people with skin in the game who can provide third-party oversight. That’s where we see the role of insurance.”
AIUC’s insurance policies currently cover losses up to $50 million caused by AI agents, including hallucinations, intellectual property infringement, and data leaks. Dattani draws inspiration from Benjamin Franklin’s early fire risk mitigation strategies, which laid the groundwork for modern building safety standards.
Increasing Demand Amid Caution
In April, the Toronto-based AI risk assessment company Armilla began offering specialized insurance for customers employing AI agents, providing comprehensive coverage for performance shortcomings, legal liabilities, and financial risks tied to large-scale AI adoption. CEO Karthik Ramakrishnan reported a surge in demand from a diverse range of industries, stating, “AI is one of the most democratic technologies. It’s getting adopted by every type of company, every type of domain, from retail, manufacturing, banking.”
While some insurers have opted to exclude AI coverage outright due to uncertainty, Armilla is leveraging its expertise in evaluating AI vulnerabilities to fill this gap. Ramakrishnan predicts the AI insurance market could thrive within a few years; one estimate puts it at $4.8 billion by 2032, a figure he considers an underestimate.
As the landscape of AI continues to evolve, the role of insurance in providing a safety net for businesses utilizing AI technology appears increasingly vital. With ongoing developments and growing awareness of associated risks, the conversation around AI liability and insurance is only beginning to unfold.