Several major insurance companies are seeking to redefine their coverage policies by excluding liabilities related to the use of artificial intelligence (AI) tools. According to a report from the Financial Times, companies like AIG, Great American, and WR Berkley have recently approached U.S. regulators to obtain permission to implement exclusions for AI-related risks in their corporate policies.
This initiative comes amid a surge in AI adoption across businesses, which has produced significant issues, particularly AI “hallucinations”—a phenomenon in which AI systems generate false or invented information, leading to potentially costly errors. For instance, WR Berkley is looking to prohibit claims involving “any actual or alleged use” of AI, encompassing products or services from companies that incorporate AI technologies.
AIG has also expressed concerns regarding the growth of generative AI, labeling it a “wide-ranging technology.” The company indicated that the likelihood of events triggering future claims is expected to rise. While AIG has filed for generative AI exclusions, it clarified that it “has no plans to implement them at this time.” However, gaining approval for these exclusions could provide the company with flexibility to enforce them in the future.
Dennis Bertram, head of cyber insurance for Europe at Mosaic, pointed out that insurers view the outputs of AI as increasingly uncertain, considering them “too much of a black box.” Although Mosaic covers certain types of AI-enhanced software, it has refrained from underwriting risks associated with large language models (LLMs), such as OpenAI’s ChatGPT.
Rajiv Dattani, co-founder of the Artificial Intelligence Underwriting Company, a startup specializing in AI insurance and auditing, raised critical questions about liability in AI usage: “Nobody knows who’s liable if things go wrong.” This uncertainty is compounded by the fact that businesses deploying AI technologies often bear the consequences of its errors. For example, Virgin Money had to apologize when its chatbot reprimanded a customer over the term “virgin,” while Air Canada faced legal repercussions when its chatbot fabricated a discount for a prospective passenger.
As AI adoption becomes more widespread, the ramifications of erroneous outputs can be severe, resulting in flawed decisions, financial losses, and reputational damage. Discussions around accountability are becoming increasingly urgent: if a human delegates responsibility to AI, who is ultimately accountable for any mistakes made? Kelwin Fernandes, CEO of NILG.AI, emphasized this dilemma earlier this year, highlighting the complexities that arise when human oversight is removed from the decision-making process.
Insurers’ hesitance to cover AI-related risks illustrates a significant shift in the insurance landscape, as they grapple with the rapid evolution of AI technologies and their implications. The challenges of accurately assessing AI risks, coupled with the potential for substantial liabilities, have prompted insurance companies to take a cautious approach.
As this trend continues, businesses leveraging AI will need to understand the evolving insurance landscape and the importance of compliance and risk management in their use of these technologies. Insurers and businesses alike will have to navigate this unsettled territory carefully to mitigate AI-related risks more effectively.