Insurers AIG and WR Berkley Seek Exclusions for Corporate AI Risk Coverage

AIG and WR Berkley seek regulatory approval to exclude AI-related risks from corporate coverage, amid rising concerns over costly AI ‘hallucinations’ and liability uncertainty.

Several major insurance companies are seeking to redefine their coverage policies by excluding liabilities related to the use of artificial intelligence (AI) tools. According to a report from the Financial Times, companies like AIG, Great American, and WR Berkley have recently approached U.S. regulators to obtain permission to implement exclusions for AI-related risks in their corporate policies.

This initiative comes amid a surge in AI adoption across various businesses, which has resulted in significant issues, particularly with AI "hallucinations"—a phenomenon where AI outputs deviate from reality, leading to potentially costly errors. For instance, WR Berkley is seeking to exclude claims involving "any actual or alleged use" of AI, encompassing products or services from companies that incorporate AI technologies.

AIG has also expressed concerns regarding the growth of generative AI, labeling it a “wide-ranging technology.” The company indicated that the likelihood of events triggering future claims is expected to rise. While AIG has filed for generative AI exclusions, it clarified that it “has no plans to implement them at this time.” However, gaining approval for these exclusions could provide the company with flexibility to enforce them in the future.

Dennis Bertram, head of cyber insurance for Europe at Mosaic, pointed out that insurers view the outputs of AI as increasingly uncertain, considering them “too much of a black box.” Although Mosaic covers certain types of AI-enhanced software, it has refrained from underwriting risks associated with large language models (LLMs), such as OpenAI‘s ChatGPT.

Rajiv Dattani, co-founder of the Artificial Intelligence Underwriting Company, a startup specializing in AI insurance and auditing, raised critical questions about liability in AI usage: "Nobody knows who's liable if things go wrong." This uncertainty is compounded by the fact that businesses deploying AI technologies often bear the consequences of errors. For example, Virgin Money had to apologize when its chatbot reprimanded a customer over the term "virgin," while Air Canada faced legal repercussions when its chatbot fabricated a discount for a potential passenger.

As the adoption of AI becomes more prevalent, the ramifications of erroneous outputs can be severe, resulting in flawed decisions, financial losses, and damage to reputation. Discussions around accountability are becoming increasingly critical. The question arises: If a human delegates responsibility to AI, who is ultimately accountable for any mistakes made? Kelwin Fernandes, CEO of NILG.AI, emphasized this dilemma earlier this year, highlighting the complexities involved when human oversight is removed from the decision-making process.

Insurers’ hesitance to cover AI-related risks illustrates a significant shift in the insurance landscape, as they grapple with the rapid evolution of AI technologies and their implications. The challenges of accurately assessing AI risks, coupled with the potential for substantial liabilities, have prompted insurance companies to take a cautious approach.

As this trend continues, it will be essential for businesses leveraging AI to understand the evolving insurance landscape and the importance of compliance and risk management regarding their use of these technologies. Insurers and businesses alike will need to navigate these uncharted waters with careful consideration to mitigate risks associated with AI more effectively.

Written By
AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.