Insurers AIG and WR Berkley Seek Exclusions for Corporate AI Risk Coverage

AIG and WR Berkley seek regulatory approval to exclude AI-related risks from corporate coverage, amid rising concerns over costly AI ‘hallucinations’ and liability uncertainty.

Several major insurance companies are seeking to redefine their coverage policies by excluding liabilities related to the use of artificial intelligence (AI) tools. According to a report from the Financial Times, companies like AIG, Great American, and WR Berkley have recently approached U.S. regulators to obtain permission to implement exclusions for AI-related risks in their corporate policies.

This initiative comes amid a surge in AI adoption across businesses, which has brought significant problems, particularly AI “hallucinations,” in which models confidently produce false or fabricated output, leading to potentially costly errors. WR Berkley, for instance, is seeking to exclude claims involving “any actual or alleged use” of AI, including products or services from companies that incorporate AI technologies.

AIG has also expressed concerns regarding the growth of generative AI, labeling it a “wide-ranging technology.” The company indicated that the likelihood of events triggering future claims is expected to rise. While AIG has filed for generative AI exclusions, it clarified that it “has no plans to implement them at this time.” However, gaining approval for these exclusions could provide the company with flexibility to enforce them in the future.

Dennis Bertram, head of cyber insurance for Europe at Mosaic, pointed out that insurers view the outputs of AI as increasingly uncertain, considering them “too much of a black box.” Although Mosaic covers certain types of AI-enhanced software, it has refrained from underwriting risks associated with large language models (LLMs), such as OpenAI’s ChatGPT.

Rajiv Dattani, co-founder of the Artificial Intelligence Underwriting Company, a startup specializing in AI insurance and auditing, raised a critical question about liability in AI usage: “Nobody knows who’s liable if things go wrong.” The uncertainty is compounded by the fact that businesses deploying AI often bear the consequences of its errors. Virgin Money, for example, had to apologize after its chatbot reprimanded a customer for using the word “virgin,” while Air Canada faced legal repercussions when its chatbot fabricated a discount for a prospective passenger.

As AI adoption becomes more widespread, the consequences of erroneous outputs can be severe, resulting in flawed decisions, financial losses, and reputational damage, and discussions around accountability are becoming increasingly urgent. If a human delegates responsibility to AI, who is ultimately accountable for the mistakes it makes? Kelwin Fernandes, CEO of NILG.AI, highlighted this dilemma earlier this year, pointing to the complications that arise when human oversight is removed from the decision-making process.

Insurers’ hesitance to cover AI-related risks illustrates a significant shift in the insurance landscape, as they grapple with the rapid evolution of AI technologies and their implications. The challenges of accurately assessing AI risks, coupled with the potential for substantial liabilities, have prompted insurance companies to take a cautious approach.

As this trend continues, businesses that rely on AI will need to understand the evolving insurance landscape and the importance of compliance and risk management in their use of these technologies. Insurers and businesses alike will have to navigate this uncharted territory carefully to manage AI-related risks effectively.

