As artificial intelligence (AI) becomes increasingly integral to business decision-making, the implications for accountability, liability, and insurability are shifting from theoretical discussion to pressing reality. For risk managers, insurers, and corporate leaders, the question is now how these new risks should be understood, governed, and transferred, especially as legal, regulatory, and technical frameworks continue to evolve. This piece, drawn from a conversation with Professor Anat Lior as part of Willis Towers Watson’s research into AI liability, underscores the urgency for organizations navigating the complexities of AI-related harm.
The research reveals that the landscape of AI risk is still in flux, with traditional insurance frameworks struggling to categorize these emerging threats. While some insurers exhibit caution—citing a lack of claims data and relying on existing technology or cyber policies—others, particularly startups and innovative departments, are actively developing AI-focused solutions. This divergence indicates that the insurance market is still experimenting, leading to coverage gaps and ambiguities, especially for novel AI applications beyond established sectors like autonomous vehicles.
Alongside these challenges, the regulatory environment for AI remains fragmented and rapidly evolving. The anticipated EU AI Act is expected to significantly alter compliance expectations, yet the practical implications for enforcement and insurance are still uncertain. In the United States, the absence of unified regulation compounds this uncertainty, leaving risk managers to monitor legislative developments and insurer responses closely. Shifting regulations could quickly alter liabilities and policy requirements, necessitating vigilant engagement from businesses.
Against this backdrop, traditional actuarial models are proving inadequate to fully capture the unique risks associated with AI, particularly as novel technologies and use cases emerge. Although some risks can still be evaluated using existing methodologies, others—such as those stemming from generative AI or agentic systems—demand fresh approaches. The conversation highlights that emerging guarantee policies, which focus on performance failures rather than accident-based liabilities, are beginning to surface as one potential response to these challenges. Risk managers are encouraged to reassess whether their existing policies adequately address the distinctive traits of AI risk or whether tailored solutions are required.
The landscape of litigation is also evolving, with high-profile cases concerning generative AI and copyright influencing underwriting and policy design. Insurers are becoming increasingly aware of litigation outcomes, which could set important precedents for coverage and compensation. For risk managers, staying informed about litigation trends is essential to anticipate potential claims scenarios and ensure sufficient coverage.
Insurance has the potential to facilitate safer AI adoption by offering a financial safety net against unforeseen outcomes. Nonetheless, the conversation stresses the necessity for enhanced collaboration among insurers, technology experts, and regulators to ensure that insurance products evolve in tandem with technological advancements. Risk managers might consider advocating for clearer policy language regarding “silent AI” risks and seek affirmative statements from insurers about AI coverage. Engaging in industry forums and cross-sector discussions is crucial for organizations to keep pace with emerging risks and regulatory expectations.
Looking to the future, technologies such as quantum computing are poised to further complicate the risk landscape. The intersection of AI and quantum computing will introduce new uncertainties, demanding agility and informed decision-making from risk managers. The market may evolve toward standalone AI policies or integrate AI risk into broader insurance products, depending on regulatory and market developments.
These insights present a compelling message for both risk managers and insurers: AI liability is not a distant concern but a current and dynamic risk requiring proactive engagement. While the insurance sector is beginning to respond through experimentation and emerging product concepts, much of the landscape remains unsettled, particularly for rapidly evolving AI applications. The collaboration between WTW and Professor Lior exemplifies a commitment to guiding clients through this uncertain terrain, merging legal scholarship, market insight, and practical risk advisory expertise. As AI technologies continue to advance and intersect with other emerging risks, the alignment of legal understanding, insurance solutions, and risk management practices will be essential. Ongoing dialogue among industry stakeholders, academia, and policymakers will be critical to ensuring that insurance remains a meaningful facilitator of safe, resilient, and responsible AI deployment.