Insurers: AI Poses an Uninsurable Risk

Summary
– Major insurers are seeking regulatory approval to exclude AI-related liabilities from corporate insurance policies.
– Insurers describe AI models as “too much of a black box,” making their outputs unpredictable and risky.
– Recent incidents include Google’s AI falsely accusing a solar company, leading to a $110 million lawsuit, and Air Canada being forced to honor a refund policy invented by its chatbot.
– Fraudsters used AI to clone an executive’s likeness and steal $25 million during a deceptive video call.
– Insurers fear systemic risk from thousands of simultaneous claims caused by a single AI error more than individual large payouts.
The growing adoption of artificial intelligence is creating a troubling new challenge for the insurance industry, as major providers now argue that certain AI-related risks have become uninsurable. According to recent reports, insurers such as AIG, Great American, and WR Berkley are seeking approval from U.S. regulators to explicitly exclude AI-related liabilities from standard corporate insurance policies. One underwriter described the core of the problem to the Financial Times: the inner workings of many advanced AI systems remain a “black box,” making their outputs unpredictable and their risks difficult to quantify.
This industry-wide apprehension is not without foundation. A series of high-profile incidents highlights the real-world financial dangers posed by AI errors. For instance, Google’s AI Overview feature recently produced a false statement about a solar company, resulting in a $110 million lawsuit filed in March. In another case, Air Canada was legally compelled to honor a refund policy that had been entirely fabricated by its own customer service chatbot. Perhaps most alarming, criminals used a digitally cloned likeness of a senior executive in a video call to deceive staff at the London-based firm Arup, successfully stealing $25 million.
However, the primary fear for insurers extends beyond these individual, albeit costly, events. The true nightmare scenario is systemic risk. The industry is structured to absorb a single large financial loss affecting one company. What it is not equipped to handle is a single, widespread AI failure that triggers thousands of simultaneous claims across countless businesses. As an executive from the global professional services firm Aon explained, a $400 million loss to one client is manageable; a catastrophic error from an agentic AI system causing 10,000 separate losses at once is not. This potential for correlated, mass-trigger events poses a fundamental challenge to the very model of risk pooling on which insurance relies.
(Source: TechCrunch)