Start with two numbers: 62% of firms now have cyber insurance, up from 49% in 2024. That is real progress in adoption. Yet at the same time, security researchers report that attacks powered by artificial intelligence are becoming the most significant emerging threat of 2025.
The disconnect is critical: organizations are buying cyber insurance to protect against emerging threats, but most policies remain "silent" on AI-related exposures. This means organizations believe they're protected when they may actually have significant coverage gaps.
For years, cyber attacks followed predictable patterns: research a target, craft a lure, deliver malware, collect the payoff.
Each attack required significant manual work. A phishing campaign might target thousands of people in hopes that a small percentage would click. A ransomware attack would follow a standard playbook. The attacker's success rate was limited by the number of hours they could dedicate to each campaign.
AI changes this equation fundamentally.
Traditional phishing emails are often obvious: poor grammar, generic greetings, suspicious sender addresses, obvious requests for credentials.
AI-powered phishing is different: the grammar is flawless, the message is personalized to the recipient, and the sender context is convincing because the attacker has already digested the target's public footprint.
The result: phishing success rates are increasing dramatically. Attacks that required hours of manual research now take minutes with AI assistance.
Ransomware has traditionally been crafted by human developers—individuals with deep technical knowledge who code malware, test it, and deploy it.
AI is changing this: code-generation tools lower the skill needed to produce working malware, and attackers can use AI to tune payloads for evasion, stronger encryption, and faster propagation.
The result: ransomware is becoming more sophisticated and more difficult to defend against.
Social engineering has always relied on human psychology: understanding what will convince someone to take a specific action.
AI amplifies this: attackers can mine a target's public footprint at scale and generate persuasive, personalized pretexts in minutes rather than hours.
One of the most dangerous AI attack vectors involves third-party compromise: attackers use AI to find and exploit weaknesses in vendors and software dependencies, then cascade into every downstream customer.
The result: massive supply chain compromises affecting thousands of organizations simultaneously.
Here's where cyber insurance policies fall short: most were written before AI-enhanced attacks became prevalent. Review the average cyber policy and you'll find coverage for data breach response, ransomware and extortion, business interruption, forensic investigation, and third-party liability.
But look for specific mention of "AI-powered attacks" or "AI-enhanced threats" and you won't find it.
This creates several coverage questions:
Coverage Question #1: Is an AI-generated phishing attack covered differently than a traditional phishing attack?
If an AI-generated spear-phishing email successfully compromises a system, is that covered? Some policies might categorize this as a "user error" (social engineering is not always covered). Others might cover it. The policy language is typically ambiguous.
Coverage Question #2: What about AI-optimized ransomware?
Ransomware coverage is common, but what if the ransomware was generated or optimized using AI? Does this change coverage determination? Some underwriters have begun arguing that certain types of AI-generated malware might fall outside traditional ransomware coverage.
Coverage Question #3: What about cost escalation?
AI-powered attacks often result in higher costs than traditional attacks: faster propagation means more systems compromised, more sophisticated encryption means longer recovery time, more advanced lateral movement means more extensive forensics required. Do traditional coverage limits account for the higher costs of AI-powered incidents?
Coverage Question #4: What about third-party AI attacks?
Supply chain attacks increasingly leverage AI. If a vendor is compromised via an AI-powered attack and that compromise cascades to your organization, what coverage applies? Third-party cyber liability coverage often has significant exclusions and limitations—and most were written before AI-powered supply chain attacks became common.
Cyber insurance pricing has historically been based on historical loss data: past claims experience, industry and revenue benchmarks, and point-in-time questionnaires about security controls.
All of this is based on the assumption that the risk landscape is relatively stable—that tomorrow's threat profile resembles today's.
But AI changes this. The threat landscape is accelerating. Attacks that were theoretical last year are common today. Attack capabilities are doubling every 12-18 months as AI models improve.
Traditional cyber policies, priced based on historical data, systematically underestimate the cost of AI-powered attacks. Underwriters may have priced ransomware coverage based on average recovery costs of $500K-$2M. But an AI-optimized ransomware attack might result in $5M-$10M in total losses due to faster propagation and more complex recovery requirements.
This creates a two-sided problem: organizations hit by AI-powered attacks face losses that exceed their coverage limits, while underwriters face claim costs that exceed the premiums they collected.
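To make that pricing gap concrete, here's a deliberately simplified sketch in Python. The severity figures are the midpoints of the ranges cited above; the 5% annual attack frequency and the 30% expense loading are illustrative assumptions, not market data, and a real rating model would be far more involved.

```python
# Toy illustration, not an actuarial model: how a shift in loss severity
# erodes premium adequacy when pricing is anchored to historical data.
# The attack frequency and expense loading are assumed for illustration only.

def pure_premium(annual_attack_probability: float, avg_loss: float) -> float:
    """Expected annual loss = frequency x severity."""
    return annual_attack_probability * avg_loss

historical_severity = 1_250_000   # midpoint of the $500K-$2M range cited above
ai_era_severity = 7_500_000       # midpoint of the $5M-$10M range cited above
attack_probability = 0.05         # assumed 5% annual chance of a covered ransomware event

# Premium priced on historical experience, with an assumed 30% expense/profit loading
charged_premium = pure_premium(attack_probability, historical_severity) * 1.30

# Expected annual loss if the insured is actually facing AI-optimized attacks
expected_loss_ai = pure_premium(attack_probability, ai_era_severity)

print(f"Premium charged (historical basis): ${charged_premium:,.0f}")
print(f"Expected loss (AI-era severity):    ${expected_loss_ai:,.0f}")
print(f"Shortfall per policy:               ${expected_loss_ai - charged_premium:,.0f}")
```

Even in this crude example, the premium collected covers only a fraction of the expected loss once severity shifts to AI-era levels.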
Modern cyber underwriting requires understanding:
What is the current threat landscape? What types of attacks are most common? Which are growing fastest? How are threat actors using AI?
What emerging AI attack vectors are on the horizon? Researchers are currently working on AI capabilities that, once deployed by attackers, could revolutionize cyber attacks again. What should underwriters be watching for?
How are different organizations vulnerable to AI attacks? Some organizations are more vulnerable to AI-powered phishing (those with less-sophisticated security training), others to AI-optimized malware (those running legacy systems), others to AI-enhanced supply chain attacks (those with extensive vendor relationships).
What are peers doing? How are other underwriters adjusting cyber policy language to address AI threats? What coverage enhancements are leading carriers implementing?
What controls actually reduce AI attack risk? Traditional security controls (firewalls, antivirus, user training) help, but do they effectively mitigate AI-powered attacks? What new controls are emerging to specifically address AI-enhanced threats?
Getting complete, current answers to these questions is practically impossible with manual research. Yet without this intelligence, cyber underwriters are pricing risk based on outdated assumptions.
The cyber insurance industry is beginning to recognize the AI threat: carriers are tightening underwriting requirements, narrowing coverage terms, adding exclusions for AI-related exposures, and raising premiums.
But these reactions are largely defensive. The industry is attempting to reduce exposure to AI-powered attacks rather than developing solutions to help organizations manage them.
Rather than restrictive coverage that excludes AI-related exposures, organizations need:
Explicit AI Coverage: Clear language defining what AI-related cyber incidents are covered, what coverage limits apply, and what conditions must be met.
Enhanced Detection Requirements: Threat detection specifically designed to catch AI-powered attacks, including behavioral analysis, anomaly detection, and threat intelligence integration (a minimal detection sketch follows this list).
Incident Response Readiness: Requirements for incident response planning and testing that specifically addresses AI-powered attack scenarios.
Supply Chain Intelligence: Requirements for understanding and monitoring third-party cyber risk, especially as AI-powered supply chain attacks become more common.
Continuous Risk Assessment: Rather than annual renewal, continuous assessment of emerging threats and dynamic coverage adjustments.
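As a rough illustration of what "behavioral analysis and anomaly detection" can mean in practice, here is a minimal sketch: baseline each user's normal activity, then flag statistically unusual spikes. The login-count data, z-score threshold, and user names are assumptions for illustration; production systems would use far richer signals.

```python
# Minimal behavioral anomaly detection sketch: build a per-user baseline from
# historical daily login counts, then flag users whose activity today deviates
# sharply from their own norm.
from statistics import mean, stdev

def build_baselines(history: dict[str, list[int]]) -> dict[str, tuple[float, float]]:
    """history maps user -> daily login counts; returns (mean, stdev) per user."""
    return {user: (mean(c), stdev(c)) for user, c in history.items() if len(c) >= 2}

def flag_anomalies(today: dict[str, int], baselines: dict[str, tuple[float, float]],
                   z_threshold: float = 3.0) -> list[str]:
    """Return users whose activity today deviates sharply from their baseline."""
    flagged = []
    for user, count in today.items():
        if user not in baselines:
            continue
        mu, sigma = baselines[user]
        if sigma == 0:
            continue  # no observed variation; skip rather than divide by zero
        if (count - mu) / sigma > z_threshold:
            flagged.append(user)
    return flagged

history = {"alice": [4, 5, 6, 5, 4], "bob": [2, 3, 2, 3, 2]}
print(flag_anomalies({"alice": 5, "bob": 40}, build_baselines(history)))  # ['bob']
```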
From an underwriting perspective, AI-powered attacks represent both a challenge and an opportunity:
The Challenge: Traditional risk assessment and pricing models are increasingly inadequate. Underwriters who continue using legacy approaches will see deteriorating claims experience.
The Opportunity: Underwriters who understand AI-powered attack patterns and can assess organizational vulnerabilities to these attacks will be able to price cyber risk more accurately, select better risks, and build more sustainable portfolios.
The question for cyber underwriters is: Will you attempt to exit cyber insurance due to AI-related losses? Or will you invest in the intelligence and expertise needed to understand and price AI-powered cyber risks?
This is where the irony becomes clear: the best defense against AI-powered cyber attacks may be AI-powered cyber underwriting.
An intelligent system designed specifically for cyber insurance could:
Synthesize Real-Time Threat Intelligence: Continuously monitor emerging threats, identify which ones are AI-powered, and assess the implications for underwritten risks.
Identify Vulnerability Patterns: Recognize which organizations, industries, and risk profiles are most vulnerable to AI-powered attacks.
Assess Control Effectiveness: Analyze which security controls and practices are most effective at mitigating AI-powered attack risk.
Price Risk Dynamically: Adjust pricing continuously rather than annually, as changes in the threat landscape warrant (see the sketch after this list).
Optimize Coverage Terms: Develop policy language that explicitly addresses AI-powered attack scenarios while maintaining clarity for all parties.
Support Claims Management: When AI-powered attacks occur, help claims professionals quickly assess coverage, identify policy language that applies, and coordinate response.
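As a toy illustration of continuous, threat-informed pricing: the threat index, sensitivity, and uplift cap below are assumptions for illustration, not an actual rating plan.

```python
# Toy sketch of continuous, threat-informed premium adjustment.
# The threat index, sensitivity, and uplift cap are illustrative assumptions.

def adjusted_premium(base_premium: float, threat_index: float,
                     sensitivity: float = 0.5, max_uplift: float = 0.40) -> float:
    """
    base_premium : premium from the annual rating basis
    threat_index : 0.0 = baseline threat environment, 1.0 = severe elevation
                   (e.g., derived from current threat-intelligence feeds)
    Returns the base premium scaled by a capped, threat-driven uplift.
    """
    uplift = min(max_uplift, sensitivity * threat_index)
    return base_premium * (1.0 + uplift)

# Re-rate quarterly (or more often) as the threat landscape shifts
for quarter, index in [("Q1", 0.1), ("Q2", 0.3), ("Q3", 0.8)]:
    print(quarter, f"${adjusted_premium(50_000, index):,.0f}")
```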
The cyber insurance market is at a critical juncture. Nearly four in ten organizations still don't have cyber insurance, and those that do often have inadequate coverage for AI-powered threats. Meanwhile, AI-enhanced attacks are accelerating.
The insurance industry has an opportunity to lead in this transition—to develop products and underwriting practices that help organizations navigate AI-powered cyber risk. But this requires a fundamental shift from traditional approaches to new methodologies built on real-time threat intelligence and continuous risk assessment.
Organizations should demand from their cyber insurers explicit AI coverage language, limits that reflect the higher cost of AI-powered incidents, and clear guidance on the controls that actually reduce AI attack risk.
Underwriters should recognize that legacy pricing models and annual, point-in-time assessments are no longer adequate: understanding and pricing AI-powered cyber risk requires real-time threat intelligence and continuous risk assessment.
Is your cyber policy ready for AI-powered threats? Visit https://sagesure.io to explore how AI-powered intelligence can help underwriters understand and price cyber risk in the age of AI-enhanced attacks.