EthiCompass

AI Compliance & Audit for Insurance

Every Algorithmic Decision Is a Regulatory Event.
Prove Yours Are Fair.

Insurers face simultaneous AI enforcement from EU regulators (AI Act, EIOPA, DORA), US state commissioners (NAIC pilot, Colorado AI Act), and emerging Latin American frameworks. EthiCompass evaluates every underwriting, claims, and pricing AI system across 7 scientifically validated dimensions — producing the immutable evidence your examiners, your board, and your policyholders require.

Explore Enterprise Platform →

Three Continents. Simultaneous Enforcement.
No Grace Period.

Insurance is the most enforcement-ready sector for AI compliance. Unlike industries where regulation is still phasing in, insurers face active examinations, filed lawsuits, and published supervisory expectations — right now.

European Union

EU AI Act + EIOPA + Solvency II + DORA

Life and health insurance AI classified as high-risk under Annex III. EIOPA's August 2025 Opinion mandates two-step impact assessments and board-level AI governance. DORA requires operational resilience for all AI/ICT systems. Solvency II integrates AI model risk into ORSA.

Deadline: 2 August 2026. Max penalty: €35M or 7% of global annual turnover, whichever is higher.

United States

NAIC + State Laws + Class Action Exposure

23 states adopted the NAIC Model Bulletin. 12-state AI Evaluation Tool pilot running Jan–Sep 2026. Colorado AI Act takes effect June 2026. California, New York, Texas, and Florida have enacted insurance-specific AI laws.

Active litigation: $10M–$100M+ class action settlements.

Latin America

Brazil Bill 2338 + LGPD + SUSEP

Insurance AI classified as high-risk under Brazil's AI regulation bill. LGPD classifies health insurance data as sensitive — explicit consent required. ANPD enforcement escalating: €12M+ in fines in Q1 2025 alone.

LGPD penalty: Up to 2% of Brazilian revenue, capped at R$50M per infraction.

The Enforcement Actions Reshaping
Insurance AI

These are not future risks. These are active lawsuits, regulatory investigations, and supervisory actions — shaping how every insurer must govern AI today.

CLASS ACTION — CLAIMS DISCRIMINATION

State Farm AI Lawsuit (Oct 2024)

Class action alleging discriminatory AI-driven claims processing that systematically disadvantaged Black policyholders — producing lower settlements and higher denial rates through algorithmic bias embedded in claims triage models.

Dimension: Discrimination & Fairness

REGULATORY ACTION — ALGORITHMIC CLAIMS DENIAL

Cigna AI Claims Denial Case

Investigation into Cigna's use of an AI system that allegedly rubber-stamped claim denials at scale — reviewing and rejecting claims in bulk without individualized assessment, violating fair claims practices obligations.

Dimension: Explainability & Transparency

SUPERVISORY EXAMINATION — NAIC MULTISTATE PILOT

12-State AI Evaluation Tool (Jan–Sep 2026)

The NAIC's multistate pilot program deploys a standardized AI Evaluation Tool across 12 state insurance departments simultaneously — creating the first coordinated examination framework for insurer AI systems.

Dimension: Regulatory Compliance

EMERGING THREAT — SYNTHETIC VOICE FRAUD

475% Increase in Synthetic Voice Fraud (Pindrop, 2024)

Pindrop's 2024 Voice Intelligence Report documents a 475% increase in deepfake and synthetic voice fraud targeting insurance claims lines — exposing catastrophic gaps in AI-driven identity verification and claims intake systems.

Dimension: Robustness & Resilience

7 Dimensions. Scientifically Validated.
Built for Regulated Insurance.

Our evaluation framework was developed by PhD researchers and validated through peer-reviewed publications. Each dimension maps to specific regulatory requirements across EU, US, and Latin American insurance frameworks.

01

Discrimination & Fairness

Detects demographic bias in underwriting algorithms, claims adjudication patterns, pricing models, and coverage decisions.

EU AI Act Art 10 + FRIA, NAIC, NY DFS, Colorado AI Act

02

Toxicity & Harmful Language

Flags inappropriate language in AI-generated policyholder communications, claims correspondence, and adverse action notices.

IDD, FCRA adverse actions, LGPD transparency

03

Explainability & Transparency

Ensures every AI-driven underwriting decision, claims determination, and pricing adjustment includes traceable reasoning.

EU AI Act Art 13, EIOPA, California SB 1120, Brazil 2338

04

Privacy & Data Protection

Detects PII exposure in AI outputs, enforces data minimization in actuarial models, and verifies policyholder data doesn't leak across boundaries.

GDPR + AI Act, state privacy laws, LGPD, DORA

05

Factuality & Accuracy

Identifies hallucinated information in AI-generated claims assessments, coverage determinations, and regulatory filings.

Solvency II actuarial function, NAIC, FCRA

06

Robustness & Resilience

Tests AI stability under adversarial inputs, data drift, and edge cases across diverse policyholder populations and market conditions.

EU AI Act Art 15, DORA, NAIC AIS Program

07

Regulatory Compliance

Maps AI behavior to all applicable regulations — EU AI Act, EIOPA, Solvency II, NAIC, state laws, LGPD — producing multi-jurisdictional evidence.

All frameworks: integrated mapping

Where Insurance AI
Compliance Risk Lives

Insurers deploy AI across the entire policy lifecycle. Each stage creates distinct compliance obligations across multiple jurisdictions.

Underwriting & Risk Assessment

AI systems that evaluate insurability and assign risk scores are explicitly classified as high-risk under EU AI Act Annex III. Every underwriting model must demonstrate fairness across protected groups and produce auditable decision trails.

Compliance requirements

  • EU: High-risk classification under Annex III — full conformity assessment, FRIA, and EIOPA two-step impact assessment required
  • US: NAIC Model Bulletin compliance, state-specific underwriting fairness laws, Colorado AI Act disparate impact testing
  • Brazil: Bill 2338 high-risk classification, LGPD sensitive data consent for health insurance underwriting

What EthiCompass evaluates

EthiCompass evaluates underwriting AI for demographic parity across protected groups, tests for proxy discrimination in risk factors, and produces the immutable audit trail required by both EU and US regulators.
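To make the proxy-discrimination screen described above concrete, here is a minimal, hedged sketch — not EthiCompass's actual implementation — that flags numeric rating factors whose distribution differs sharply between protected groups. The records, field names (`group`, `zip_risk`, `age`), and the 0.5 threshold are illustrative assumptions.

```python
from statistics import mean, pstdev

def proxy_risk(records, protected_key, feature_keys, threshold=0.5):
    """Crude proxy-variable screen: flag features whose standardized
    mean difference between protected groups exceeds the threshold.
    Returns {feature: standardized mean difference} for flagged features."""
    flagged = {}
    for feat in feature_keys:
        a = [r[feat] for r in records if r[protected_key] == 1]
        b = [r[feat] for r in records if r[protected_key] == 0]
        pooled = pstdev([r[feat] for r in records]) or 1.0
        smd = abs(mean(a) - mean(b)) / pooled  # standardized mean difference
        if smd >= threshold:
            flagged[feat] = round(smd, 3)
    return flagged

# Hypothetical records: "zip_risk" tracks group membership; "age" does not.
data = [
    {"group": 1, "zip_risk": 0.9, "age": 40},
    {"group": 1, "zip_risk": 0.8, "age": 36},
    {"group": 0, "zip_risk": 0.2, "age": 39},
    {"group": 0, "zip_risk": 0.3, "age": 37},
]
print(proxy_risk(data, "group", ["zip_risk", "age"]))  # flags only "zip_risk"
```

A production screen would use larger samples and categorical-association measures (e.g. Cramér's V) rather than a single mean-difference statistic, but the shape of the check is the same: correlate each rating factor with protected-group membership and flag the outliers.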

Claims Adjudication & Processing

AI systems that triage, assess, and recommend claims decisions create direct policyholder harm when they operate with bias or without transparency. Claims AI is the primary target of both class action litigation and regulatory examination.

Compliance requirements

  • EU: AI Act transparency obligations, EIOPA fair claims expectations, IDD policyholder protection requirements
  • US: State Fair Claims Practices Acts, NAIC AI Evaluation Tool examination, class action exposure for systematic denial patterns
  • Brazil: LGPD right to explanation for automated decisions, SUSEP consumer protection requirements

What EthiCompass evaluates

EthiCompass monitors claims AI for denial rate disparities, tests explanation adequacy for every adverse decision, and documents fairness metrics that withstand both regulatory examination and discovery in litigation.

Fraud Detection & Investigation

AI fraud scoring creates a dual compliance challenge: models must be robust enough to detect sophisticated fraud while ensuring that detection patterns don't systematically target specific demographic groups.

Compliance requirements

  • EU: DORA operational resilience requirements, AI Act robustness standards, GDPR profiling safeguards
  • US: State fraud investigation requirements, fair investigation obligations, FCRA adverse action notices
  • Brazil: LGPD profiling protections, ANPD enforcement of automated decision transparency

What EthiCompass evaluates

EthiCompass tests fraud AI for adversarial resilience and demographic fairness simultaneously — ensuring models detect fraud without creating discriminatory investigation patterns.

Policyholder Communications

AI-generated renewal notices, adverse action communications, claims status updates, and coverage explanations must meet jurisdiction-specific language, disclosure, and transparency requirements.

Compliance requirements

  • EU: IDD insurance distribution transparency, AI Act Art 13 disclosure requirements, GDPR right to explanation
  • US: FCRA adverse action notice compliance, state-specific disclosure requirements, plain language mandates
  • Brazil: LGPD transparency rights, consumer protection code communication standards

What EthiCompass evaluates

EthiCompass evaluates AI-generated communications for regulatory compliance, harmful language, factual accuracy, and PII exposure — across every jurisdiction where your policyholders reside.

Pricing & Product Design

AI-driven pricing optimization, product recommendation, and coverage design create significant fairness and transparency obligations — particularly when pricing correlates with protected characteristics through proxy variables.

Compliance requirements

  • EU: AI Act Annex III high-risk classification for insurance pricing, EIOPA pricing fairness expectations, Solvency II actuarial standards
  • US: State rating law compliance, unfair discrimination prohibitions, Colorado AI Act pricing transparency requirements
  • Brazil: SUSEP pricing regulation, LGPD consent requirements for health insurance pricing data

What EthiCompass evaluates

EthiCompass tests pricing AI for proxy discrimination, evaluates rating factor transparency, and produces the actuarial fairness evidence that Solvency II and state regulators require.

Peer-Reviewed Methodology

Built on Research. Validated by Publication.
Defensible Under Examination.

Insurance examiners don't accept vendor claims at face value. When a state insurance department or EIOPA supervisory team asks how your AI compliance framework was validated, they expect a defensible methodology — not a marketing deck. EthiCompass's 7-dimension framework was developed by PhD researchers in AI ethics, bias detection, and regulatory compliance, and validated through peer-reviewed publications.

Each dimension is operationalized through 39+ quantitative metrics designed for actuarial-grade precision. Fairness testing uses Demographic Parity Ratio targets of 0.8–1.25, aligned with the statistical thresholds that insurance regulators and courts recognize. Every metric is documented, reproducible, and auditable — because in insurance, "we tested for bias" without methodology is not a defense.
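As an illustration of the Demographic Parity Ratio band cited above — a sketch of the standard metric, not EthiCompass's metric code — the check reduces to comparing favorable-outcome rates between two groups. The approval flags below are hypothetical.

```python
def demographic_parity_ratio(outcomes_a, outcomes_b):
    """Ratio of favorable-outcome rates (e.g. approvals) between two groups.
    A value inside [0.8, 1.25] satisfies the four-fifths-style threshold;
    values outside it signal potential disparate impact."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return rate_a / rate_b

# Hypothetical approval flags (1 = approved) for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # 70% approved
group_b = [1, 1, 1, 1, 0, 1, 1, 1, 0, 1]  # 80% approved

dpr = demographic_parity_ratio(group_a, group_b)
print(f"DPR = {dpr:.3f}")                      # DPR = 0.875
print("within 0.8-1.25 band:", 0.8 <= dpr <= 1.25)  # True
```

The 0.8 lower bound mirrors the four-fifths rule used in US disparate-impact analysis; 1.25 is its reciprocal, so the band is symmetric regardless of which group is placed in the numerator.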

This matters because the NAIC AI Evaluation Tool pilot, the EU AI Act conformity assessments, and class action discovery all demand one thing: show your work. EthiCompass produces evidence that withstands the scrutiny of examiners, supervisors, and expert witnesses.

Explore the 7-Dimension Framework →

Two Ways to Start.
One Standard of Evidence.

OneCheck

Your AI Compliance Baseline

Know where you stand before the NAIC pilot examines you.

  • Every AI system scored across all 7 dimensions
  • EU AI Act + EIOPA + NAIC gap analysis in a single report
  • FRIA-ready evidence for high-risk insurance AI systems
  • Prioritized remediation roadmap ranked by regulatory risk
  • Examination-ready documentation for state and EU regulators

Best for: Insurers that need to understand their compliance posture before the 12-state NAIC pilot and the August 2026 EU AI Act deadline.

Enterprise

Full Platform

Continuous AI Compliance

Ongoing monitoring aligned with Solvency II record-keeping requirements.

Everything in OneCheck, plus:

  • Continuous monitoring across all AI systems
  • Real-time alerting when compliance scores drift
  • Multi-regulation mapping: AI Act + EIOPA + NAIC + LGPD
  • Immutable audit trail with 7+ year retention
  • Board-ready dashboards for risk and compliance committees
  • Solvency II ORSA integration for AI model risk

Best for: Insurers deploying AI at scale that need continuous compliance assurance across EU, US, and Latin American regulations.

One Platform. Every Framework.
Examiner-Ready Evidence.

Insurance operates under dozens of overlapping regulatory regimes. EthiCompass maps your AI compliance across all of them simultaneously.

Framework | Jurisdiction | Coverage
EU AI Act (Annex III) | European Union | Full high-risk conformity assessment, FRIA support, Articles 9–15 mapping
EIOPA AI Opinion | European Union | Two-step impact assessment, board-level governance documentation
Solvency II | European Union | AI model risk integration into ORSA, actuarial function validation
DORA | European Union | Operational resilience testing, ICT risk management for AI systems
IDD | European Union | Insurance distribution transparency, policyholder communication compliance
NAIC Model Bulletin | United States (23 states) | Full alignment with AI governance expectations and examination readiness
NAIC AI Evaluation Tool | United States (12-state pilot) | Examination-ready evidence packs, standardized response documentation
Colorado AI Act | Colorado | Disparate impact testing, algorithmic impact assessment support
NY DFS Circular | New York | AI governance framework alignment, fairness testing documentation
California SB 1120 | California | Transparency and disclosure compliance for AI-driven decisions
FCRA | United States (federal) | Adverse action notice compliance, credit-related insurance scoring
State Fair Claims Practices | United States (all states) | Claims fairness evidence, denial rate parity documentation
Brazil Bill 2338 | Brazil | High-risk AI classification compliance, impact assessment support
LGPD | Brazil | Sensitive data processing compliance, automated decision transparency

The Cost of Non-Compliance.
The Value of Proof.

Risk | Exposure | With EthiCompass
EU AI Act violation | €35M or 7% of global turnover | Full conformity assessment evidence
Solvency II AI model failure | Capital add-on + supervisory action | ORSA-integrated AI risk documentation
DORA ICT non-compliance | €10M or 5% of turnover | Operational resilience evidence for AI systems
Class action settlement | $10M–$100M+ per action | Documented fairness testing as litigation defense
State market conduct examination | $1M–$50M per state + consent order | Examination-ready evidence packs
Texas AI discrimination finding | License suspension + penalties | Continuous bias monitoring with audit trail
LGPD violation | Up to 2% of Brazilian revenue | Sensitive data compliance documentation

Fairness Is Not a Defense Strategy.
It Is an Obligation.

Insurance AI compliance is not a US state problem, a European problem, or a Latin American problem. It is a global obligation — enforced by regulators, examined by supervisors, and tested by plaintiffs' attorneys simultaneously. Every underwriting algorithm, every claims model, every pricing system creates a regulatory event that demands defensible evidence.

The NAIC multistate pilot is running now. The EU AI Act deadline is August 2026. Class action firms are hiring data scientists. The window for voluntary compliance — the window where getting ahead of enforcement is an advantage rather than a minimum requirement — is closing.