EthiCompass

AI Compliance & Audit Platform

Prove Your AI
Is Compliant.
Before the Regulator Asks.

The only AI compliance platform built on peer-reviewed research. 7 scientifically validated dimensions. Immutable audit trails. Defensible evidence for the EU AI Act and beyond.

Explore Enterprise →
7 Dimensions · Peer-Reviewed · EU AI Act Ready
Confidential
Doc Ref: ETHIC-RPT-2026-00147
Version: 1.0 — Final
Eval ID: eval_mock_eurobank
Date: March 15, 2026

EthiCompass

AI Ethics & Compliance
Evaluation Report

EuroBank Virtual Assistant v3.2

Generative AI — Financial Services

HIGH RISK — EU AI Act Annex III

Risk

HIGH

Intake

7.6

Score

7.8

Client

EuroBank AG

Frankfurt, Germany

Evaluator

EthiCompass

7-Dimension Framework

Sample Report — Demonstration Purposes

EthiCompass · CONFIDENTIAL

Dimensional Scorecard

Discrimination & Fairness: 7.2 (COND)
Toxicity & Harmful Language: 9.4 (PASS)
Explainability & Transparency: 6.1 (ACTION)
Privacy & Data Protection: 8.5 (PASS)
Factuality & Accuracy: 7.8 (COND)
Robustness & Resilience: 8.1 (COND)
Regulatory Compliance: 7.5 (COND)
Composite Score: 7.8/10 (CONDITIONAL)
Page 6 of 17 · eval_mock_eurobank_2026Q1
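As a worked check, the 7.8 composite on the scorecard above is reproduced by a plain arithmetic mean of the seven dimension scores. Note this is a sketch under an assumption: the platform's actual weighting scheme is not stated on this page, and an unweighted mean simply happens to match the published figure.

```python
# Dimension scores copied from the sample scorecard above.
scores = {
    "Discrimination & Fairness": 7.2,
    "Toxicity & Harmful Language": 9.4,
    "Explainability & Transparency": 6.1,
    "Privacy & Data Protection": 8.5,
    "Factuality & Accuracy": 7.8,
    "Robustness & Resilience": 8.1,
    "Regulatory Compliance": 7.5,
}

# Unweighted mean -- an assumption, not the documented formula.
composite = round(sum(scores.values()) / len(scores), 1)
print(composite)  # 7.8
```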
EthiCompass · CONFIDENTIAL

Critical Findings

P0 · 7-Day Deadline

Incorrect Deposit Insurance Information

Chatbot states a €200,000 limit; the actual EU limit is €100,000 per depositor.

P0 · 14-Day Deadline

Missing MiFID II Suitability Assessment

23% of recommendation conversations skip the required risk-profiling step.

Key Recommendations

Priority · Action · Reference
P0 · Fix deposit insurance to €100K · Dir. 2014/49
P0 · Add MiFID II suitability gate · MiFID II Art. 25
P1 · Add AI disclosure to responses · AI Act Art. 52
P1 · Implement explanation module · AI Act Art. 13
P1 · Add confidence indicators · AI Act Art. 14
Page 7 of 17 · eval_mock_eurobank_2026Q1
EthiCompass · CONFIDENTIAL

Risk Classification — ETHI-202

MINIMAL · LIMITED · HIGH · UNACCEPTABLE

11 / 15 points — HIGH RISK

Factor · Pts / Max
Vulnerable Groups Affected · 3 / 3
Sector in EU AI Act Annex III · 3 / 3
Decision Type · 1 / 3
Reversibility · 1 / 3
Population Scale (2.3M) · 3 / 3
TOTAL · 11 / 15
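The ETHI-202 rubric above is a simple additive score: each factor contributes points, and the total places the system in a risk band. A minimal sketch follows; the band cut-offs are illustrative assumptions, since the report only shows that 11/15 lands in the HIGH band.

```python
# Factor points from the ETHI-202 table above.
factors = {
    "Vulnerable Groups Affected": 3,
    "Sector in EU AI Act Annex III": 3,
    "Decision Type": 1,
    "Reversibility": 1,
    "Population Scale (2.3M)": 3,
}
total = sum(factors.values())

def tier(points: int) -> str:
    """Map a point total to a risk band. Cut-offs are assumptions,
    not the documented ETHI-202 thresholds."""
    if points >= 13:
        return "UNACCEPTABLE"
    if points >= 9:
        return "HIGH"
    if points >= 5:
        return "LIMITED"
    return "MINIMAL"

print(total, tier(total))  # 11 HIGH
```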

Regulatory Implications

Conformity assessment (Art. 43)
EU AI database registration (Art. 49)
Fundamental rights assessment (Art. 27)
Quality management system (Art. 17)
Post-market monitoring (Art. 72)
Incident reporting (Art. 73)
Page 4 of 17 · eval_mock_eurobank_2026Q1
Explore the Full 17-Page Report

Your AI Systems Are a Regulatory Liability.
Most Organizations Just Don’t Know It Yet.

Every AI system your organization deploys — from customer-facing chatbots to internal decision engines — is now subject to regulatory scrutiny.

The EU AI Act is in full enforcement. Fines reach up to 7% of global revenue. And regulators aren’t asking whether you use AI responsibly. They’re asking you to prove it.

Manual review doesn’t scale. Generic compliance tools weren’t built for AI. And point-in-time assessments miss the drift that turns a compliant system into a liability overnight.

$2.3M

Average cost of a single AI compliance incident

7%

Maximum EU AI Act fine as percentage of global revenue

73%

Of enterprises deploying AI have no formal governance framework

3 weeks → seconds

Time to detect compliance issues: manual vs. EthiCompass

Built on Research, Not Marketing Claims

The Only AI Compliance Methodology Validated by Published Science.

Most AI governance platforms ask you to trust their proprietary engine. EthiCompass is different.

Our 7-dimension evaluation framework was developed by PhD researchers in AI ethics, bias detection, and regulatory compliance. It is validated through peer-reviewed publications — not vendor whitepapers.

When a regulator asks how you evaluate AI compliance, you don’t point to a software vendor’s marketing page. You point to published research.

Our team includes researchers with doctoral expertise in AI/ML ethics, and we maintain ongoing collaboration with academic institutions for continuous methodology validation.

Explore the 7-Dimension Framework →

PEER-REVIEWED PUBLICATIONS

Research published in AI bias, compliance frameworks, and ethical evaluation methodology

PhD RESEARCH TEAM

In-house researchers with doctoral expertise in AI/ML and ethics

ACADEMIC COLLABORATION

Ongoing partnership with universities and research centers for methodology validation

TRANSPARENT METHODOLOGY

Every scoring criterion is documented, versioned, and publicly auditable

The Framework

7 Dimensions. Complete Coverage.
Every AI System. Every Risk.

Each dimension is independently scored, fully traceable, and mapped to specific EU AI Act requirements. No black boxes. No aggregate scores that hide what matters.

01

DISCRIMINATION & FAIRNESS

Detects bias across protected groups using statistical parity testing and demographic parity analysis

Art. 10

02

TOXICITY & HARMFUL LANGUAGE

Identifies harmful language and unsafe content before it reaches your audience. Severity-classified from low to critical

Art. 9

03

EXPLAINABILITY & TRANSPARENCY

Measures whether AI decisions can be understood and explained to the people they affect and the regulators who oversee them

Art. 13

04

PRIVACY & DATA PROTECTION

Monitors PII exposure, data minimization, and cross-regulation data governance requirements

Art. 10, GDPR

05

FACTUALITY & ACCURACY

Verifies AI outputs against source material to detect hallucinations, fabrication, and unsupported claims

Art. 15

06

ROBUSTNESS & RESILIENCE

Tests resistance to adversarial attacks, prompt injection, and jailbreak attempts that could compromise system integrity

Art. 15

07

REGULATORY COMPLIANCE

Maps every AI system against jurisdiction-specific requirements including EU AI Act, GDPR, and sectoral regulation. The foundation all other dimensions build on

Art. 9–15

Every flag is traceable. Every score is transparent. Every dimension maps to a specific regulatory requirement. No black boxes. No opinions.
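The dimension-to-article mapping listed above can be expressed as a simple lookup table, which is how a compliance team might wire it into its own tooling. The dimension names and article references below are copied from the framework list; the dictionary structure itself is just an illustration.

```python
# The seven EthiCompass dimensions and their EU AI Act mappings,
# as listed in the framework section above.
DIMENSIONS = {
    "Discrimination & Fairness": "Art. 10",
    "Toxicity & Harmful Language": "Art. 9",
    "Explainability & Transparency": "Art. 13",
    "Privacy & Data Protection": "Art. 10, GDPR",
    "Factuality & Accuracy": "Art. 15",
    "Robustness & Resilience": "Art. 15",
    "Regulatory Compliance": "Art. 9-15",
}

for name, articles in DIMENSIONS.items():
    print(f"{name} -> {articles}")
```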

Two Ways to Start.
One Standard of Rigor.

OneCheck

Your AI Compliance Baseline

A comprehensive audit of your AI systems delivered in 3 weeks.

What you get

  • Executive-grade compliance report scored across all 7 dimensions
  • Regulatory gap analysis mapped to EU AI Act requirements
  • Prioritized remediation roadmap with risk-ranked action items
  • Immutable audit documentation you can present to regulators

Best for

Organizations that need to understand their AI risk posture before committing to a platform.

Enterprise

Full Platform

Continuous AI Compliance

The full EthiCompass platform for ongoing monitoring, scoring, and audit-ready documentation.

Everything in OneCheck, plus

  • Continuous real-time monitoring across all AI systems
  • Real-time alerting when compliance scores drift
  • Immutable audit trail with 7+ year retention, digitally signed
  • Customizable policy layers that tighten standards (never weaken)
  • Board-ready dashboards and regulator-ready documentation
  • API integration with your existing AI infrastructure
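The Enterprise API itself is not documented on this page. Purely as an illustration of what "API integration with your existing AI infrastructure" could look like, the sketch below builds a request body for submitting one model output for evaluation. Every field name here is a hypothetical assumption, not the actual EthiCompass API.

```python
import json

# Hypothetical evaluation request body. The field names
# (system_id, output_text, context, dimensions) are assumptions
# for illustration only, not a documented schema.
payload = {
    "system_id": "eurobank-assistant-v3.2",
    "output_text": "Your deposits are protected up to EUR 100,000.",
    "context": {"channel": "chat", "jurisdiction": "EU"},
    "dimensions": ["factuality", "regulatory_compliance"],
}
print(json.dumps(payload, indent=2))
```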

Best for

Organizations deploying AI at scale that need continuous compliance assurance.

Proven in Production.
Not Just in Pitch Decks.

75%

Reduction in detected compliance violations

100%

Audit trail coverage for all AI decisions

Seconds

Time to detect compliance drift

7+ Years

Immutable audit trail retention

“Deployed with a Fortune 500 financial services organization managing 100+ AI systems in a regulated environment. $265K first-year engagement. Live in production.”

SOC 2 Controls · EU AI Act Aligned · GDPR Compliant · Encrypted End-to-End

See the Output

A 17-Page Compliance Report.
For Every AI System You Deploy.

Explore the Full 17-Page Report

Built for the People Who Own AI Risk.

FOR THE CRO

Quantified risk posture across every AI system. Board-ready reporting. Defensible evidence for regulators.

FOR THE DPO

EU AI Act and GDPR mapped in a single compliance view. Cross-regulation audit-ready evidence your regulators can trust.

FOR THE CISO

API-first integration. SOC 2 controls. Zero latency impact on production. Scales to millions of evaluations.

FOR THE BOARD

Governance that protects reputation and reduces regulatory exposure. One AI incident costs $2.3M. Prevention is cheaper.

Your AI Systems Are Already Deployed.
Your Compliance Evidence Should Be Too.

Start with a OneCheck audit to understand your risk posture, or talk to our team about the Enterprise platform.