EthiCompass

AI Compliance & Audit for Law Firms

700+ Courts Have Sanctioned AI Hallucinations.
Is Your Firm's AI Governed?

69% of legal professionals now use AI. Only 9% of firms have a written, enforced governance policy. The gap is where sanctions, malpractice claims, and bar discipline live. EthiCompass evaluates every legal AI system across 7 scientifically validated dimensions — producing the immutable evidence your ethics committee, your insurers, and your regulators require.

Explore Enterprise Platform →

Every Jurisdiction. Every Bar Association.
One Obligation: Govern Your AI.

Law firms face a unique dual obligation — govern your own AI use AND advise clients on theirs. Regulatory pressure is coming from bar associations, courts, data protection authorities, and AI-specific legislation simultaneously.

European Union

EU AI Act + GDPR + Bar Guidance

AI in administration of justice classified as high-risk under Annex III Section 8. GDPR Data Protection Impact Assessments required for legal AI processing client data. Legal professional privilege must be preserved in AI systems. Bar associations across member states issuing AI guidance with binding effect.

High-risk deadline: 2 August 2026. Maximum penalty: €35M or 7% of global turnover.

United States

ABA Opinion 512 + State Bars + Court Orders

ABA Formal Opinion 512 confirms all Model Rules apply to AI use. 700+ hallucination cases documented across federal and state courts. Courts imposing standing orders requiring AI disclosure. 75% of lawyers use AI, but only 25% have received any training. Malpractice carriers increasingly conditioning coverage on AI governance policies.

Sanctions: Up to $10,000 fines + 90-day suspensions.

Latin America

Brazil OAB + CNJ Rule 615 + Bill 2338

OAB Recommendation #001/2024 establishes comprehensive AI guidelines for Brazilian lawyers. CNJ Rule 615/2025 regulates AI use across the Brazilian judiciary. Bill 2338 classifies AI in the administration of justice as high-risk, requiring conformity assessments, transparency, and human oversight.

Bill 2338 penalty: Up to R$50M or 2% of Brazilian revenue.

From Isolated Incidents
to Systemic Risk

AI hallucination in legal practice is no longer rare. It is the fastest-growing category of professional misconduct.

700+ court cases worldwide

128+ lawyers implicated

2–3/day new cases, accelerating

$10,000 highest state court fine

THE PRECEDENT

Mata v. Avianca — Schwartz (2023–2024)

Attorney cited 6 fabricated cases from ChatGPT. Court sanctioned attorney and firm. 'I relied on AI' is not a defence.

Dimension: Factuality & Accuracy

THE ESCALATION

Mostafavi — $10,000 Fine (California, 2025)

21 of 23 cited quotes fabricated by AI. Most costly AI penalty by California state court.

Dimension: Factuality & Accuracy

THE NEW FRONTIER

Failure to Detect Opponent's AI Citations (2025)

Courts now sanctioning lawyers for failing to identify fake citations in opposing briefs. Verification becoming a duty owed by all parties.

Dimension: Regulatory Compliance

BAR DISCIPLINE

Colorado — 90-Day Suspension (2025)

Denver attorney suspended after denying AI use in filings with hallucinated citations. Lying about AI use is a separate violation.

Dimension: Explainability & Transparency

Every Model Rule Applies.
ABA Formal Opinion 512 Confirmed It.

In July 2024, the ABA issued its first formal opinion on generative AI. It confirmed that the existing Model Rules fully govern AI use. The question is whether your firm can demonstrate compliance.

Rule 1.1 — Competence

Lawyers must understand AI limitations before using it. EthiCompass Factuality & Accuracy dimension quantifies hallucination rates, citation validity, and overruled precedent detection — producing the evidence that demonstrates technological competence.

Rule 1.6 — Confidentiality

Client data entered into AI systems may be exposed through training, logging, or third-party processing. EthiCompass Privacy & Data Protection dimension evaluates data leakage vectors, privilege preservation, and matter isolation across every AI tool.

Rule 3.3 — Candor to the Tribunal

Every AI-generated citation must be verified before submission. EthiCompass Factuality & Accuracy dimension identifies hallucinated cases, fabricated quotes, and overruled precedent — before they reach a court.

Rule 5.1 / 5.3 — Supervision

Partners and supervising lawyers are responsible for AI outputs produced by associates and staff. EthiCompass provides the complete audit trail that demonstrates supervisory oversight of every AI-assisted work product.

Rule 1.5 — Fees

Billing for AI-assisted work raises questions about reasonable fees. EthiCompass Explainability & Transparency dimension documents AI contribution to work product — supporting defensible billing practices.

Rule 1.4 — Communication

Clients must be informed about AI use in their matters when material. EthiCompass produces client-ready disclosure documentation and AI use transparency reports.

Additional alignment for Brazil (OAB Recommendation #001/2024)

OAB Rec. #001/2024, Art. 3 — Transparency

Lawyers must inform clients and courts about AI use in legal work. EthiCompass generates disclosure-ready reports.

OAB Rec. #001/2024, Art. 5 — Data Protection

Prohibits inputting confidential client data into AI systems without safeguards. EthiCompass evaluates data leakage and privilege preservation.

CNJ Rule 615/2025 — Judiciary AI Governance

Regulates AI use in Brazilian courts — requiring human oversight and transparency. EthiCompass maps compliance for litigation AI tools.

Bill 2338 (Legal AI) — High-Risk Classification

Classifies AI in the administration of justice as high-risk. EthiCompass produces conformity assessment evidence aligned with Bill 2338 requirements.

7 Dimensions. Scientifically Validated.
Built for Legal AI.

Our evaluation framework was developed by PhD researchers in AI ethics and regulatory compliance. Each dimension addresses a specific failure mode in legal AI — from hallucinated citations to confidentiality breaches to unexplainable reasoning.

01

Discrimination & Fairness

Detects bias in litigation outcome prediction, client intake screening, sentencing recommendation systems, and legal aid allocation AI — where algorithmic bias can deny access to justice.

02

Toxicity & Harmful Language

Flags unprofessional, biased, or inflammatory language in AI-generated client communications, filings, and legal memoranda — where tone and precision carry professional responsibility implications.

03

Explainability & Transparency

Ensures every AI-driven legal analysis includes traceable reasoning — which sources were consulted, how conclusions were derived, and what confidence levels apply. Essential for court disclosure obligations.

04

Privacy & Data Protection

Evaluates privilege preservation, matter isolation, data leakage risks, and client confidentiality across every AI system — the dimension that maps directly to Rule 1.6 and attorney-client privilege.

05

Factuality & Accuracy

The critical dimension for legal AI. Identifies hallucinated citations, fabricated case law, overruled precedent, incorrect statutory references, and misquoted holdings — the failures that lead to sanctions.

06

Robustness & Resilience

Tests AI stability across jurisdictions, legal systems, languages, and case types — ensuring consistent performance whether processing common law, civil law, or mixed legal systems.

07

Regulatory Compliance

Maps AI behaviour to professional responsibility rules, bar association guidance, court standing orders, and AI-specific legislation across every jurisdiction where the firm practises.

Where Legal AI
Risk Lives

Law firms deploy AI across every practice area. Each creates distinct professional responsibility obligations.

Legal Research & Brief Drafting

The epicentre of the hallucination crisis. AI-assisted legal research and brief drafting is where 700+ court cases have originated — fabricated citations, invented holdings, and overruled precedent presented as good law. Every major AI hallucination sanction traces back to this use case.

Key risk areas

  • Hallucinated case citations
  • Fabricated judicial quotes
  • Overruled precedent cited as current
  • Incorrect statutory references
  • Jurisdictional errors

Contract Review & Due Diligence

AI-driven contract analysis and due diligence creates risk when systems miss critical provisions, fabricate clause interpretations, or fail to flag non-standard terms. The stakes compound in M&A transactions where overlooked liabilities can cost millions.

Key risk areas

  • Missed non-standard clauses
  • Fabricated clause interpretations
  • Overlooked regulatory requirements
  • Incorrect jurisdiction-specific provisions

Client Communications & Advisory

AI-generated client advice carries the full weight of professional responsibility. Errors in AI-assisted communications create malpractice exposure, confidentiality risks when client data flows through AI systems, and Rule 1.4 compliance obligations.

Key risk areas

  • Incorrect legal advice
  • Confidentiality breaches via AI processing
  • Privilege waiver through data exposure
  • Failure to disclose AI involvement

E-Discovery & Document Review

AI-powered document review in litigation creates unique risks — missed privileged documents, incorrect relevance classifications, and defensibility challenges when opposing counsel questions the methodology.

Key risk areas

  • Missed privileged documents
  • Incorrect relevance classification
  • Defensibility of AI methodology
  • Proportionality challenges

Litigation Prediction & Case Strategy

AI systems that predict case outcomes, recommend settlement values, or suggest litigation strategies must be evaluated for bias across case types, jurisdictions, and party demographics — where algorithmic bias can systematically disadvantage certain clients.

Key risk areas

  • Outcome prediction bias
  • Settlement valuation errors
  • Demographic bias in case assessment
  • Overreliance on historical patterns

Peer-Reviewed Methodology

Built on Research. Validated by Publication.
Defensible in Any Forum.

You — more than any other profession — understand the difference between a claim and evidence. When a court, a bar disciplinary panel, or a malpractice insurer asks how your AI governance framework was validated, they expect a defensible methodology. Not a vendor's marketing deck. Not a checklist downloaded from a website. Evidence.

EthiCompass's 7-dimension framework was developed by PhD researchers in AI ethics, bias detection, and regulatory compliance, and validated through peer-reviewed publications. Each dimension is operationalised through 39+ quantitative metrics designed to withstand the scrutiny of regulators, ethics committees, and — if it ever comes to it — opposing counsel.

This matters because ABA Opinion 512, EU AI Act conformity assessments, court standing orders, bar disciplinary proceedings, and malpractice litigation all demand one thing: show your work. EthiCompass produces evidence that survives the scrutiny you apply to everything else.

Explore the 7-Dimension Framework →

Two Ways to Start.
One Standard of Evidence.

OneCheck

Your Firm's AI Compliance Baseline

A partner-grade compliance report that gives managing partners, general counsel, and ethics committee chairs the evidence they need — before the next court standing order or bar inquiry arrives.

  • Every legal AI system scored across all 7 dimensions
  • ABA Model Rules + EU AI Act + OAB gap analysis in a single report
  • Citation accuracy and hallucination rate quantification
  • Privilege preservation and data leakage assessment
  • Ethics committee–ready documentation for firm governance review

Best for: Managing partners, general counsel, and ethics committee chairs who need to understand their firm's AI compliance posture before the August 2026 EU AI Act deadline.

Enterprise

Full Platform

Continuous Legal AI Governance

Ongoing monitoring across every legal AI system — built for Am Law 200 firms, global practices, and corporate legal departments that need immutable evidence with 7+ year retention.

Everything in OneCheck, plus:

  • Continuous monitoring across all legal AI systems
  • Real-time alerting when hallucination or compliance scores drift
  • Multi-regulation mapping: EU AI Act + ABA Model Rules + OAB + CNJ
  • Immutable audit trail with 7+ year retention for litigation defence
  • Board-ready dashboards for ethics committees and firm leadership
  • Client-facing AI governance reports for advisory credibility

Best for: Am Law 200 firms, global practices, and corporate legal departments deploying AI at scale across multiple practice areas and jurisdictions.

The Cost of Ungoverned AI.
The Value of Proof.

Risk: Court sanctions (AI hallucination)
Exposure: $500–$10,000+
With EthiCompass: Citation verification and factuality scoring before filing

Risk: Bar discipline
Exposure: Censure to 90-day suspension
With EthiCompass: Professional responsibility compliance mapping and audit trail

Risk: Malpractice claims (AI-related)
Exposure: $500K–$10M+
With EthiCompass: Documented AI governance as standard-of-care evidence

Risk: EU AI Act — administration of justice
Exposure: €35M or 7% of global turnover
With EthiCompass: High-risk conformity assessment with FRIA documentation

Risk: Client loss (governance gap)
Exposure: Revenue erosion + reputation
With EthiCompass: Client-facing AI governance certification and transparency reports

Risk: Malpractice insurance (coverage conditions)
Exposure: Premium increases or denial
With EthiCompass: Insurer-ready AI risk management documentation

Risk: Brazil Bill 2338 — legal AI
Exposure: Up to R$50M or 2% of Brazilian revenue
With EthiCompass: Bill 2338 conformity assessment with OAB alignment evidence

The Governance Gap

69% use AI
9% have a governance policy
700+ court cases
2–3/day new, accelerating

Your Clients Expect You to Govern AI.
Your Regulators Will Require It.
Your Insurers Already Do.

Law firms face a dual obligation no other industry shares: govern your own AI use AND advise clients on their AI compliance. A firm that cannot demonstrate its own AI governance has no credibility recommending compliance to others.

The firms that act now — before the EU AI Act high-risk deadline, before malpractice carriers mandate AI governance, before the next sanctions ruling — will define the standard. The firms that wait will be measured against it.