Published April 1, 2026 | Updated April, 2026

Can You Trust AI for Legal Research? What Lawyers Get Wrong (2026 Guide)

Quick answer:

AI can be trusted for legal research only when used correctly. General-purpose tools like ChatGPT show high hallucination rates on legal queries in multiple studies. Even purpose-built legal AI platforms from major providers have been found to produce incorrect information in a meaningful percentage of queries. The lawyers getting into trouble are not the ones using AI. They are the ones using the wrong AI and skipping verification.

What You Need to Know Before Reading
  • AI can help with legal research but every output must be verified
  • General AI tools carry high hallucination risk for legal citations
  • Purpose-built legal AI is safer but not error-free
  • Verification is mandatory regardless of which tool you use
  • The duty of competence under ABA Formal Opinion 512 applies to AI tools

The Problem Is Not AI. It Is Which AI and How It Is Used.

Since Mata v. Avianca in 2023, the legal profession has been trying to answer one question: can you trust AI for legal research?

Three years later, the answer is still muddled. Some lawyers ban AI entirely. Others use ChatGPT for research without checking citations. Both approaches are wrong, and both are costing lawyers money, sanctions, and in some cases their ability to practice.

Hundreds of documented cases of AI-generated hallucinations in court filings have now been reported across US courts. The Charlotin AI Hallucination Cases Database, which tracks legal decisions where courts found AI-generated hallucinated content, documents the rapid acceleration from a handful of cases in 2023 to hundreds of identified instances tracked globally.

Newer legal AI tools built on retrieval-based architectures are specifically designed to address this risk. The distinction between how those tools work versus how general AI works is the most important thing a practicing attorney can understand right now.

What the Sanctions Data Actually Shows

  • Mata v. Avianca (S.D.N.Y. 2023):

The case that started it all. Two New York attorneys submitted a brief containing six ChatGPT-generated case citations. None of them existed. The court fined the firm $5,000 and ordered the attorneys to personally notify every judge whose name was falsely attached to a fabricated opinion.

  • Lacey v. State Farm Gen. Ins. Co. (C.D. Cal. 2025):

Attorneys from K&L Gates and Ellis George were fined $31,100 for submitting briefs with non-existent or incorrect citations.

  • P.R. Soccer League NFP Corp. v. Federación Puertorriqueña de Futbol (D.P.R. 2025):

More than $50,000 in attorney’s fees was awarded to Paul Weiss after opposing counsel filed motions with made-up content.

  • MyPillow CEO Mike Lindell defamation case (D. Colo. 2025):

Per NPR’s coverage, a federal judge ordered two attorneys to pay $3,000 each after they used AI to prepare a court filing containing more than two dozen errors, including hallucinated cases.

  • Johnson v. Dunn (N.D. Ala. 2025):

The court disqualified defendants’ attorneys from the case and referred the matter to the state bar, noting: “If fines and public embarrassment were effective deterrents, there would not be so many cases to cite.”

For a full review of 2025 sanctions patterns, see Sterne Kessler's AI Hallucinations in Court Filings: A 2025 Review.

Where AI Is Safe to Use, Task by Task

Task | General AI (ChatGPT, Gemini) | Purpose-built legal AI (NeXa)
Structuring a legal argument | Safe as a starting point | Safe with verification
Drafting an email or memo | Safe with review | Safe with review
Finding relevant case law | High risk: hallucinations frequent | Lower risk: retrieves from primary sources
Case citations for filings | Unsafe: do not use | Use only with direct source verification
Summarizing an uploaded document | Moderate risk | Lower risk
Confidential client documents | Unsafe in open systems | Safe in closed HIPAA-compliant systems

The Mistakes Most Lawyers Are Making

Mistake 1: Using ChatGPT or general AI for legal citation research

ChatGPT was not built for legal research. It generates text by predicting what words come next based on patterns in training data. It does not retrieve from legal databases. Studies have shown hallucination rates on legal queries often exceeding 50% for general-purpose AI tools.

Mistake 2: Assuming legal AI tools are hallucination-free

Even purpose-built legal AI tools are not reliable without verification. The Stanford HAI study on legal AI hallucinations found that leading legal research platforms produced incorrect information in a significant percentage of queries.

Mistake 3: Trusting the citation exists without checking what it says

A citation might be real but the source cited may be irrelevant or may actually support the opposite conclusion from what was argued.

Mistake 4: Letting AI generate the final draft without attorney review

Per Legal Cheek’s Mentorship Gap report, 72% of legal professionals identified deep legal reasoning as the biggest skills gap among junior lawyers.

Mistake 5: Not knowing your jurisdiction’s AI disclosure rules

A Colorado attorney received a 90-day suspension after admitting he failed to verify ChatGPT-generated citations. Check your court’s standing orders before filing anything that involved AI research.

What ABA Formal Opinion 512 Actually Requires

ABA Formal Opinion 512, issued July 2024, is the ethical framework every US attorney needs to understand before using AI for legal research.

The opinion covers three points: the duty of competence under Model Rule 1.1 applies to AI tools; the duty of confidentiality under Model Rule 1.6 requires verifying client data is not used to train the AI; and the duty of candor under Model Rule 3.3 requires that citations submitted to a court are accurate.

For practical guidance, see the ABA Law Technology Today guide on using AI wisely in litigation workflows.

The opinion does not ban AI. It makes clear that ignorance of the tool’s limitations is not a defense.

Know Which Kind of AI You Are Using

  • General-purpose AI (ChatGPT, Claude, Gemini, Perplexity)

These tools generate text based on training data. They do not retrieve from live legal databases. They produce plausible-sounding citations that may or may not exist.

  • Purpose-built legal AI with RAG architecture

Tools like NeXa use retrieval-augmented generation, which means the AI retrieves from primary legal databases before generating a response. Every output links to the source document. The hallucination rate is dramatically lower because the AI is finding real cases before generating text about them.
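The retrieval-first control flow can be illustrated with a toy sketch. Everything below is hypothetical: a two-document corpus and naive keyword scoring stand in for a real system's primary-law databases and language model. The point is the order of operations, which is retrieve real sources first, then generate only from what was retrieved, with a citation attached to every claim.

```python
# Toy sketch of retrieval-augmented generation (RAG) for legal research.
# Hypothetical corpus and scoring; real systems use vector search over
# primary-law databases and a large language model for the final draft.

CORPUS = [
    {"cite": "Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)",
     "text": "sanctions for submitting fabricated AI-generated citations"},
    {"cite": "Fed. R. Civ. P. 11",
     "text": "attorney certification that filings are grounded in existing law"},
]

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc["text"].lower().split())), doc)
              for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def answer(query, corpus):
    """Generate a response only from retrieved sources, each with its cite."""
    hits = retrieve(query, corpus)
    if not hits:
        return "No supporting authority found; do not cite."
    lines = [f"- {doc['text']} ({doc['cite']})" for doc in hits]
    return "Supported findings:\n" + "\n".join(lines)

print(answer("sanctions for fabricated citations in filings", CORPUS))
```

Note the failure mode this design avoids: when nothing relevant is retrieved, the sketch refuses to answer rather than inventing an authority, which is exactly the behavior a generation-only model cannot guarantee.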

For litigators doing medical record review alongside legal research, see our guide on how to build a medical chronology for a PI case.

A Five-Step Workflow for Safe AI Legal Research

Step 1: Use AI to identify the research direction, not to generate the final citations

Ask the AI to identify the key legal issues, relevant doctrines, and jurisdiction-specific considerations. The goal is a framework, not citations.

Step 2: Use a legal-specific AI tool connected to primary sources

If you use AI to find cases, use a tool that retrieves from primary legal databases and links every citation to the source document.

Step 3: Verify every citation yourself before it goes into a filing

Pull the actual case. Read the relevant holding. Confirm it says what you think it says. Confirm it is still good law.

Step 4: Understand your court’s AI disclosure requirements

Check whether your jurisdiction requires disclosure of AI use in filings. Ropes & Gray has mapped more than 200 standing orders and local rules on AI disclosure.

Step 5: Keep client data out of open AI systems

Use only closed, HIPAA-compliant legal AI tools with documented zero data retention policies.

See NexLaw's security and compliance page for the full credentials.
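The five steps above can be condensed into a simple pre-filing checklist. This is an illustrative sketch, not a NexLaw feature; the item wording is ours.

```python
# Illustrative pre-filing checklist mirroring the five steps above.
# Hypothetical helper, not part of any NexLaw product.

CHECKLIST = [
    "AI used only to frame issues, not to generate final citations",
    "Citations came from a tool that links to primary sources",
    "Every citation pulled, read, and confirmed as good law",
    "Court's AI disclosure rules and standing orders checked",
    "No client data entered into an open AI system",
]

def unmet_items(completed):
    """Return checklist items not yet done; an empty list means ready to file."""
    done = set(completed)
    return [item for item in CHECKLIST if item not in done]

# Example: with only the first three steps done, two items still block filing.
remaining = unmet_items(CHECKLIST[:3])
assert remaining == CHECKLIST[3:]
```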

The same verification discipline applies to deposition preparation. See our guide on how to prepare for a deposition in a personal injury case using AI.

How NeXa Handles the Hallucination Problem

NeXa is built on retrieval-augmented generation connected to primary US legal databases covering all 50 states and federal circuits. Every research output links directly to the source case, statute, or regulation. The citation exists before the response is generated.

You can verify every citation in one click. The source is right there. You do not have to search separately to confirm it exists.

NeXa also supports deposition preparation, deep research across jurisdictions, and argument building — keeping the full litigation research workflow in one platform.


Frequently Asked Questions


Can AI be used for legal research?

Yes, with the right tools and a verification workflow. Purpose-built legal AI tools connected to primary databases reduce hallucination risk significantly compared to general AI. But all AI outputs require attorney verification before going into court filings.

What is an AI hallucination in legal research?

An AI hallucination occurs when the AI generates a case citation, statute, or legal principle that either does not exist or does not say what the AI claims it says.

What happens if a lawyer submits AI-hallucinated citations to a court?

Courts are imposing escalating sanctions including fines, disqualification from cases, and referral to state bars. The Charlotin database tracks hundreds of such cases globally.

Is ChatGPT safe for legal research?

Not for citation research. ChatGPT generates plausible-looking citations based on text patterns, not from legal databases. Multiple attorneys have been sanctioned for submitting ChatGPT-generated citations that did not exist.

What does ABA Formal Opinion 512 say about AI legal research?

ABA Formal Opinion 512 does not ban AI. It clarifies that existing professional duties apply. The duties of competence, confidentiality, and candor all apply to AI-generated outputs. Attorneys cannot rely on AI-generated citations without independent verification.

What is the difference between ChatGPT and legal AI tools like NeXa?

ChatGPT generates text based on training data patterns and does not retrieve from legal databases. NeXa uses retrieval-augmented generation — it searches primary legal databases before generating a response. Every output links to the primary source.

How do I verify AI-generated legal citations?

Pull the actual case in a primary source database. Read the relevant section. Confirm the case exists, says what the AI claimed, is from the jurisdiction cited, and is still good law.


© 2026 NEXLAW INC.

AI Legal Assistant | All Rights Reserved.

ISO 27001 Certified | GDPR Compliant | HIPAA Compliant | SOC 2 Type II Certified

NexLaw is a SOC 2 Type II compliant platform utilizing AES-256 encryption. Our zero-data retention policy for enterprise users ensures that your work product remains privileged and is never used to train our models.
