Lawyers are getting sanctioned for AI-generated citations. Not for using AI, but for failing to verify what it produced.
One pattern keeps showing up: using AI to check AI and assuming that counts as verification. Courts are rejecting it entirely. Independent means outside the AI system. That definition carries legal weight, and it is now the standard across multiple circuits, the ABA, and over thirty-five state bar associations.
Why the AI-to-AI Trap Fails
The logic feels reasonable. If an AI research tool generates a citation, and a second AI tool reviews it and flags nothing wrong, a check of some kind took place. It was not verification.
If AI creates the citation and AI checks it, the error cycle never breaks. AI tools do not verify against authoritative sources the way a trained attorney reading an actual case does. They generate probabilistic text. When one AI reviews another AI’s output, it is pattern-matching against training data, not cross-referencing a live legal database. A plausible-sounding fabricated citation will often pass an AI review because nothing about the format triggers a flag.
One lawyer described telling ChatGPT explicitly that hallucinations were unacceptable, then instructing it to verify its own citations. It generated the same types of hallucinated citations anyway. The instruction does not change the architecture.
ABA Formal Opinion 512, issued in July 2024, confirmed this directly: attorneys using generative AI must not rely on AI outputs without independent verification or review. Independent means outside the AI system. It means primary sources. It means a licensed attorney personally reading the case.
What the Data Shows About Legal AI Tools
Before accepting what any legal AI vendor claims about its hallucination rate, read the independent research.
Researchers at Stanford's RegLab and Institute for Human-Centered AI (HAI) conducted the most rigorous independent evaluation of legal AI research tools available, later published in the Journal of Empirical Legal Studies. The finding: even the best-performing legal-specific AI tools produced incorrect or misgrounded responses on more than 17% of queries. One major platform hallucinated at nearly twice that rate, approximately 33% of the time.
To translate that concretely: at a 17% error rate, the odds that at least one of three responses in a single research session contains an error, or cites a source that does not support the proposition it is attached to, are better than four in ten. At a 33% rate, the expected number of bad responses in those three queries is roughly one.
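For readers who want to check that arithmetic, here is a minimal sketch, assuming each query fails independently at the study's reported per-query rates (an assumption; the study does not report within-session correlation):

```python
# Minimal sketch of the arithmetic above. Assumes each query fails
# independently at the study's reported per-query error rates.

def p_at_least_one_error(error_rate: float, n_queries: int) -> float:
    """Probability that at least one of n independent queries is bad."""
    return 1 - (1 - error_rate) ** n_queries

for rate in (0.17, 0.33):
    print(f"rate={rate:.0%}: "
          f"P(>=1 bad response in 3) = {p_at_least_one_error(rate, 3):.0%}, "
          f"expected bad responses = {3 * rate:.2f}")

# rate=17%: P(>=1 bad response in 3) = 43%, expected bad responses = 0.51
# rate=33%: P(>=1 bad response in 3) = 70%, expected bad responses = 0.99
```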
The study also identified a category of hallucination more dangerous than a completely invented case: a citation that exists but does not say what the AI claims it says. This is harder to catch. The case name looks legitimate, the citation format is correct, and nothing triggers automated flags. The attorney who does not personally read the case will file the brief without knowing the citation misrepresents the law.
This failure mode is present across legal-specific and general-purpose tools alike. It is not caught by AI-based citation checkers, because those tools verify that a case exists, not that its holding is accurately characterized. This is the type of error most likely to survive an AI review and still get you sanctioned.
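To make the gap concrete, here is a toy sketch with an entirely hypothetical case and lookup table (no real citation tool works this way; the point is what each check does and does not ask):

```python
# Toy illustration of existence checks vs. holding checks.
# The case, the lookup table, and both functions are hypothetical.

# What the opinion actually holds, keyed by citation.
REPORTER = {
    "Smith v. Jones, 123 F.3d 456": "affirms dismissal on standing grounds",
}

def exists(citation: str) -> bool:
    """The question automated citation checkers answer: is the case real?"""
    return citation in REPORTER

def supports(citation: str, proposition: str) -> bool:
    """The question Rule 11 asks: does the case say what the brief claims?
    A substring test stands in here for what is really a human reading."""
    return proposition.lower() in REPORTER.get(citation, "").lower()

cite = "Smith v. Jones, 123 F.3d 456"
claim = "punitive damages are available for gross negligence"

print(exists(cite))           # True  -> passes the existence-only check
print(supports(cite, claim))  # False -> misgrounded, and still sanctionable
```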
If your current research workflow creates citation risk you cannot see, there are two ways to look closer.
Start a 3-day free trial and see how NeXa builds source-linked verification into every research output.
Or book a demo to see NeXa in action on a real case.
Three Cases That Define the Standard
Mata v. Avianca (S.D.N.Y. 2023) is the case that started the judicial reckoning. Two attorneys submitted a brief containing six completely fabricated citations generated by ChatGPT. When the court questioned the citations, the attorneys asked ChatGPT to confirm the cases were real, and ChatGPT confirmed they were. The court imposed $5,000 in sanctions and described the attorneys as having abandoned their professional responsibilities. Asking the tool that generated the error to confirm its own output is not verification.
The Sixth Circuit Court of Appeals imposed $15,000 in sanctions on each of two attorneys for a brief containing over two dozen fake citations. The court's language applies to every attorney using AI: no brief, pleading, motion, or any other paper filed in any court should contain any citations, whether provided by generative AI or any other source, that a lawyer has not personally read and verified. The verification standard is not triggered by AI use. It applies to every citation regardless of origin.
Coomer v. Lindell (D. Colo. 2025) is the case that most directly addresses the AI-checking-AI trap. The U.S. District Court for the District of Colorado sanctioned two attorneys $3,000 each for submitting a brief with nearly thirty defective citations. The attorneys argued they had run a citation check after using AI. The court rejected the argument: repeating citation errors across multiple cases demonstrated a practice of using AI to conduct legal research without verifying the outputs. A citation check that does not involve reading the cited source is not a citation check.
The Supervision Problem
Sanctions are not limited to the attorney who generated the AI output. In the Kansas federal case that resulted in $12,000 in collective fines, documented by the JD Journal, four attorneys were fined: $5,000 for the attorney who used AI to generate the citations, $3,000 each for the two attorneys who reviewed and signed the filing without reading the citations, and $1,000 for local counsel, who failed to catch the errors before submission.
ABA Formal Opinion 512 is clear on Model Rule 5.1 supervisory obligations. Signing a document containing citations you have not read is a sanctionable act, not merely an oversight. At solo and small firms, where there is no supervision layer and the drafting attorney is also the filing attorney, every citation in every brief carries the full weight of personal professional responsibility.
The same issue is covered in depth in NexLaw's blog on AI citation errors in legal research and in the AI hallucination sanctions 2026 guide, which gives a court-by-court breakdown of what each circuit is requiring.
What Independent Verification Actually Requires
Every cited case must be opened and read by the attorney of record. Not summarized, not scanned for a relevant quote. The holding, the facts, and the procedural posture must confirm the proposition for which the case is cited.
Every quoted passage must be traced to the original source document. The Stanford study identified a hallucination type where AI attributes a real quote to the wrong case, or fabricates a quote within a real case. The citation passes an existence check but the language was never in the opinion.
Every case must be Shepardized or KeyCited to confirm it has not been overruled, distinguished, or limited. AI tools with knowledge cutoffs will not catch recent developments.
Verification must be completed by a licensed attorney. The California Court of Appeal's opinion in Noland v. Land of the Free, L.P., California's first published opinion on AI-fabricated citations, stated this plainly: attorneys have a non-delegable duty to personally read and verify every authority they cite. That duty cannot be outsourced to AI, to staff, or to a law clerk without attorney review.
For attorneys who want to run through the complete pre-filing checklist, NexLaw’s hallucination risk blog covers the eight steps practicing litigators should run before any AI-assisted brief reaches a court.
What Built-In Verification Looks Like
The distinction courts are drawing is between AI tools that generate citations for attorneys to check, and AI tools where every output is source-linked to the primary document before it reaches the attorney.
When every research answer arrives with a hyperlink to the controlling authority, when every cited case is traceable to the source record, the attorney is not being asked to take a separate verification step. The verification is embedded in the output. The attorney’s job shifts from hunting down whether citations are real to reading the source material that is already in front of them.
This is the architecture NeXa is built around. Every research answer links directly to verified primary US legal databases before it is presented. Every claim is source-linked. The attorney reviewing the output is reading the same thing a judge would read if they pulled the case.
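As a rough illustration, and not NeXa's actual data model, a source-linked output amounts to a structure like the following, where no claim travels without its authority, a link to the primary document, and a pin cite:

```python
# Illustrative sketch of a source-linked research output.
# The schema, field names, and URL are assumptions, not NeXa's real format.

from dataclasses import dataclass

@dataclass
class CitedClaim:
    proposition: str  # the statement the brief will make
    case_name: str    # the authority cited for it
    source_url: str   # direct link to the primary document
    pin_cite: str     # where in the opinion the supporting language sits

claim = CitedClaim(
    proposition="Every cited authority must be personally read and verified.",
    case_name="Mata v. Avianca (S.D.N.Y. 2023)",
    source_url="https://example.com/opinion",  # placeholder link
    pin_cite="slip op. at 3",
)

# The attorney's review starts from source_url: the same document a judge
# would pull, already attached to the claim it is cited for.
```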
For attorneys managing high case volumes, NexLaw’s blog on why personal injury law firms struggle to scale covers the workflow gap in detail. And for attorneys evaluating pre-filing exposure, the pre-filing risk analysis blog walks through what gets missed when volume exceeds review time.
The Standard Courts Are Converging On
Across the Sixth Circuit, Fifth Circuit, Colorado, California, and Kansas, the judicial statements on verification are converging on a single framework. Every citation must be personally read and verified by the attorney of record. Not confirmed by AI. Not checked for existence by a citation tool. Read and verified, meaning the attorney has opened the source, confirmed it says what the brief claims it says, and can certify that under Rule 11 if asked.
The Fifth Circuit stated it plainly in early 2026: if it were ever acceptable to plead ignorance of the risks of using AI without verifying output, it is certainly no longer so.
Protect Your Practice Before a Judge Does It For You
For litigators who want to understand exactly where their AI workflow creates exposure:
Book a Demo and we will walk you through the entire process.
Frequently Asked Questions
Does using a second AI tool to check citations count as verification under Rule 11?
No. Courts including the Colorado federal court and the Sixth Circuit have held that verification requires independent confirmation against primary sources by a licensed attorney. Using one AI to check another does not meet this standard.
What hallucination rate should I expect from legal-specific AI tools?
According to the Stanford RegLab and HAI study, even purpose-built legal AI tools produced incorrect or misgrounded responses on between 17% and 33% of queries tested. Both figures apply to tools specifically marketed for legal research with database-backed architectures, not general-purpose tools.
Can an attorney be sanctioned for citations they did not personally write?
Yes. Courts have sanctioned attorneys who reviewed and signed filings containing AI-generated citations without reading those citations. Model Rule 5.1 supervisory obligations extend this exposure to supervising attorneys as well.
What is the difference between a fake citation and a misgrounded citation?
A fake citation refers to a case that does not exist. A misgrounded citation refers to a case that exists but does not support the legal proposition the AI attached to it. Misgrounded citations are harder to detect through automated checkers that verify existence but do not confirm accurate characterization of holdings. They are equally sanctionable.
What does ABA Formal Opinion 512 require on AI verification?
ABA Formal Opinion 512 requires that attorneys not rely on AI outputs without independent verification or review, grounded in Model Rule 1.1 (competence) and Model Rule 3.3 (candor toward the tribunal). Citations submitted to courts require the highest level of verification.