Three Morgan & Morgan attorneys submitted a motion in limine in 2024 with nine case citations. Eight of them did not exist. All eight had been generated by an AI tool that produced authoritative-looking citations with correct formatting, plausible case names, and realistic reporter references. None of them appeared in any legal database. The attorneys had reviewed the brief. They had not verified the sources.
That is the distinction courts are now enforcing. Not whether you reviewed your work. Whether you verified it.
ABA Formal Opinion 512, issued in July 2024, confirms that Model Rule 1.1 requires lawyers using AI to exercise independent verification of AI output before relying on it professionally. Model Rule 3.3 requires that every factual and legal assertion submitted to a court be verifiable. As of early 2026, 75 percent of US lawyers are using AI in their practice, but only 25 percent have received formal training on the ethical requirements that govern that use, according to a survey published in NexLaw’s ethical AI litigator guide for 2026. That gap between adoption and compliance is where sanctions happen.
This checklist covers every step of the pre-filing verification process, organized by the type of error most likely to expose you to sanctions, malpractice exposure, or a reply brief that turns your own filing against you.
See how NexLaw reviews a filing before it reaches a judge. Book a live walkthrough. No commitment required.
Why Standard Review Misses AI Errors
Standard review, reading through a document for logical flow and checking citations from memory, was designed for work product written entirely by a human who knew what the sources said. It was not designed for documents where a portion of the reasoning, phrasing, or research was generated by a system that predicts text rather than retrieves verified facts.
AI errors do not look like errors. They look like finished work product. Here is what they actually look like in practice.
- A citation to a real-sounding case with a correct reporter format and a plausible year that does not exist in any database.
- A quote from a real case where one word has been changed, shifting the meaning of the holding in a way that serves the argument but is factually inaccurate.
- A summary of a case that accurately describes the court’s general reasoning but omits the limiting principle that distinguishes it from the facts at hand.
- An argument section that makes a legal conclusion confidently and with apparent logical structure but never identifies the rule that produces the conclusion.
These errors do not trigger spell check. They do not trigger a Bluebook formatter. They survive a read-through because the surrounding text is coherent. They surface in a reply brief, a sanctions motion, or a bar complaint.
A peer-reviewed Stanford HAI study published in the Journal of Empirical Legal Studies found that legal-specific AI platforms hallucinate in 17 to 33 percent of queries. General-purpose AI tools hallucinate case citations in 30 to 45 percent of legal research responses, according to Stanford CodeX benchmarking. Over 700 court cases now involve AI-generated hallucinations or fabricated content, according to legal analytics tracking by LexisNexis and Bloomberg Law.
For a full breakdown of the documented sanction cases and what courts found in each filing, see AI hallucination sanctions 2026.
The Three Categories of Pre-Filing Risk
Not all pre-filing errors carry the same consequence. Understanding which category you are checking for changes how you verify.
Fabrication risk
The case does not exist. The statute was never passed. The quote was never written. Detectable by checking citations against a verified legal database. The Mata v. Avianca attorneys paid $5,000 in fines and had to notify their clients because they did not run this check.
Mischaracterization risk
The case exists and is formatted correctly, but it does not say what the brief claims it says. The holding applies to different facts. A limiting principle has been omitted. The case was later reversed or distinguished. Not detectable by a citation checker. Requires reading the actual decision.
Argument gap risk
The citation is real and accurately characterized, but the overall argument has sections with no supporting authority, factual assertions with no exhibit reference, or legal conclusions with no identified rule. This is the category opposing counsel finds in a reply brief. It is also the category AI-generated work is most prone to creating because language models produce confident prose without grounding in verified authority.
For a detailed explanation of what opposing counsel looks for when reading your filing, see pre-filing risk analysis: what gets used against you before you even know it.
The Pre-Filing Checklist
Mark Every AI-Assisted Section Before Reviewing
- What to check: Which sections of the document involved AI assistance in any form, including AI-drafted text, AI-suggested arguments, AI-provided research incorporated into the document, and AI summaries of cases or statutes.
- How to check: Go through the document and annotate or highlight any section where AI output was used. If you use a tool that logs its outputs, pull the log.
- What happens if missed: You apply the same level of review to every section regardless of risk level. Sections with higher AI involvement need more rigorous verification, and without marking them first you cannot prioritize correctly. This is also the information you need to make accurate AI disclosure to courts with standing orders requiring it.
Verify Every Case Citation Against Its Primary Source
- What to check: That every cited case exists, that the citation format is accurate, that the case actually holds what the brief claims it holds, and that the case remains good law.
- How to check: Run every citation through a verified legal database. If you use NexLaw NeXa, every citation is already linked directly to its source document and you can click through to verify immediately. If you used a general-purpose AI tool, verify every citation independently in Westlaw, Lexis, or CourtListener. Read the actual decision, not the AI's summary of it. Run every case you rely on as controlling authority through Shepard's Citations or KeyCite to confirm its subsequent treatment.
- What happens if missed: The Morgan & Morgan attorneys missed this step. Eight of nine citations in their filing did not exist. The Sixth Circuit in Whiting v. City of Athens sanctioned attorneys who submitted 24 fabricated citations, ordering them to reimburse opposing counsel's fees and pay double costs.
If you quote directly from a case, find the original opinion and verify the quote word for word. AI systems routinely alter quoted language by one or two words in ways that shift meaning and survive casual review.
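A word-for-word check of this kind can be partly mechanized. The sketch below is a minimal illustration, not a substitute for reading the opinion: it compares a quoted passage against the original opinion text and reports any word-level differences. The case language shown is hypothetical.

```python
import difflib

def quote_diff(brief_quote: str, opinion_text: str) -> list[str]:
    """Compare a quoted passage against the original opinion text
    word by word and return a description of each difference."""
    brief_words = brief_quote.split()
    opinion_words = opinion_text.split()
    changes = []
    matcher = difflib.SequenceMatcher(None, opinion_words, brief_words)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            changes.append(
                f"opinion: {' '.join(opinion_words[i1:i2]) or '(nothing)'} "
                f"-> brief: {' '.join(brief_words[j1:j2]) or '(nothing)'}"
            )
    return changes

# Hypothetical example: a single altered word reverses the holding.
original = "the statute does not apply to claims arising before 1998"
quoted = "the statute does not apply to claims arising after 1998"
print(quote_diff(quoted, original))  # → ['opinion: before -> brief: after']
```

A one-word substitution of exactly this kind survives casual review because every surrounding word matches; a mechanical comparison flags it instantly.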
Check Every Factual Assertion Against Its Supporting Exhibit
- What to check: That every factual claim in the document has an exhibit or record citation that supports it, that the exhibit actually establishes the fact claimed, and that the timeline in the document matches the timeline in the exhibits.
- How to check: Read each factual assertion and ask: what is the source for this? If there is an exhibit reference, open the exhibit and confirm it says what the brief claims. Pay particular attention to timeline claims, which are among the most common places where a brief contradicts its own exhibits.
- What happens if missed: Narrative contradictions between a brief and its own exhibits are among the most reliably exploited vulnerabilities at the motion to dismiss stage. A factual claim with no supporting document in the record gives opposing counsel a direct argument that your submission lacks evidentiary foundation.
Identify and Address Every Conclusory Statement
- What to check: Whether each argument section states a rule, sources the rule to a specific case or statute, explicitly applies the rule to your facts, and reaches a conclusion that follows logically from that application.
- How to check: For each argument heading in your filing, identify the sentence that states the legal rule. If you cannot find it, the section is conclusory. If the rule is stated but the application is missing, that is also a gap. AI-generated arguments frequently state conclusions fluently without identifying the rule that produces them.
- What happens if missed: Courts regularly dismiss arguments on conclusory grounds without reaching the merits. An argument that tells the court what to conclude without explaining why under the applicable rule gives opposing counsel a structural objection that has nothing to do with the facts.
For the specific types of argument vulnerabilities most commonly found in AI-assisted briefs, see AI citation errors in legal research.
Run a Full-Document Citation Scan
- What to check: That no citations slipped through the manual review in Step 2.
- How to check: Upload the completed document to CiteCheck AI or JurisCheck. CiteCheck extracts all case citations, cross-references each against CourtListener, and produces a color-coded report in under two minutes. JurisCheck validates citations in Bluebook format and flags formatting inconsistencies that may indicate a corrupted or hallucinated citation.
- What happens if missed: Manual review can miss citations embedded in footnotes, parenthetical explanations, or block quotes. A full-document scan is the safety net for the manual process. These tools check existence and format only. They do not replace Steps 2 and 3.
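As a rough illustration of what a full-document scan does under the hood, the sketch below pulls reporter-style citations out of raw brief text with a regular expression. The reporter list is a small illustrative subset; a real scanner covers the full reporter table and then checks each extracted citation against a database.

```python
import re

# Illustrative reporters only; a production scanner needs the full table.
REPORTERS = r"(?:U\.S\.|S\. Ct\.|F\.2d|F\.3d|F\.4th|F\. Supp\. 2d|F\. Supp\. 3d)"
CITATION = re.compile(rf"\b(\d+)\s+({REPORTERS})\s+(\d+)\b")

def extract_citations(text: str) -> list[str]:
    """Return every reporter-style citation found in the text,
    including citations buried in footnotes or parentheticals."""
    return [" ".join(m.groups()) for m in CITATION.finditer(text)]

brief = (
    "See Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023); "
    "cf. Smith v. Jones, 123 F.3d 456 (hypothetical)."
)
print(extract_citations(brief))  # → ['678 F. Supp. 3d 443', '123 F.3d 456']
```

The point of the extraction pass is completeness: a regex does not get tired at page 78, so every citation, wherever it sits in the document, lands on the verification list.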
For a full comparison of every available citation verification tool and what each one checks, see best AI tools that verify legal citations in 2026.
Verify Jurisdiction-Specific Court Requirements
- What to check: Whether the specific court where you are filing has a standing order addressing AI use, disclosure requirements, or certification requirements for AI-generated content.
- How to check: Check the court's local rules and the judge's specific standing orders on the court's public website. Call the clerk's office if the standing order language is ambiguous.
- What happens if missed: Over 40 federal district courts now have standing orders or local rules addressing AI-generated filings. Filing without complying with a court-specific AI disclosure requirement is itself a procedural violation independent of whether your substantive citations are accurate.
Document Your Verification Process
- What to check: That you have a record of what verification steps were completed, who completed them, which databases were used, and when.
- How to check: Create a brief internal record immediately after completing verification. A dated email to your own file, a note in your case management system, or a checklist signed and dated by the reviewing attorney.
- What happens if missed: If a court challenges a filing and asks how you verified AI-generated content, your documented process is your evidence that you met the competence standard under ABA Formal Opinion 512. Without documentation, you are relying on your own testimony about a process you completed weeks or months ago.
NexLaw flags argument gaps, unsupported assertions, and missing authority tied to specific sections of your document.
See what it finds on a real case before you run it on your own, or book a demo to see NeXa in action.
The Gap Between a Checklist and a System
A checklist describes what to do. A system does it consistently, across every filing, regardless of how many active matters are on your desk or how close the deadline is.
The checklist above is thorough. Completed manually on a complex brief, it adds time to every filing. That time compounds. Across 15 active matters, the cumulative verification load becomes the thing that gets skipped when the deadline is at 5pm and the brief is 80 pages.
That is the point in the workflow where errors enter filings. Not because the attorney is careless. Because the verification process depends on human consistency under time pressure, and that consistency is exactly what breaks down.
NexLaw’s NeXa runs the equivalent of Steps 2, 3, and 4 automatically. Upload any document before filing. NeXa identifies argument gaps, flags factual assertions with no supporting authority, and surfaces narrative inconsistencies, all tied to specific sections of the document so you address them directly rather than re-reading everything from the start.
The difference between NeXa and a standalone citation checker is architectural. NeXa retrieves from verified legal databases and links every citation to its primary source document. It does not generate citations from memory. If a case exists and is relevant, NeXa cites it and links you directly to the decision. If it does not exist, it does not appear. For case timelines and chronology review, ChronoVault builds a verified evidence timeline automatically from your uploaded case files, catching timeline contradictions across exhibits that Step 3 in this checklist requires you to find manually. For trial preparation and final pre-filing review, CasePrep and the Courtroom Assistant extend verification into the courtroom itself.
Manual verification remains necessary for the judgment calls: reading a case to confirm it supports your specific argument, assessing whether a factual claim requires additional evidentiary support, deciding whether an argument section is sufficiently developed. No tool replaces that. What NeXa does is ensure that by the time you reach those judgment calls, the mechanical errors have already been caught.
The ABA Standard for AI-Assisted Work
ABA Formal Opinion 512 is the baseline standard every US litigator using AI must understand. Issued in July 2024, it confirms that existing Model Rules apply fully to AI-assisted legal work.
Model Rule 1.1 (Competence) requires that lawyers understand the limitations of the technologies they use, including hallucination risk, before using them to produce work product. The ABA states that competent use of AI requires “an appropriate degree of independent verification or review” of AI output.
Model Rule 3.3 (Candor Toward the Tribunal) requires that every factual and legal assertion submitted to a court be verifiable. The ABA specifically notes that the willful submission of false or unverified material prepared using AI to a court would represent a clear violation of a lawyer’s duty of candor.
The ABA’s language on oversight is direct: “Lawyers who rely on generative AI for research, drafting, communication, and client intake risk many of the same perils as those who have relied on inexperienced or overconfident nonlawyer assistants.”
As of early 2026, over 35 state bar associations have issued guidance on AI use. California’s bar published a Practical Guide emphasizing that competence requires understanding how large language models work before using them in practice. Florida’s Opinion 24-1 requires disclosure when AI use impacts client billing. Pennsylvania’s Joint Formal Opinion 2024-200 warns that AI use does not replace the duty to verify all case law references independently.
For the full breakdown of state-level AI ethics requirements and how they apply to US litigators, see ethical AI litigator guide 2026 and can lawyers use ChatGPT without getting sanctioned in 2026.
Ready to run the full analysis on your next filing before it goes out?
NeXa verifies every citation against primary sources and flags argument gaps before your document reaches a judge.
No credit card. Full access from day one. Cancel any time.
Frequently Asked Questions
What do courts require for AI-assisted filings in 2026?
Requirements vary by court. Over 40 federal district courts have standing orders addressing AI use in filings. Some require explicit disclosure that AI was used. Others require certification that all AI-generated content was verified by a human before filing. Some require identification of which specific sections were AI-assisted. Check the local rules and the judge's standing orders for every court where you file. ABA Formal Opinion 512 is the governing standard where no specific court rule exists.
Are lawyers liable if AI generates a fake citation in their brief?
Yes. Courts have held that the duty to verify citations is non-delegable regardless of source. The Mata v. Avianca court imposed a $5,000 fine on attorneys who submitted fabricated AI-generated citations. The Sixth Circuit in Whiting v. City of Athens ordered attorneys to reimburse opposing counsel's fees and pay double costs. A Colorado attorney was suspended. Courts have consistently treated the failure to verify AI-generated citations as a violation of Model Rule 3.3.
How do you verify AI-generated citations before filing?
The minimum process has four steps. First, run every citation through a verified legal database to confirm it exists. Second, read the actual decision to confirm it says what the brief claims it says. Third, run every case you rely on as controlling authority through Shepard's Citations or KeyCite to confirm it has not been overruled. Fourth, run the completed document through a standalone citation scanner such as CiteCheck AI or JurisCheck to catch any citations that slipped through manual review.
What is the difference between argument gap risk and citation risk?
Citation risk is the risk that a cited case does not exist or does not say what the brief claims. Argument gap risk is the risk that a section of the brief makes a legal conclusion without identifying the rule that produces it, or makes a factual assertion without a supporting exhibit. Both are exploited by opposing counsel. Citation errors are caught by verification tools. Argument gaps require a human or a document analysis tool such as NexLaw's Document Insights to identify.
What happens if you file a brief with AI-generated fake citations?
Sanctions range from monetary fines to referrals for bar discipline to suspension. Documented sanctions in 2023 to 2026 include fines ranging from $2,500 to $31,100, orders to reimburse opposing counsel's fees, mandatory client notification, public reprimand, and in at least one case, suspension. For the complete court-by-court breakdown, see AI hallucination sanctions 2026.
Can using NexLaw replace the verification steps in this checklist?
No. NexLaw's Document Insights automates the argument gap and unsupported assertion checks that are most time-consuming to do manually. NeXa's retrieval-based architecture eliminates fabrication risk at the research stage by only surfacing citations it can link directly to verified source documents. But the judgment calls in this checklist require attorney review. NexLaw reduces verification time. It does not replace attorney judgment.