Yes, but only for lower-risk tasks like drafting structure, summarizing material you already understand, and brainstorming. Lawyers get sanctioned when they use ChatGPT as a legal research or citation source and file unverified output in court. The safe workflow is simple: use ChatGPT for drafting support, use verified legal databases for authority, and independently verify every source before filing.
The Question Every Lawyer Is Asking Right Now
The number of AI hallucination cases appearing in US courts has grown sharply since 2023, with legal researcher Damien Charlotin’s AI Hallucination Cases Database tracking a rapidly expanding body of judicial decisions and major outlets continuing to report new sanctions through 2025 and 2026. The problem is not whether lawyers use AI. It is whether they use it with a verification workflow.
Adoption is real and growing. The 2025 ABA Legal Industry Report found that 31% of legal professionals personally used generative AI at work, up from 27% the year before. The 2024 ABA TechReport found that 30% of attorneys work in offices currently using AI-based tools, with ChatGPT cited as the most widely used platform. Personal injury attorneys led all practice areas in individual AI adoption at 37%, according to the 2025 report.
The case that started the sanctions conversation was Mata v. Avianca, Inc. (S.D.N.Y. 2023). Attorney Steven Schwartz used ChatGPT to research an aviation personal injury claim. ChatGPT returned six case citations. All six were fabricated. Schwartz submitted them to federal court without verifying a single one against a primary legal database. When opposing counsel could not locate the cases, Schwartz doubled down and submitted what he believed were copies of the decisions. Judge Kevin Castel ordered Schwartz, his colleague Peter LoDuca, and their firm to pay a $5,000 sanction. In his testimony, Schwartz described ChatGPT as a “super search engine.” That phrase has since become the defining misunderstanding of the AI sanctions era.
That was 2023. Here is what has happened since.
From One Case to Hundreds: How Fast This Has Accelerated
The Mata case was treated as an isolated incident. It was not.
For a running record of AI sanction cases and their outcomes, see the NexLaw 2026 AI sanctions tracker and the ByoPlanet case breakdown.
Can Lawyers Use ChatGPT for Legal Research?
This is the question at the center of every sanctions case documented so far. The answer requires understanding what ChatGPT actually is, not what it appears to be.
Schwartz’s description of ChatGPT as a super search engine captures the misunderstanding precisely. ChatGPT is not a search engine. It does not retrieve information from legal databases. It does not have access to Westlaw, LexisNexis, or any primary legal source. What it does is generate the next most probable word based on patterns learned from its training data. It is a prediction engine.
When you ask ChatGPT to find cases supporting a proposition about personal injury liability, it does not search for real cases. It generates text that looks like what a response to that query would look like. Case citations have a recognizable format: party names, reporter abbreviation, volume number, page, court, year. ChatGPT generates text in that format. The specific case it produces may or may not correspond to a case that actually exists. ChatGPT cannot tell you which.
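A minimal sketch makes that gap concrete. The Python snippet below runs a simplified citation-format check against two citation strings, one real and one fabricated in Mata; the pattern and the specific citation strings are illustrative only. Both pass, because format validity says nothing about existence.

```python
import re

# Simplified pattern for a federal reporter citation:
# party v. party, volume reporter page (court year)
CITATION_PATTERN = re.compile(
    r".+ v\. .+, \d+ F\.(?: Supp\.)? ?\d*d? \d+ \(.+ \d{4}\)"
)

citations = [
    # Real: the Mata sanctions decision itself (citation string for illustration)
    "Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)",
    # Fabricated: one of the citations ChatGPT invented in Mata
    "Varghese v. China Southern Airlines Co., 925 F.3d 1339 (11th Cir. 2019)",
]

for cite in citations:
    result = "looks like a citation" if CITATION_PATTERN.match(cite) else "malformed"
    print(cite, "->", result)

# Both lines print "looks like a citation." Only a primary-source lookup
# can tell you which case actually exists.
```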
Stanford HAI researchers tested leading legal AI tools and found they hallucinate in at least 1 in 6 queries. General-purpose chatbots like ChatGPT, by contrast, hallucinated between 58% and 82% of the time on legal queries in an earlier Stanford study. The Boston Bar Association addressed this directly in March 2026, noting that ChatGPT is designed to please the user, whether or not it can find real cases to support the argument it is asked to make.
There is one additional failure mode that appears consistently in sanctions cases. When Schwartz asked ChatGPT whether the Varghese case was real, ChatGPT confirmed that it was real and that it could be found on Westlaw and LexisNexis. It could not. Asking ChatGPT to verify its own citations does not work. It will generate a plausible-sounding confirmation because that is the most probable response to that prompt.
The real problem is not negligence. It is speed. Lawyers use ChatGPT because they are busy. The trouble starts when a time-saving drafting tool quietly becomes a research source, and nobody catches the shift before the filing goes out. For more context on how hallucinations reach court filings, see AI hallucinations hitting US courts: when AI generates fake cases and AI hallucination legal risk for US litigators.
What Does ABA Formal Opinion 512 Require?
The ABA issued Formal Opinion 512 in July 2024, the first formal ethics guidance specifically addressing generative AI. It does not prohibit using AI. It establishes that existing Model Rules apply to AI use and that ignorance of AI limitations is not a defense.
Competence
Lawyers must understand the capabilities and limitations of AI tools they choose to use. The duty of competence now includes technological competence. You do not need to become an AI engineer. You do need to understand that ChatGPT generates probabilistic text, not verified legal research, and that its outputs require independent verification before use in court filings.
Confidentiality
Client information entered into the free or consumer versions of ChatGPT is not protected. OpenAI's consumer products may use inputs for training. The opinion requires informed client consent before using self-learning AI tools with client information. Boilerplate provisions in engagement letters are not sufficient.
Meritorious Claims
AI-generated hallucinated citations can form the basis of frivolous claims and arguments, triggering Rule 3.1 exposure in addition to candor violations.
Candor Toward the Tribunal
Filing fabricated citations is a candor violation. The attorney who signs the filing is the one who has violated the rule, not the tool that generated the citation. This has been affirmed in every major sanction case.
FRCP Rule 11
All filings must be warranted by existing law or a nonfrivolous argument. The signing attorney is responsible for every matter in the pleading regardless of who or what authored the first draft. In Johnson v. Dunn, the court declined to accept as an excuse that the hallucinated citation was inserted by a supervisor rather than by the signing attorney.
The opinion’s sharpest language is worth noting directly: “Lawyers’ uncritical reliance on content created by a GAI tool is risky and almost certainly malpractice.” Formal Opinion 512 was published in July 2024. Every lawyer practicing today is on notice that it exists.
What Tasks Can Lawyers Safely Use ChatGPT For?
Courts and the ABA have been consistent: using AI is not the problem. The problem is using AI for tasks where unverified output goes into a court filing.
Lower risk when used with human review and confidentiality controls:
- Drafting structural outlines for briefs and motions, provided the substance comes from your own verified research
- Reorganizing an argument you have already developed, improving headings, or producing a cleaner draft of a section you have written
- Summarizing documents you have already read yourself, provided you protect confidential client information and independently review the summary for accuracy
- Generating ideas for argument angles, counterarguments, or research directions, treating those ideas as starting points that require verification, not conclusions
- Drafting client communications and internal memos that do not go to the court, with appropriate confidentiality controls applied to the tool you use
Where Sanctions Happen
- Using ChatGPT to produce case citations for court filings without verifying each citation in a primary legal database before filing
- Using ChatGPT to find case law supporting a proposition you need to support in court
- Asking ChatGPT to verify that a case says what you think it says
- Having a paralegal use ChatGPT to draft a brief, then signing it without independently reading and verifying the citations
The ByoPlanet sanctions made the paralegal problem explicit. Attorney Paul’s paralegal was drafting submissions and Paul was tweaking them without always reviewing the citations. Judge Leibowitz found this constituted the unauthorized practice of law. The signing attorney’s obligation to verify cannot be delegated.
A useful framing from the 12AM Agency guide to ChatGPT for lawyers: treat ChatGPT like a first-year associate. You would not file a brief written by a junior without reviewing it. Every citation, every proposition of law, every claim in a court filing is yours, regardless of who or what drafted the first version.
Check whether your AI research workflow verifies every citation before filing.
See how NeXa links every legal answer to the primary source.
What Happens If a Lawyer Files Fake AI Citations?
Documented consequences from 2023 to 2026:
- Monetary sanctions: $2,000 to $86,000 in a single case
- Filing requirements: required to attach the sanction order to all future filings for two years
- Bar referrals: multiple 2025–2026 cases, including the ByoPlanet referral to the Florida Bar
- Suspension: Texas attorney Zachariah Crabill, suspended for citing nonexistent cases in a custody motion
- Pro hac vice revoked: at least one 2025 case resulted in revocation of pro hac vice admission
The Noland decision (California Court of Appeal, 2025) added a further dimension. The court declined to award attorneys’ fees to opposing counsel partly because opposing counsel had failed to detect and report the fabricated citations. Courts are beginning to signal that both sides have a duty to spot AI hallucinations, not only the party who filed the brief containing them.
Johnson v. Dunn (N.D. Ala., July 2025) stated explicitly that monetary sanctions are proving insufficient to change behavior. The trajectory from 2023 to 2026 is from modest fines toward career consequences.
The Safe Workflow: 3 Steps Every Sanctioned Attorney Skipped
Every documented sanction case shares a common failure pattern: not one of the sanctioned attorneys had all three of these steps in place before filing.
Step 1: Separate research from drafting in your AI workflow. Use ChatGPT or similar tools for drafting, structure, and idea generation. Use a source-linked legal research tool for any task that produces citations. The tools serve different purposes and should not be substituted for each other.
Step 2: Verify every citation against the primary source before it goes into a filing. Pull the case. Read the section your citation relies on. Confirm the holding says what your citation claims. Confirm the case has not been overruled. This step cannot be delegated to a paralegal without the supervising attorney also completing it. It cannot be completed by asking ChatGPT whether the case is real.
Step 3: Never ask ChatGPT to verify its own citations. Schwartz asked ChatGPT whether Varghese was real. It said yes. It was not. ChatGPT generates a plausible-sounding confirmation because that is the most probable response to that prompt. If you cannot locate a case that AI cited by searching Westlaw or LexisNexis directly, stop. Do not file it. The case almost certainly does not exist.
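As a purely mechanical aid to Step 2, not a replacement for it, here is a hedged sketch of a pre-filing checklist generator. The citation pattern, the draft file name, and the checklist format are all illustrative assumptions; the actual verification still happens when the signing attorney pulls each case in Westlaw or LexisNexis and reads it.

```python
import re
from pathlib import Path

# Rough pattern for reporter citations. Deliberately over-inclusive:
# better to review a non-citation than to miss a real one.
CITE_RE = re.compile(
    r"[A-Z][A-Za-z'&.,\- ]+ v\. [A-Za-z'&.,\- ]+, \d+ [A-Za-z0-9. ]+ \d+ \([^)]*\d{4}\)"
)

def build_checklist(draft_path: str) -> list[str]:
    """Return every citation-shaped string found in the draft, deduplicated."""
    text = Path(draft_path).read_text(encoding="utf-8")
    return sorted(set(CITE_RE.findall(text)))

if __name__ == "__main__":
    # "motion_draft.txt" is a hypothetical file name used for illustration.
    for cite in build_checklist("motion_draft.txt"):
        print(f"[ ] Pulled and read in a primary database: {cite}")
        print("    [ ] Holding supports the proposition cited")
        print("    [ ] Not overruled or superseded")
```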
Legal AI tools built on retrieval-augmented generation change the structure of this problem. Rather than generating citations from pattern prediction and requiring you to verify from scratch, RAG-based tools query primary legal databases before generating output. Every result includes a source link. Verification becomes checking a source, not running an independent database search. NeXa legal research is built on this architecture, querying verified primary sources across all 50 states and federal circuits before returning any research output.
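The retrieve-then-generate structure can be sketched abstractly. The sketch below is not NeXa's implementation; it uses only a toy keyword retriever, and the class, function names, and ranking are assumptions. What it shows is the property that matters: every citation in the output is drawn from a verified corpus and carries a link back to its source.

```python
from dataclasses import dataclass

@dataclass
class Authority:
    citation: str   # reporter citation of a verified, real case
    url: str        # link back to the primary source
    excerpt: str    # passage actually retrieved from the opinion

def retrieve(query: str, corpus: list[Authority], top_k: int = 3) -> list[Authority]:
    """Stand-in retriever: rank verified authorities by naive term overlap.
    A production system would use a search index or vector store instead."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda a: len(terms & set(a.excerpt.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer_with_sources(query: str, corpus: list[Authority]) -> str:
    """Retrieve first, then draft. Every cited case comes from the corpus,
    so each citation carries a link a human can open and read."""
    hits = retrieve(query, corpus)
    lines = [f"Research notes for: {query}", ""]
    for hit in hits:
        lines.append(f"- {hit.citation} ({hit.url})")
        lines.append(f"  Retrieved passage: {hit.excerpt}")
    return "\n".join(lines)
```

Because the drafting step can only cite what the retrieval step returned, a fabricated citation has nowhere to enter the output. The residual risk shifts to whether the retrieved passage actually supports your proposition, which is why the source link still has to be opened and read before filing.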
For a deeper look at how NexLaw approaches citation verification for litigators, see why lawyers use NexLaw and how NexLaw compares to other legal AI tools.
Use verified legal AI. Every citation links to its source.
No case goes into your filing unverified.


