In June 2023, Judge P. Kevin Castel of the Southern District of New York sanctioned two attorneys and their law firm after they submitted a legal brief containing six entirely fabricated court cases — all generated by ChatGPT. The case was Mata v. Avianca, Inc. The attorneys did not verify a single citation. When caught, they doubled down and submitted more AI-generated fabrications to the court. Judge Castel fined them $5,000 and found they had acted with subjective bad faith under Rule 11.
That case was about citation hallucination. But the AI risk facing litigators in 2026 goes further — into privilege, confidentiality, and work product protection. Every time confidential client information is entered into a consumer AI platform, a different kind of damage is done, one that opposing counsel can exploit in discovery. Here is what your firm needs to know.
Consumer vs. Enterprise AI: The Distinction That Protects Your Practice
Not all AI carries the same legal risk. The distinction that matters for privilege and confidentiality is not which tool is most powerful; it is whether the platform is built for enterprise legal use and whether attorney supervision is documented.
What Your Firm Must Do Before Using Any AI Tool on a Legal Matter
Here is the complete checklist your firm needs before uploading any client information to any AI platform:
1. Get a signed Business Associate Agreement (BAA) for any matter involving Protected Health Information. Under HIPAA, any third party processing PHI must sign a BAA before a single file is uploaded. No BAA, no upload. This applies to PI firms, medical malpractice practices, and any litigator working with medical records.
2. Read the vendor's data-use policy. If the policy permits the vendor to retain, review, or train on your inputs, as Anthropic's consumer policy does, that platform is not appropriate for privileged legal work. This is not an edge case.
3. Document attorney-directed workflows. This is the single most important procedural step: document that every AI workflow was initiated and supervised by the attorney of record. Undirected AI use by clients or non-attorney staff is a work product vulnerability.
4. Confirm data isolation in writing. Cross-firm data visibility is a confidentiality breach waiting to happen. Confirm in writing that your vendor isolates every firm's data completely.
5. Require SOC 2 documentation. Independent security auditing is the baseline for enterprise legal technology. Any vendor that cannot produce SOC 2 documentation should not be processing your client files.
6. Instruct clients not to use AI on their own. As reported in The Legal Intelligencer, your clients are already typing their case details into consumer chatbots. The conversation you need to have at intake is simple: "Do not use any AI tool to think through your case unless I direct you to." That one instruction could save a case.
How NexLaw Is Built for Privilege and Work Product Protection
NexLaw was built from the ground up for litigation teams. Privilege protection is not a feature — it is the architecture.
No data retention for model training
Your client data is never used to train NexLaw’s models. Not for improvement. Not for research. Not for any purpose beyond delivering your output. What goes in stays yours — contractually guaranteed.
Full data isolation by firm
Every firm’s data on NexLaw is completely isolated from every other firm’s data. There is no cross-firm visibility, no commingling, and no shared model that could surface one firm’s information to another.
Attorney-directed workflows
Every NexLaw workflow is designed to operate under attorney supervision. Outputs are presented for attorney review and verification — not as final work product. The attorney remains in the loop at every stage.
SOC 2 Type II certified and HIPAA compliant
NexLaw operates on SOC 2 Type II certified infrastructure with end-to-end encryption in transit and at rest. For practices handling PHI, NexLaw signs a Business Associate Agreement with every firm before any protected health information is uploaded. Full security documentation is available at nexlaw.ai/trust-center.
ABA Formal Opinion 512 compliant
The American Bar Association extended its cloud-computing framework to generative AI in ABA Formal Opinion 512, requiring lawyers to conduct due diligence on AI vendors and take reasonable steps to safeguard client data. NexLaw’s enterprise architecture is designed to meet that standard.
Mata v. Avianca established one principle that has only become more important since 2023 — AI outputs in legal work require human verification, attorney supervision, and the right platform. The answer is not to stop using AI. It is to stop using consumer AI for legal work and build every workflow on a platform with contractual confidentiality, attorney-directed supervision, and the security certifications that give your clients real protection.
Use AI without risking your clients' privilege
NexLaw is built for enterprise legal security. SOC 2 certified. HIPAA compliant.
Attorney-directed workflows. BAA included. See the difference in 3 days.