Using AI Tools Can Destroy Attorney-Client Privilege: What Every Client Needs to Know
In a landmark 2026 ruling, a federal court held that using a publicly available AI tool — such as ChatGPT or Claude — without attorney direction destroys attorney-client privilege. The platforms’ privacy policies eliminate the confidentiality that privilege requires. Ohio clients in criminal, civil, or family law matters should stop sharing case details with consumer AI tools immediately.
If you have been researching your legal situation using ChatGPT, Google Gemini, Claude, or any other publicly available artificial intelligence tool, this article is essential reading. A February 2026 federal court decision — the first of its kind in the nation — has fundamentally changed what clients and attorneys need to know about AI tools and the protection of confidential legal communications.
The short version: typing your case facts into a free AI chatbot may permanently waive attorney-client privilege over that information. What you tell an AI platform, a court can order you to hand over to the government or opposing counsel.
Here is what happened, why it matters to you as an Ohio resident, and what you need to do now.
The Landmark Case: United States v. Heppner (S.D.N.Y. 2026)
In United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026), U.S. District Judge Jed S. Rakoff ruled that thirty-one documents created by defendant Bradley Heppner using Claude — a publicly available generative AI tool — were not protected by attorney-client privilege or the work-product doctrine. As the National Law Review reported, this was a case of first impression with potential nationwide implications for anyone seeking to withhold information from government investigations and civil litigation.
Heppner faced charges of securities fraud, wire fraud, conspiracy, making false statements to auditors, and falsifying corporate records. After receiving a federal grand jury subpoena, he used Claude on his own initiative — without attorney direction — to analyze his potential legal exposure, explore possible defenses, and draft responses. He later shared the AI-generated documents with defense counsel. Government agents seized the documents from his computer during a search of his residence.
The government moved to compel production. The court granted the motion, holding the documents were neither privileged nor work product.
Why the Court Rejected Privilege — All Four Grounds
According to a detailed analysis by Epstein Becker & Green, P.C. in the National Law Review, Judge Rakoff rejected every argument for privilege protection on four independent grounds:
1. The documents were communications between the defendant and a third-party software platform, not between the defendant and his lawyer. Counsel’s involvement in generating the documents is required; sharing the results with an attorney afterward does not retroactively create privilege.
2. Anthropic’s privacy policy — which every Claude user must accept — explicitly states that the company collects user inputs and AI outputs, uses that data to train the AI, and reserves the right to disclose that data to governmental regulatory authorities. There is no reasonable expectation of confidentiality.
3. Heppner communicated with Claude on his own initiative, not at the direction of counsel for the purpose of obtaining legal advice. The court also noted that Claude expressly disclaims being a lawyer and states it cannot provide legal advice — privilege cannot attach to communications with such a tool.
4. The work-product doctrine protects materials prepared by or at the direction of counsel in anticipation of litigation. Because the AI documents were not prepared at counsel’s request and did not reflect defense counsel’s strategy, they received no protection.
As K&L Gates explained, courts evaluating privilege claims over AI data will focus on whether the platform is open or closed, whether contractual confidentiality protections exist, and whether counsel directed or supervised the AI use.
A Contrasting Case: When AI Use Was Protected
Not every AI-related privilege ruling has gone against privilege. As K&L Gates noted, in Warner v. Gilbarco (E.D. Mich., Feb. 10, 2026), a court found that AI-assisted internal analysis and drafting were protected by the work-product doctrine. The key distinction: the AI use in Warner was directed by counsel, and the AI-generated content was not disclosed to an adversary.
The contrast between Heppner and Warner illustrates the variables courts are examining: who directed the AI use, what platform was used, and whether confidentiality was preserved throughout. The outcome hinges on facts, not just the technology.
What the ABA Says: Formal Opinion 512 and Rule 1.6
The American Bar Association has weighed in on these risks. As discussed in ABA Business Law Today, ABA Formal Opinion 512 identifies confidentiality as a central ethical concern when attorneys use generative AI. Under Model Rule 1.6 (Confidentiality of Information), lawyers have a duty to make reasonable efforts to prevent unauthorized disclosure of client information — an obligation that extends to every AI tool used by attorneys and their staff.
The ABA specifically warns that lawyers must understand whether the AI systems they use are “self-learning” — meaning whether those systems will use client-provided information to train future responses that could resurface for other users. As Frantz Ward LLP explains, public generative AI systems may retain inputs as part of their training data, raising the possibility that privileged content could appear in future responses to entirely different users.
The Hidden Risk: Staff, Paralegals, and Client Self-Research
The privilege problem is not limited to clients acting on their own. Bloomberg Law has reported that even when an attorney is careful, a paralegal or legal assistant working at the attorney’s direction may independently rely on public AI tools and unknowingly expose privileged information.
This creates a firm-wide compliance problem. Law firms that do not have clear, enforced policies governing AI tool use by every person who touches client files — attorneys, paralegals, legal assistants, and administrative staff — face the same waiver risks as individual clients who use AI on their own.
Many people today turn to ChatGPT or similar tools to understand their legal situation before, during, and even after retaining an attorney. If a client types case-specific facts into a public AI tool at any point during their representation, those inputs may be discoverable by the government or opposing counsel.
AI Privilege Risk at a Glance
Use this quick-reference table to understand where privilege risk is highest under the Heppner framework:
| Scenario | Privilege Risk | Protection Level |
|---|---|---|
| Consumer AI — no attorney direction (e.g., free ChatGPT, standard Claude) | Very High | None — Heppner outcome |
| Client uses AI independently to research their own case | Very High | Likely full waiver |
| Paralegal uses public AI tool without attorney oversight | High | Minimal |
| Attorney-directed AI use on a public platform | Moderate | Kovel argument possible; not guaranteed |
| Enterprise AI with contractual confidentiality, attorney-directed | Lower | Stronger — still not guaranteed |
What Ohio Clients Should Do Right Now
Whether you are involved in a criminal defense matter, a custody dispute, a domestic violence case, or any other legal proceeding, these steps apply to you:
1. Stop using consumer AI tools for case-related research or analysis. Do not input any facts about your legal situation into ChatGPT, Claude, Gemini, Copilot, or any other publicly available AI platform.
2. Tell your attorney what you have already shared with AI tools. Early disclosure allows your legal team to assess and potentially limit the damage before it becomes a problem in court.
3. Do not send case-related information to any AI tool via email, chat, or prompt. Even pasting in a document you received from your attorney could expose privileged communications.
4. Ask your attorney about their firm’s AI policy. Reputable firms should have clear written policies governing how AI is used by attorneys and staff in connection with your case.
5. Use AI for general education only. Reading publicly available articles about how criminal defense works is very different from entering your specific case facts into an AI prompt.
Frequently Asked Questions
Does using ChatGPT or Claude destroy attorney-client privilege?
It can — and now there is a federal court ruling that says exactly that. In United States v. Heppner, Judge Rakoff held that documents created using a publicly available AI tool were not protected by attorney-client privilege because the AI platform’s privacy policy eliminates the confidentiality that privilege requires. The risk applies to any publicly available AI tool whose provider retains, trains on, or discloses user inputs.
What is the Kovel doctrine and can it protect AI-assisted work?
The Kovel doctrine extends attorney-client privilege to non-attorney third parties — such as accountants or translators — when those parties help the attorney provide legal advice. Judge Rakoff acknowledged in Heppner that AI might potentially qualify as a Kovel agent if the client used it specifically at counsel’s direction and with a reasonable expectation of confidentiality. However, the court found those conditions did not exist. This narrow path requires active attorney direction and a platform with meaningful confidentiality protections.
Is work-product protection affected by using AI?
Yes. Courts apply the same traditional work-product framework to AI-generated content. Materials prepared independently by a client, without counsel’s direction, do not qualify — even if they address litigation strategy. Materials generated at the explicit direction of counsel to analyze litigation claims or defense strategy may qualify, particularly if the AI platform has robust confidentiality terms. The question is always who directed the AI use and for what purpose.
What AI tools are safest to use during a legal case?
Enterprise-grade AI platforms with contractual confidentiality terms — where the provider does not retain, train on, or share user inputs with third parties — present lower (though not zero) risk when used under attorney direction. Free consumer tools including standard ChatGPT accounts, standard Claude.ai accounts, and Google Gemini carry the highest risk. No publicly available consumer AI tool should be used to input confidential case information.
What should I do if I already used a public AI tool while working with my attorney?
Tell your attorney immediately. Early disclosure allows your legal team to assess the scope of potential waiver, take steps to limit further exposure, and advise on how to address the issue before it becomes a problem in litigation. Delaying disclosure typically makes the situation worse. This is not a situation where hoping no one finds out is a viable strategy.
Does this ruling apply in Ohio courts?
United States v. Heppner was decided by a federal court in New York and is not binding on Ohio state courts. However, the reasoning rests on foundational privilege principles that apply universally, and Ohio courts analyzing similar issues are likely to reach comparable conclusions. Additionally, federal criminal cases in Ohio are governed by federal law, and Heppner’s reasoning could be directly persuasive in federal proceedings in the Northern and Southern Districts of Ohio.
External Sources & Citations
1. National Law Review — Epstein Becker & Green, P.C. (March 12, 2026): “Claude Is Not an Attorney” — Individuals Risk Abandoning the Attorney-Client Privilege and Attorney Work-Product Doctrine When Consulting AI
2. K&L Gates (Feb. 23, 2026): Litigation Minute: Generative AI Data, Attorney-Client Privilege, and the Work-Product Doctrine
3. JD Supra — Harris Beach Murtha (Feb. 18, 2026): In a First, Court Finds Using AI Tools Ends Attorney-Client Privilege
4. Frantz Ward LLP (Aug. 2025): Privilege Considerations When Using Generative AI in Legal Practice
5. Bloomberg Law (Jan. 2025): Careless Generative AI Use Puts Attorney-Client Privilege at Risk
6. ABA Business Law Today (Sept. 2024): AI and Attorney-Client Privilege: A Brave New World for Lawyers
Have Questions About AI and Your Ohio Legal Case?
At HV Legal, LLC, we counsel clients on protecting privileged communications from the moment we begin working together. Do not wait for the government or opposing counsel to raise your ChatGPT history. Talk to us first.
