US v. Heppner: Why Your AI Chats Are Not Protected by Attorney-Client Privilege
If you are using ChatGPT, Claude, or Gemini to ask legal questions about your job, draft an EEOC complaint, or decode a severance agreement, you need to stop immediately. A landmark 2026 ruling in the Southern District of New York has fundamentally changed how courts view artificial intelligence and legal privacy.
In United States v. Heppner, a federal judge ruled that conversations with AI chatbots are not protected by attorney-client privilege. This means that if you sue your employer, or if they investigate you, every prompt you typed into an AI tool can be subpoenaed and used against you as evidence.
Here is a breakdown of the Heppner decision, why the judge ruled the way he did, and what every employee must know before using AI for workplace disputes.
What Happened in US v. Heppner?
The case centers on the former CEO of GWG Holdings, who was indicted for securities fraud. When he discovered he was under federal investigation, he received confidential legal strategy documents from his defense attorney.
Seeking to better understand the strategy and prepare his defense, the CEO pasted those confidential documents into Anthropic's Claude AI chatbot. When the FBI later seized his electronic devices, they discovered 31 documents containing his chat history with Claude.
His defense team immediately filed a motion to suppress the evidence, arguing that because the original documents were protected by attorney-client privilege, the AI conversation analyzing those documents should also be privileged.
The court disagreed.
Judge Jed Rakoff's Landmark Ruling
The ruling was delivered by Judge Jed S. Rakoff, a highly respected Senior Judge in the Southern District of New York. Judge Rakoff is a former federal prosecutor and a renowned legal maverick who has spent decades shaping corporate and criminal law. When he issues a ruling on a novel legal issue, courts across the country pay attention.
Judge Rakoff denied the motion to suppress the AI chats, stating his reasoning in blunt terms: "Claude is not an attorney. That alone disposes of Heppner's claim."
The court's decision rested on two critical pillars of law and technology:
1. AI is a Third Party: Attorney-client privilege only protects communications between a client and their lawyer (or the lawyer's direct agents). Because Claude is a commercial software product owned by Anthropic, pasting confidential information into the chat is legally equivalent to handing that information to a stranger on the street. By sharing the information with a third party, the privilege was waived.
2. No Expectation of Privacy: The court noted that the privacy policies of consumer AI tools explicitly state that user data may be collected, reviewed, and disclosed to government authorities or in litigation. Therefore, no user can claim a reasonable expectation of privacy when using these platforms.
How AI Privacy Policies Destroy Legal Privilege
The Heppner ruling highlights a massive trap that millions of workers fall into every day: failing to read the terms of service.
When you use a free or standard consumer AI tool, you are agreeing to data practices that completely undermine legal confidentiality. Here is a comparison of how the major AI platforms handle your data:
• Claude (Anthropic): Collects inputs and outputs. Explicitly states data can be disclosed to government authorities and in litigation.
• ChatGPT (OpenAI): May use prompts to train future models. Data is shared with third-party service providers and vendors.
• Gemini (Google): Conversations may be reviewed by human reviewers to improve services. Warns users not to share sensitive information.
Because these companies reserve the right to access, review, and share your data, courts will not protect those conversations under any legal privilege doctrine.
What This Means for Employment Lawsuits
The Heppner decision is already sending shockwaves through employment litigation. Defense attorneys representing corporations are weaponizing this ruling to demand access to employees' AI chat histories during the discovery phase of lawsuits.
If you are involved in a workplace dispute, here is how your AI usage can be used against you:
• Proving Premeditation: If you asked ChatGPT, "Can I get fired for complaining about my boss?" two months before you actually filed a complaint, your employer will argue that your complaint was not genuine, but rather a premeditated setup for a lawsuit.
• Discovering Weaknesses: If you asked an AI to "find the holes in my discrimination claim," opposing counsel can subpoena that chat and use the AI's analysis to dismantle your case.
• Drafting Evidence: If you used AI to draft your EEOC charge or an internal HR complaint, the employer may demand the original prompts to see what facts you altered, exaggerated, or omitted before submitting the final version.
4 Rules for Using AI in Workplace Disputes
If you are facing a layoff, discrimination, or a hostile work environment, you must protect your legal strategy. Follow these four rules:
1. Lawyer First, AI Second: Never use an AI tool to research your specific legal situation before consulting an employment attorney. Conversations with a licensed attorney are privileged; conversations with a chatbot are not.
2. Do Not Paste Confidential Documents: Never paste your severance agreement, non-compete clause, performance reviews, or HR emails into a public AI tool.
3. Understand the Enterprise Exception: The Heppner ruling applies to consumer AI tools. If your lawyer uses a closed, enterprise-grade AI system that guarantees data privacy and does not train on user inputs, that work product may still be protected. But as an individual employee using a free app, you have no protection.
4. Assume Your Employer Will Read It: Treat every prompt you type into an AI chatbot as if it will be printed out and handed to your company's HR department and legal team.