AI, Privilege, and the Courts: Reading Heppner After Warner

Not long ago, I wrote an article about the recent case of United States v. Heppner. Over the last few weeks, thinking about that case, I imagine many lawyers have had the same conversation. “So, I just read about United States v. Heppner. It almost seems that, if a client uses AI while working on a case, privilege for the prompts and answers is gone, right?”

If you skimmed some of the early commentary about Heppner, that reaction would be understandable. A number of articles made the decision sound as if the mere use of generative AI—especially by a pro se litigant—automatically destroys privilege. I even gave my own article a headline that read “Even Smart Computers Aren’t Your Lawyer.”

That is true, but I have now read the even more recent decision in Warner v. Gilbarco, which provides some more perspective. It confirms that courts are not inventing a new anti-AI rule of privilege. They are applying the same principles lawyers have been living with for decades to different fact patterns that involve AI. Let’s take a closer look.

Heppner, in Brief

First, back to Heppner. There, the defendant communicated with Claude, a generative AI system, and later argued those exchanges were protected by attorney-client privilege and the work-product doctrine. The case is all the more serious because the defendant asked Claude about a grand jury subpoena that had just been served on him. The court rejected both arguments.

It reasoned this way: The communications were not between the defendant and counsel; they were between the defendant and an AI platform. The court also concluded the exchanges were not confidential in light of the platform’s policies governing storage and potential use of user inputs. The work-product argument failed for a related reason: the material had been generated by the defendant independently, not at the direction of counsel and not as part of counsel’s litigation strategy. In short, the court treated the AI exchanges as the defendant’s personal use of a public tool, not as part of the attorney-client relationship or the lawyer’s preparation of the case.

Warner Looks at the Problem Differently

A few weeks later, a federal court in Michigan approached a similar issue. In Warner v. Gilbarco, the defendants attempted to compel discovery about a pro se plaintiff’s use of generative AI tools during the litigation. The court declined.

That court reasoned this way: What the defendants were really seeking was the plaintiff’s internal drafting process and litigation analysis. That’s exactly the kind of material the work-product doctrine is designed to protect. The court also rejected the argument that using an AI tool automatically waives work-product protection. Waiver generally requires disclosure to an adversary, or at least disclosure in a way likely to place the information in an adversary’s hands. Simply using software to assist in drafting does not do that.

AI as “Just a Tool”

Two details from Warner are easy to miss but worth noting. First, the plaintiff was proceeding pro se, so lawyers were not running the prompts. Even so, the court recognized that a pro se litigant may assert work-product protection for litigation materials reflecting mental impressions or strategy. In effect, the litigant was performing the role a lawyer normally would, and the doctrine still applied.

Second, in its most memorable line, the court observed that generative AI systems are “tools, not persons.” So, if AI is simply a drafting or analytical tool, its role is not fundamentally different from many other technologies lawyers already use.

SaaS Reality and Modern Law Practice

I recently learned a new abbreviation: “SaaS” for “Software as a Service.” It is easy to forget how much of modern legal practice already runs through third-party software, i.e., SaaS. Lawyers routinely store documents in platforms such as Microsoft 365, Google Workspace, and Dropbox. Email itself—whether Gmail or Outlook—runs through servers someone else operates. Courts have recognized that using those systems does not automatically destroy privilege.

Warner implicitly reflects this reality. It treats AI as a software tool, thus placing it alongside the many software-as-a-service platforms lawyers already rely upon every day. The privilege analysis therefore turns on familiar questions—confidentiality, disclosure, and whether the material reveals protected litigation strategy—not on the mere fact that an algorithm helped produce the text.

Reading the Cases Together

Once the facts are lined up, the two decisions make sense and aren’t really contradictory. In Heppner, the litigant sought to shield his own communications with a public AI system that were neither confidential nor attorney directed. In Warner, the opposing party sought access to a litigant’s internal reasoning and drafting process. Those are different questions, and the doctrines apply differently to each.

Both courts relied on the same traditional principles. Attorney-client privilege protects confidential communications made for the purpose of obtaining legal advice. The work-product doctrine protects the mental impressions and strategies developed in preparing a case. AI did not change those rules; it simply gave courts a new factual setting in which to apply them.

Where This Leaves Us

So, what should lawyers take away from these early cases?

First, do not overread Heppner. It does not stand for the sweeping proposition that AI use automatically destroys privilege. The decision turned on specific facts—particularly the lack of confidentiality and the absence of attorney involvement.

Second, Warner suggests courts will be reluctant to allow discovery simply because a litigant used AI while drafting or analyzing a case. If the request effectively seeks litigation strategy or mental impressions, the work-product doctrine applies.

Finally—and perhaps most importantly—the law is still developing. Courts are only beginning to deal with these issues, and technology will evolve faster than the decisions. For now, the safest approach is also the oldest one. Understand the tools you are using, understand where the data goes, and assume courts will continue asking the same questions they always have about confidentiality and litigation strategy.

In short, the technology may be new, but the privilege rules are not.

Bonus tips

And I have one last set of tips for you. If you are using Claude, ChatGPT, or some other AI platform, go into your settings and turn off “improve the model for everyone,” or the like. You don’t want the system training on your data; that isn’t keeping things confidential. This step may not resolve every concern Heppner raised about third parties having access to the data. But limiting that access will help you invoke the Warner court’s point that the test is disclosure in a way likely to place the information in an adversary’s hands. Less exposure of the data makes that less likely.

Also, don’t archive your chats forever. If you don’t need them, delete them (unless you are on a litigation hold or the like). That reduces the chance that someone who shouldn’t have them will get hold of them.

Next, find the agreements you have with the AI platform. You may not know you have one, but you do. Then figure out what it says about keeping your data private and what the platform can use your data for. Look at the terms of use, privacy, and data-usage policies of ChatGPT, for example. They explain how your data is collected, kept, and used. That should help you decide whether you should share sensitive information with the platform.

Finally, if you are at a firm with an IT department, get in touch with them to see how they are dealing with AI and confidentiality. In the end, it is your responsibility to take steps to make sure you aren’t waiving privilege by using AI.
