Walters v. OpenAI: When AI Hallucinations Meet Defamation Law
A few weeks ago, we looked at a meeting between patents and AI. That didn’t go well for AI. Saying “have the robot do it” isn’t an invention.
This week, let’s look at what happens when AI meets defamation law. This time AI came out pretty well.
One more thing before we start. I asked ChatGPT to draft this article. But, proving the point of the case, it made a few errors. The worst was probably saying the plaintiff claimed he was falsely accused of sexual harassment. He wasn’t. He was falsely accused of embezzlement. (AI just won’t give that guy a break!) AI also made up quotes by the court. Don’t worry, I read the case and fixed all that. But this emphasizes why you can’t just accept what AI gives you. You have to check.
OK. Let’s have AI tell us about the case (with a few corrections made by a human):
The Facts
In Walters v. OpenAI, No. 23-A-04860-2 (Ga. Super. Ct. May 19, 2025), Mark Walters, a nationally syndicated radio host and prominent gun rights advocate, filed a defamation lawsuit against OpenAI in 2023. The suit stemmed from an incident in which a journalist used ChatGPT to summarize a lawsuit filed by the Second Amendment Foundation (SAF) against Washington’s Attorney General. ChatGPT erroneously generated a response falsely stating that Walters, who had no involvement in the case, was accused of embezzling funds from SAF. The journalist recognized the error, verified the actual complaint, and did not publish the false information. Nonetheless, Walters sued OpenAI, alleging that the AI-generated false statement was defamatory.
The Holding
The Superior Court of Gwinnett County, Georgia, granted summary judgment in favor of OpenAI, rejecting Walters' defamation claim. The court's decision rested on several key findings:
1. No Defamatory Meaning: The court determined that the ChatGPT output could not be reasonably understood as stating actual facts about Walters. Given ChatGPT's disclaimers about potential inaccuracies and the journalist's acknowledgment of the error, the court concluded that no reasonable reader would interpret the AI's response as a factual assertion. The court stated: "No reasonable person would interpret the AI-generated content in question as a literal or factual assertion, particularly in light of the well-known limitations and disclaimers attached to the tool."
2. Neither Actual Malice nor Negligence: Because the court decided Walters was a public figure, he was required to prove that OpenAI acted with "actual malice," that is, with knowledge of the falsity or reckless disregard for the truth. The court found no evidence that OpenAI had such knowledge or recklessness. It emphasized that "Plaintiff has not presented any evidence that Defendant knowingly published false information or acted with reckless disregard to its truth or falsity."
OpenAI wasn’t negligent either. It took reasonable steps to train its model in an effort to eliminate mistakes and warned users that mistakes could still happen.
3. No Demonstrated Damages: Walters admitted that he suffered no actual harm from the incident, as the false information was not published or disseminated beyond the initial interaction. Under Georgia law, plaintiffs must request a correction or retraction before seeking punitive damages, which Walters did not do.
Final Thoughts
This decision suggests that, at least under current legal standards, AI developers like OpenAI may not be held liable for isolated inaccuracies generated by their models, especially when they have taken steps to warn users and mitigate errors.
Of course, this was one court and one unique fact situation. We can expect courts to be presented with many further encounters between AI and the law. That should be interesting.