Oh No! AI Strikes Again. But Is That Three Strikes Against Using It?

We’ve all heard about AI’s role in arbitration and litigation. There are those infamous cases (mentioned in my earlier articles) where lawyers, hoping for a little digital magic, ended up with a lot of digital embarrassment. Not because a computer wrote their brief, but because it hallucinated cases. Yes, it made up cases to support arguments. And guess what? It just happened again.

This time, the drama unfolded in federal court in Texas. The case is Gauthier v. Goodyear Tire & Rubber Co., No. 1:23-CV-281 (E.D. Tex. Nov. 25, 2024). The plaintiff was suing for wrongful termination, and in opposing summary judgment, the plaintiff’s brief cited cases that didn’t exist. It even included quotes that were nowhere to be found in the cited cases. Oops.

When opposing counsel pointed this out, the plaintiff’s lawyer still didn’t fix the problems. The court wasn’t amused and issued a show cause order asking why the lawyer shouldn’t be sanctioned under Rule 11 of the Federal Rules of Civil Procedure. The rule, as you might recall, requires lawyers to certify that their claims and contentions are warranted by existing law or non-frivolous arguments for changing the law. Plus, the Texas court has a specific rule to ensure AI-cited authorities are genuine and not figments of the AI’s imagination.

It’s AI’s Fault!

The lawyer admitted he had used the generative AI tool Claude to draft the brief and claimed he used Lexis AI to check for hallucinations, but the check failed to spot them. He apologized and asked for permission to resubmit the briefs without the made-up parts.

You Can’t Just Blame AI

The court wasn’t entirely sympathetic. It noted that citing non-existent cases wasted time, money, and resources. The court was particularly unhappy that the lawyer hadn’t checked the authenticity of the cases until after the show cause order was issued. The attorney was fined $2,000, ordered to attend a class on the ethical use of AI, and required to provide his client with a copy of the court’s order. The court did allow the lawyer to correct his response and let the defendant file a new reply.

So, Should We Give Up on AI?

Not quite. AI can still be incredibly useful and help serve clients efficiently. It can find law quickly and often accurately. It doesn’t just give you a list of cases; it can explain them in prose.

It can even polish your writing. I drafted this article, then pasted it into Bing AI and asked it to make it more “fun to read.” I’m not sure it succeeded entirely, but the result is a little more fun than the original. Still, I had to take out some expressions and exclamation points that were a little too “fun.”

I recently asked an AI tool about courts vacating arbitration awards in certain circumstances. It listed two examples of cases vacating arbitration awards. But when I read the cases, they both upheld the awards at issue. Uh oh.

That’s the thing about AI. You can’t trust it, at least not yet, as far as I can see. AI wants to please you, sometimes too much. So it might make up things that sound good and would help you if they were true. But if they aren’t, you can end up in hot water.

This isn’t a completely new problem. Senior lawyers have always been responsible for checking the work of newer lawyers and clerks. It would be rare for someone to just make up cases. But it is entirely possible that an overzealous newer lawyer might mischaracterize a case to make it seem more supportive of an argument than it really is. Or they may edit a quotation from a case in a misleading way. Mistakes and mischaracterizations can happen whether the work is done by a human or by AI.

There are, I should note, AI systems built specifically for legal research and writing that claim to eliminate hallucinations. They would be worth checking out. But given the track record, I would still keep a watchful eye out for problems.

Accept Help, But Remain in Charge

Think of AI as an eager assistant that wants to please you, perhaps a little too much at times. With that in mind, it can present you with interesting arguments and ready access to authorities relevant to your issues. But you can’t blindly trust it any more than you can blindly trust a brand-new associate or clerk. Less so, in fact. Remain skeptical. Courts hold lawyers, not AI, responsible for the briefs they file.

And as the ABA guidance on the ethics of using AI notes (discussed in an earlier article), lawyers can never abandon their responsibility to exercise their judgment and experience in representing clients. They can never put AI in charge of that.
