AI and Experts: Oh My!

I've been on an AI writing spree lately. At first, the burning question was: should we even let AI handle legal work? There were some famous blunders, like citing non-existent cases and attributing bogus quotes. Courts were not amused.

But AI is here to stay, so we might as well figure out how to use it. Let's learn from others' missteps with two recent tales.

The Case of the Objecting Nephew and Honest Co-Pilot

Picture this: Saratoga County Surrogate's Court in New York. Michael S. Weber passed away, leaving his sister, Susan, as trustee of a trust created for his son, Owen.

Susan petitioned for a Judicial Settlement, but Owen objected. He accused Susan of breaching her fiduciary duties by keeping a property on Cat Island in the Bahamas for her vacations instead of selling it for the trust's benefit.

Enter Charles W. Ranson, the accountant. Mr. Ranson calculated the income the trust allegedly lost because Susan held onto the property instead of selling it and investing the proceeds. The Court found Mr. Ranson's assumptions were just plain wrong and rejected his calculations.

But there's more. The Court noted that Mr. Ranson said he had used Microsoft Co-Pilot to double-check his calculations. Like most of us, the Judge had no clue how Co-Pilot really works. Mr. Ranson couldn't explain it either. So the Judge took matters into his own hands, asking Co-Pilot three times to compute the expected return on investment and getting a different answer each time.

It gets better. The Judge then asked Co-Pilot if it was accurate. Co-Pilot replied, “I aim to be accurate within the data I've been trained on and the information I can find for you. That said, my accuracy is only as good as my sources, so for critical matters, it's always wise to verify.”

“Are you reliable?” asked the Judge. Co-Pilot answered, “You bet. When it comes to providing information and engaging in conversation, I do my best to be as reliable as possible. However, I'm also programmed to advise checking with experts for critical issues. Always good to have a second opinion!”

The Judge then asked, “Are your calculations reliable enough for use in court?” Co-Pilot admitted, “When it comes to legal matters, any calculations or data need to meet strict standards. I can provide accurate info, but it should always be verified by experts and accompanied by professional evaluations before being used in court.”

The Court concluded that the AI-assisted calculations didn't meet the Frye standard for admitting scientific evidence and expert testimony. AI may be everywhere these days, but its output isn't necessarily admissible in court yet.

The Case of the Fake References

This tale unfolds in the United States District Court for the District of Minnesota.

Plaintiffs sued to enjoin enforcement of a Minnesota statute forbidding "deepfakes" intended to injure a political candidate or influence an election.

The Attorney General submitted a declaration from Jeff Hancock, Professor of Communication at Stanford University. But the declaration cited two non-existent articles and misattributed a third. Professor Hancock had used GPT-4 to draft his declaration and admitted the mistake. The Attorney General asked to file a corrected declaration, citing "excusable neglect."

The Court wasn't having it. The Judge observed, "The irony. Professor Hancock, an expert on AI and misinformation, fell victim to AI's siren call in a case about AI's dangers."

Hancock explained his error and stood by his conclusion, but the Court wasn't convinced. Submitting a declaration with fake citations, under penalty of perjury, is a no-go.

Even worse, Hancock typically validates citations with reference software when writing academic articles, yet he didn't do so for a declaration filed in a United States District Court.

The Lessons

What can we glean from these stories? Here are three takeaways:

1. AI Makes Mistakes: It doesn't mean to, but it does. Always check for errors. It will even tell you so itself if you ask.

2. Judges Are Skeptical: Don't expect them to accept AI results without scrutiny. Be ready to explain why the results are reliable and to address admissibility under Frye or Daubert.

3. Check AI Citations: AI's mistakes are your mistakes. Not checking may be unforgivable.

AI Says...

Inspired by the wise judge from the first story, let's give Co-Pilot the final word.

I asked, "How can I best check to see if your citations to authorities are accurate?"

Co-Pilot said:

"Checking the accuracy of AI citations is important. Here are some steps:

1. Direct Links: Click on provided links to verify information yourself.

2. Cross-Verification: Compare information with other reputable sources.

3. Use Trusted Databases: Refer to trusted databases and websites like Google Scholar, PubMed, or official sites.

4. Check Citation Details: Ensure the citation includes author, title, publication date, and publisher.

5. Ask for Specifics: If a citation seems vague, ask for more details.

If you have a specific citation to verify, feel free to share, and I can assist you."

There you have it—adventure, irony, and advice. Stay sharp, and may your legal AI encounters be error-free!
