AI at the Hearing: Keeping It Real

Whether you're on the bench or at the end of the conference room table, you’ll likely encounter evidence shaped, or even wholly fabricated, by AI.

The National Center for State Courts (NCSC) has issued two Bench Cards to aid judges in dealing with AI-generated evidence. One is for acknowledged AI-generated evidence. The other is for unacknowledged or allegedly AI-altered content.

These cards can also help arbitrators sort these issues out. And it’s useful for advocates to see how decision-makers may evaluate their AI evidence.

Bench Card #1: Evaluating Acknowledged AI-Generated Evidence

This first card addresses evidence where the AI involvement is acknowledged. Maybe a party has cleaned up a blurry video, modeled a 3D crash simulation, or used machine learning to analyze thousands of data points. It’s not fake, but it has been shaped by algorithms. That raises some familiar evidentiary questions, plus a few new ones.

The core principle is to treat it like any other technical evidence. Here is what to look for:
- Clear labeling: If it’s AI-assisted, the trier of fact should always know it.
- Common uses: Medical visuals, expert simulations, accident reconstructions, IP comparisons, and even AI-enhanced audio and surveillance footage are often AI-generated.
- Scientific grounding: Is the AI tool reliable, tested, and accepted in the relevant field?
- Transparency required: Courts (and arbitrators) can demand a step-by-step log of what the AI did and preserve the original materials.
- Chain of custody still matters: Enhanced doesn’t mean exempt from scrutiny.

In short, if it’s been AI-polished, determine whether it is clear, reliable, and helpful, or just eye-catching but misleading.

Bench Card #2: Evaluating Unacknowledged AI-Generated Evidence

This second card helps with a trickier situation: one party offers evidence and the other says, "Objection, that looks AI-generated!"

Here, the focus shifts from admitted AI use to possible manipulation, fakery, or fabrication. That could include deepfakes, voice clones, altered screenshots, etc. No judge or arbitrator wants to allow false digital evidence that looks real. Here is a checklist of things to consider:

1. Source: Be especially skeptical if it arrived via an anonymous upload or obscure link.

2. Access: Who’s had their hands on it? Multiple handlers mean more risk.

3. Preservation: Was the original format saved? If not, be wary. Consider appointing a neutral forensic expert.

4. Chain of Custody: Can they walk you through every step of where it’s been? Gaps invite questions.

5. Alterations: Has the evidence been converted, compressed, cropped, or cleaned up? If yes, why and how? Forensic analysis may help spot the use of AI, and analyzing metadata can surface issues here. If a party cannot provide metadata, that’s a red flag.

In short, if it seems off, take a harder look. The more seamless and sophisticated the AI gets, the more careful decision-makers have to be.
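The preservation and chain-of-custody items above have a common technical companion: cryptographic hashing. The sketch below is a minimal illustration, not part of the NCSC bench cards, and the file contents are hypothetical. It shows how a SHA-256 "fingerprint" taken when the original is preserved can later confirm that an offered exhibit is byte-for-byte identical to what was collected.

```python
import hashlib

def sha256_fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes.
    Any alteration (conversion, compression, cropping) changes the digest."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical example: hash the original at intake, re-hash at the hearing.
original = b"original surveillance clip bytes"
fingerprint_at_intake = sha256_fingerprint(original)

# Later, an offered exhibit is compared against the preserved fingerprint.
offered = b"original surveillance clip bytes"   # unchanged copy
altered = b"original surveillance clip bytes!"  # one byte appended

print(sha256_fingerprint(offered) == fingerprint_at_intake)  # True
print(sha256_fingerprint(altered) == fingerprint_at_intake)  # False
```

A matching digest shows only that the bytes are unchanged since intake; it does not, by itself, prove the original was authentic. That is what the remaining checklist items address.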

Takeaway for Arbitrators and Advocates

While these bench cards are directed at judges, arbitrators face the same core challenge: ensuring that what’s presented as “evidence” is actually trustworthy. The framework offered by the bench cards—structured inquiry, emphasis on reliability and disclosure, and a strong chain-of-custody mindset—translates well to arbitration. Evidentiary rules may be looser in arbitration, but the need for fairness and integrity is just as vital.

Advocates should also scrutinize evidence before offering it. This guidance will help you do that before an arbitrator does.

So next time someone presents a polished exhibit or a mysterious PDF, try these checklists to help ask the right questions. Authenticity can’t be assumed in the age of AI.

You can find the bench cards at https://www.ncsc.org/resources-courts/ai-generated-evidence-guide-judges
