An Arbitrator Gets Caught Delegating Decision-making to AI
We Suspected a Case Like This Would Be Coming
The arbitration world has been waiting for this case for some time. Of course, no one — particularly arbitrators — wanted it to happen. But it seemed inevitable that a court was going to decide what to do when an arbitrator delegated decision-making to AI.
The Quebec Superior Court has now answered that question in ARIHQ c. Santé Québec, 2026 QCCS 1360. [1] The answer is not that AI can never be used in arbitration. The court recognized that technology may aid arbitrators in performing their work. The problem arose because the award rested on hallucinated authorities: cases that did not exist and citations that either did not exist or had nothing to do with the propositions for which they were cited.
The court concluded that this proved that AI didn’t just help inform or draft the decision. AI made the decision. That didn’t fly.
The Law Assumes There Is an Actual Arbitrator
Courts are typically deferential to arbitrators. The grounds for vacating or annulling their awards are narrow. But a key part of that deference assumes that the arbitrator selected by the parties actually made the decision. In ARIHQ, the Quebec court found that the preponderance of evidence led to the conclusion that the “Arbitrator’s authority was delegated and that he abdicated his role to review the result.” This was so, it said, because the “doctrinal and jurisprudential references on which the Arbitrator relied were non-existent and hallucinated.”
So, the problem was not merely that the award contained mistakes. Courts see mistakes all the time. The problem was that the nature of the mistakes suggested the arbitrator had not done the analysis and had not actually made the decision.
The Court Did Not Ban AI
Let’s take a minute to consider what the court did not do.
The court did not hold that AI use automatically invalidates an arbitral award. It did not prohibit arbitrators from using AI tools to summarize evidence, organize exhibits, assist with drafting, or streamline research. Modern legal practice would grind to a halt if courts tried to prohibit all use of technology.
Instead, it applied the unremarkable principle that assistance is permissible, but delegation of judgment is not. In the court’s view, the fictitious citations showed a breakdown in the exercise of the arbitrator’s personal decisional responsibility.
That transformed hallucinations from an embarrassing research mistake into evidence bearing directly on the integrity of the arbitral process itself.
The Hallucinations Proved Delegation
Lawyers have spent the last two years discussing hallucinations primarily as a problem of accuracy. But here there was more. The hallucinations became evidence that the arbitrator stopped independently evaluating legal authorities and instead accepted machine-generated output without verification. That evidence called the legitimacy of the award itself into question.
The SVAMC Guidelines Saw This Coming
It’s not as though no one saw this coming. As I have mentioned in earlier articles here, the Silicon Valley Arbitration & Mediation Center (“SVAMC”) issued its Guidelines on the Use of Artificial Intelligence in Arbitration in 2024. Guideline 6 anticipated this problem:
“An arbitrator shall not delegate any part of their personal mandate to any AI tool. This principle shall particularly apply to the arbitrator’s decision-making process. The use of AI tools by arbitrators shall not replace their independent analysis of the facts, the law, and the evidence.”
SVAMC’s commentary further explains that arbitrators remain responsible for human judgment, discretion, responsibility, and accountability. The Guidelines specifically warn arbitrators to verify AI-generated authorities and not assume citations are accurate merely because an AI system generated them.
The Guidelines do not suggest that use of AI is itself improper. In fact, they recognize that AI may improve efficiency and assist with case management, drafting, organization, and research. But they emphasize the same principle the Quebec court enforced: the arbitrator must not delegate decision-making to AI.
Why American Arbitrators and Lawyers Should Pay Attention
The Federal Arbitration Act, of course, says nothing about artificial intelligence. Congress did not have large language models in mind in 1925. And U.S. courts don’t generally regard decisions by Canadian courts as authoritative.
But the reasoning of the Quebec court may well apply to U.S. arbitration, even under the narrow grounds for vacating awards found in the Federal Arbitration Act. One likely candidate is section 10(a)(3), which reaches arbitrators “guilty of misbehavior by which the rights of any party have been prejudiced.” Failing to actually make a decision seems to fall in that category.
Section 10(a)(4) might also apply: “the arbitrators exceeded their powers.” The parties give arbitrators the power to make decisions, not to delegate them to AI, especially when the AI hallucinates and makes significant mistakes.
But will courts inevitably regard hallucinations and other mistakes showing use of AI as proof that the decision itself was made by AI? Probably not; it will depend on the evidence. In this case, the evidence went beyond a few hallucinated citations: review of the decision showed that the arbitrator relied on the nonexistent cases in reaching his conclusions.
In future cases, parties may turn to programs and systems designed to detect AI-generated writing, though so far their accuracy is often questioned. And the trickier task seems to be proving not only that a decision shows use of AI (by, for example, containing fake citations) but that AI actually made it.
Time will tell.
Where This Leaves Us
For several years, the discussion around AI in arbitration has been theoretical. Arbitrators were advised to be careful. Institutions issued guidance. Lawyers warned about hallucinations. Now there is a published case vacating an award where the court concluded the arbitrator crossed the line from assistance into delegation.
Still, the grounds for vacating an arbitration award under the FAA are narrow. But if courts find that delegation to AI does provide such a ground, that will move the guidelines from “should not” delegate decision-making to AI to “must not.”
At the moment, that seems inevitable. And right. But proof of actual delegation, as opposed to merely sloppy use of AI, may be tricky. The best hope is that a few decisions like the Quebec court’s will deter use of AI as a decision-maker rather than as an assistant.
_______________________________________________________________
[1] Full disclosure: The decision is in French. I used AI to translate it. It didn’t decide anything except the translation. I also reviewed a number of reports on the case written in English.