SVAMC Issues Guidelines on Use of AI in Arbitration

In September 2023, I wrote here that SVAMC was leading the way by publishing draft guidelines on the use of AI in arbitration.  The draft guidelines were circulated, discussed, and finalized.  They have now been issued.

 What is SVAMC?

SVAMC is the Silicon Valley Arbitration and Mediation Center.  It is, in many ways, perfectly positioned to undertake the task of formulating guidelines for the use of AI in arbitration.  A large percentage of AI companies are located in Northern California, as is SVAMC, although it has members around the world.

SVAMC does not administer cases. Rather, it collaborates with leading ADR providers, technology companies, law firms, neutrals, and universities to address the merits of arbitration and mediation in resolving technology and technology-related disputes.  

SVAMC also publishes the annual List of the World’s Leading Technology Neutrals known as “The Tech List®.”  The Tech List is a peer-vetted list comprising exceptionally qualified arbitrators and mediators in the US and globally, all having particular experience and skill in the technology sector. Members of the list were extensively involved in the drafting and review of the Guidelines.  

You will want to review the Guidelines and comments in full.  You’ll find them here:

https://svamc.org/wp-content/uploads/SVAMC-AI-Guidelines-First-Edition.pdf

In the meantime, though, here is a summary of the Guidelines.  Each Guideline also includes commentary.  The commentary provides further background and observations useful to arbitrators and counsel dealing with use of AI in arbitration proceedings.

 The Guidelines

Defining AI

AI is ubiquitous.  I’m using it right now as Microsoft Word checks and may even correct my spelling.  Your phone uses plenty of AI to give you directions, correct (or in some cases ruin) your spelling, work your camera, recognize your voice, and the like.  That’s not what the Guidelines are worried about.  Instead, in the Guidelines, “AI” refers to “computer systems that perform tasks commonly associated with human cognition, such as understanding natural language, recognizing complex semantic patterns, and generating human-like outputs.”

SVAMC provides seven Guidelines for use of this type of AI.

Guideline 1: Understanding the uses, limitations, and risks of AI applications

Key risks of AI use include: (1) the “black box problem,” (2) quality and representation of the training data, (3) errors or hallucinations, and (4) augmentation of biases. 

The black box

The “black box problem” arises, as the commentary explains, because AI’s “outputs are a product of infinitely complex probabilistic calculations rather than intelligible ‘reasoning’ . . .  Despite any appearance otherwise, currently available AI tools lack self-awareness or the ability to explain their own algorithms.”  Thus, arbitration participants are encouraged to use “explainable AI” to the extent possible.  “Explainable AI” is “a set of processes and methods that allows human users to comprehend how an AI system arrives at a certain output based on specific inputs.”  Still, the Guidelines recognize that a complete understanding of how any AI system works is likely beyond the technical proficiency of most participants in an arbitration proceeding.

Training data

Even in the brave new world of AI, the old adage of “garbage in, garbage out” still applies. The output of AI is only as good as its inputs.  Participants need to understand what data has been used to train a generative AI tool and perhaps seek a tool trained on a more appropriate data set.

Errors and hallucinations

As the commentary explains, hallucinations arise because “models use mathematical probabilities (derived from linguistic and semantic patterns in their training data) to generate a fluent and coherent response to any question. However, they typically cannot assess the accuracy of the resulting output.”  In other words, as explained in an earlier article, AI-generated material can sound great, but it may be dead wrong.

We are all by now familiar with the cases where lawyers used AI to generate briefs in which the AI simply made up the cases.  Judges didn’t like that, and some imposed sanctions.  The cure is to train on the right data set and check for accuracy.  “Prompt engineering,” that is, carefully formulating the query in a way that will elicit a correct response, can also help with – but not eliminate – this problem.

Historic biases

The training of an AI tool may augment biases.  Historic discrimination may, for example, be carried into searches for individuals to perform important roles in arbitrations, including arbitrators, experts, and counsel.  Users of AI need to be aware of possible bias and be careful, particularly if they don’t know what data the system was trained on or don’t understand its algorithm.

The Guideline

Recognizing the possible problems with use of AI, Guideline 1 requires that: “All participants using AI tools in connection with an arbitration should make reasonable efforts to understand each AI tool’s relevant limitations, biases, and risks and, to the extent possible, mitigate them.”

 Guideline 2: Safeguarding confidentiality

Arbitrators generally have obligations to maintain the confidentiality of arbitration proceedings.  Lawyers generally have confidentiality obligations to their clients.  Protective orders may also be in place requiring that information be kept confidential.  But many AI systems are public and use data submitted to them to train the system for the benefit of other users.  So use of these systems can compromise confidentiality. Other AI systems have been developed to safeguard confidentiality.

Recognizing this, the Guidelines say that participants “should not submit confidential information to any AI tool without appropriate vetting and authorization.”  The commentary advises that, “[b]efore using an AI tool, participants should assess the confidentiality policies, features, and limitations of the tool, engaging technical experts as appropriate.”

Guideline 3: Disclosure

The draft Guidelines provided alternative approaches to disclosure.  The first required disclosure of the use of AI when “(i) the output of an AI tool is to be relied upon in lieu of primary source material, (ii) the use of the AI tool could have a material impact on the proceeding, and (iii) the AI tool is used in a non-obvious and unexpected manner.”

The alternative approach required disclosure whenever AI was used to prepare material documents or when use of AI could have a material impact on the outcome of the proceedings.

A single disclosure Guideline has now been issued after SVAMC received comments on the draft.  It does not require disclosure of every use of AI; instead, it calls for a case-by-case analysis.  It also specifies what needs to be disclosed when disclosure is required.  It reads:

Disclosure that AI tools were used in connection with an arbitration is not necessary as a general matter. Decisions regarding disclosure of the use of AI tools shall be made on a case-by-case basis taking account of the relevant circumstances, including due process and any applicable privilege.  Where appropriate, the following details may help reproduce and evaluate the output of an AI tool: 

1. the name, version, and relevant settings of the tool used;

2. a short description of how the tool was used; and

3. the complete prompt (including any template, additional context, and conversation thread) and associated output.

Guideline 4: Duty of competence or diligence in the use of AI

Of course, counsel must follow all applicable laws or rules on the use of AI.  And they must also be sure that all AI-generated material is accurate.  They are responsible for any uncorrected errors in submissions.

The commentary notes that “[t]he tribunal and opposing counsel may legitimately question a party, witness, or expert as to the extent to which [an] AI tool has been used in the preparation of a submission and the review process applied to ensure the accuracy of the output.”

 Guideline 5: Respect for the integrity of the proceedings and the evidence

This Guideline is short but wide-reaching.  It says:

Parties, party representatives, and experts shall not use any forms of AI in ways that affect the integrity of the arbitration or otherwise disrupt the conduct of the proceedings.

The commentary specifically references the dangers of deepfakes, including the expense and difficulty in detecting them.   

If arbitrators determine this Guideline has been violated, they can take appropriate action, including “striking the evidence from the record (or deeming it inadmissible), deriving adverse inferences, and taking the infringing party representatives’ conduct into account in its allocation of the costs of the arbitration.”

Guideline 6: Non-delegation of decision-making responsibilities

AI can be helpful in gathering and analyzing information, but arbitrators must not delegate actual decision-making to AI.  If they decide to use AI, they must assure its accuracy.  And arbitrators must use their own judgment in making decisions.

Guideline 7: Respect for due process

This Guideline is also directed to arbitrators.  It says:

An arbitrator shall not rely on AI-generated information outside the record without making appropriate disclosures to the parties beforehand and, as far as practical, allowing the parties to comment on it.

Where an AI tool cannot cite sources that can be independently verified, an arbitrator shall not assume that such sources exist or are characterized accurately by the AI tool.

The Guideline promotes transparency through disclosure and reminds arbitrators to critically evaluate information derived from AI to ensure accuracy.

Incorporating the Guidelines in your next arbitration

You may want to adopt SVAMC’s AI Guidelines to govern the use of AI in your arbitration.  The Guidelines include suggested language for doing so.  Here it is:

The Tribunal and the parties agree that the Silicon Valley Arbitration & Mediation Center Guidelines on the Use of Artificial Intelligence in Arbitration (SVAMC AI Guidelines) shall apply as guiding principles to all participants in this arbitration proceeding.

 Summing up

The SVAMC AI Guidelines are well worth reviewing in depth.  They explain the potential dangers of using AI in arbitration.  And they provide guidance on how to avoid those dangers and on participants’ responsibility for doing so.

 

 
