Artificial intelligence is rapidly reshaping dispute resolution, and arbitration is no exception. From document review to drafting submissions, AI tools are already embedded in arbitral practice.
In response, the Chartered Institute of Arbitrators (CIArb) has issued its Guideline on the Use of AI in Arbitration (2025), marking a significant step in regulating how AI is used in proceedings.
For businesses, particularly those operating across borders or in tech-driven sectors, understanding these developments is no longer optional – it is strategic.
What is the CIArb AI Guideline?
The CIArb Guideline is a non-binding “soft law” framework designed to help parties, tribunals and practitioners navigate the use of AI in arbitration.
It does not prohibit AI. Instead, it seeks to:
- Enable efficient and responsible use of AI tools
- Protect procedural fairness and due process
- Safeguard the enforceability of arbitral awards
Importantly, the Guideline is expected to influence practice in much the same way as other soft law instruments, such as the IBA Rules on the Taking of Evidence in International Arbitration.
The CIArb framework highlights several critical risks:
- Confidentiality breaches (e.g. uploading documents to unsecured AI tools)
- Algorithmic bias affecting analysis or outcomes
- Enforceability challenges if due process is compromised
- Over-reliance on AI outputs without verification
For businesses, these risks are not theoretical: they can directly affect the validity and enforceability of any arbitral award.
Why this matters now
AI adoption in arbitration is accelerating:
- Used for legal research, document review and data analysis
- Increasingly involved in drafting submissions and case strategy
- Expected to deliver time and cost efficiencies
However, this shift raises fundamental concerns around:
- Confidentiality of sensitive materials
- Bias and reliability of AI outputs
- Due process risks and challenges to awards
The CIArb Guideline is designed to address exactly these tensions.
Key principles of the CIArb AI Guideline
1. AI is permitted but not unregulated
The Guideline recognises the benefits of AI but stresses that its use must be carefully controlled and proportionate.
Parties and tribunals are expected to conduct reasonable due diligence before using AI tools, including understanding how they work and their limitations.
2. Human responsibility remains central
A critical principle is that AI must not replace human judgment.
- Arbitrators cannot delegate decision-making to AI
- They must independently verify outputs
- They remain fully responsible for the award
This is essential to preserving the legitimacy of arbitration.
3. Tribunal control over AI use
The Guideline confirms that tribunals have broad powers to regulate AI use, including:
- Requiring disclosure of AI tools used
- Determining whether AI-generated material is admissible
- Appointing experts to assess AI tools
- Restricting or permitting specific technologies
This reflects arbitration’s core procedural flexibility.
4. Party autonomy, but with limits
Parties remain free to agree:
- Whether AI can be used
- Which tools are permitted
- How AI-generated outputs are treated
However, where parties disagree, the tribunal can intervene and decide.
5. Transparency and disclosure
Transparency is a recurring theme.
The Guideline encourages (and in some cases requires):
- Disclosure of AI use where it impacts proceedings
- Clarity on how AI outputs are generated
- Opportunity for the opposing party to challenge AI-assisted material
This is key to maintaining equality of arms and procedural fairness.
6. Templates for immediate use
A practical feature of the Guideline is the inclusion of:
- Template AI agreements (for parties)
- Template procedural orders (for tribunals)
These can be incorporated directly into arbitration clauses or adopted during proceedings, and we expect to see such provisions appear more frequently in contracts going forward.
Practical implications for businesses
AI is already reshaping arbitration in practical terms. We are seeing a clear shift towards incorporating AI-specific provisions into arbitration clauses and procedural frameworks, together with increased scrutiny of AI-assisted evidence and submissions, which may now be subject to disclosure and challenge.
At the same time, data governance is becoming critical. Businesses must ensure that the use of AI tools does not compromise confidentiality or privilege.
Handled properly, AI offers a genuine strategic advantage, driving efficiency and reducing costs. However, if used without appropriate controls, it can expose parties to procedural challenges and risks that may ultimately undermine the integrity of the arbitration process.
How we can help
At IMD Corporate, we advise clients on complex, cross-border disputes, including those involving emerging technologies.
We can assist with:
- Drafting arbitration clauses addressing AI use and governance
- Advising on AI risks in ongoing disputes
- Strategic positioning in arbitration involving AI-generated evidence
- Ensuring compliance with evolving soft law and institutional guidance
As arbitration continues to evolve alongside AI, early and informed advice is critical.
This article is for general information only and does not constitute legal or professional advice. Please note that the law may have changed since this article was published.