No disputing the power of AI: An update on the use of artificial intelligence in dispute resolution

This article considers the impact of artificial intelligence ("AI"), both predictive and generative, on the legal profession and how it is best utilised in the context of commercial disputes.

Types and uses of AI in litigation

In the early days of AI use in dispute resolution, the focus was on predictive AI, which forecasts future events or outcomes based on historical data. It uses statistical techniques and machine learning algorithms to analyse past data and identify patterns that can be used to make predictions. Its primary goal is to provide insight and foresight to support decision-making.

Of potentially greater significance is the more recent use of generative AI ("GenAI"), which focuses on creating new content or data similar to the existing data on which it has been trained. It uses models that can generate text, images, music and other forms of content, and its primary goal is to produce new, original outputs that are indistinguishable from human-created content.

Also worthy of note is the growth in agentic AI ("Agentic AI"), which refers to AI systems that are able to execute tasks and make decisions with limited or no human input, in order to achieve specific goals. This technology is not yet being used widely in the management of commercial disputes, and it may initially be more suited to volume claims handling tasks. However, Agentic AI is becoming increasingly important in the wider economy, particularly in the fintech sector, and it may yet evolve to provide useful tools in the management of dispute resolution.

Overall, AI is now firmly established in the modern arsenal of dispute resolution management tools and there are myriad ways in which AI can assist litigators, provided it is subject to appropriate oversight.

Legal research: AI can assist in searching for, and summarising, relevant case law and legislation, and considering whether, and how, any specific elements of the factual matrix in a client's case will affect their application.

Litigation aids: AI can assist with the analysis and extraction of information from key documents to create useful litigation aids, such as chronologies, dramatis personae and case summaries.

Litigation strategy: AI can assist with the development of litigation strategy in a number of ways. These include analysing the activities of other players in the relevant market; monitoring and analysing online platforms to gauge public sentiment and the potential for collective action; and analysing the available financials of (i) potential claimants, to determine the likelihood of litigation being pursued to trial, and (ii) potential defendants, to determine the likelihood of recovery, from both a damages and a costs perspective, in the event of success. Perhaps most significantly, it can also inform predictions as to application and trial outcomes based on inputs such as the nature of the case, the jurisdiction, the judge's or master's background, previous decisions made by that judge or master and/or in that court division, and the legal arguments presented. It can also analyse the language used by a judge or master during a hearing or trial to indicate the likely outcome, which in turn can inform settlement negotiations before judgment is handed down.

Document review/disclosure: AI can assist with the review of large volumes of documentation for relevance and privilege, which significantly reduces the time lawyers must spend reviewing those documents. A number of e-disclosure platforms have incorporated technology assisted review ("TAR") functionalities, which can learn from coding decisions made by reviewers to (i) suggest or determine how further documents in the review set should be coded; and/or (ii) push the documents likely to be of most relevance to the front of the review pool. AI can also assist in identifying discrepancies between the content of disclosed documents and factual accounts provided by witnesses. New GenAI functionalities built into review platforms are likely to further speed up the review process, reduce the costs of completing document review exercises, and reduce the risk of missed and inadvertent disclosure.

Relativity, one of the most common document review platforms, has a built-in GenAI tool, Relativity aiR, which has three components:

  • aiR for Review: This is intended to accelerate manual document review by predicting relevance, assigning issues, and identifying key documents. It will likely supplant first-level review in many cases, with results being quality controlled by human reviewers.
  • aiR for Case Strategy: This allows users to ask questions across the entire document pool (as opposed to document-by-document) and is also designed to facilitate timeline creation and assist with preparation of case summaries and witness statements.
  • aiR for Privilege: This assists with identifying privileged documents and preparing privilege logs.

Document organisation and case management: AI can assist with the organisation, categorisation and indexing of documentation provided by clients prior to disclosure, and also (by way of automated e-bundling programmes) with the preparation of bundles for trial and interim hearings.

Drafting: AI can be used to prepare first drafts of case updates, correspondence and other case-related documentation. Certain automated programmes can also produce minutes of meetings and interview transcripts.

Case management and cost budgeting: AI can be used to predict the likely costs of a litigation case based on various inputs including the complexity of the case, the number of parties involved, the expected length of the trial, and the historical costs of similar cases, which informs cost budgeting. It can, additionally, be used to monitor any agreed budget and provide alerts when spending is approaching or exceeding the allocated budget, which in turn can assist legal teams to make informed decisions about resource allocation and avoid cost overruns.

Potential risks

Whilst the applications of AI identified above can support the efficient allocation of legal resources, reduce overall legal costs for clients and increase the pool of data available to inform strategic decision making, it remains a tool which must be utilised appropriately and with due caution. The reality is that while AI has the capability to complete certain tasks reasonably well, those providing the instructions must be suitably trained and the output must be carefully reviewed by a 'human in the loop'.

Lack of integration with reliable legal sources: AI tools are typically not integrated with reliable sources of legal knowledge and subscription-based legal content (for example, legal research providers such as Practical Law and Westlaw), such that their output may not be fully up to date or specific to the relevant jurisdiction. Indeed, many legal research providers and other sources have licences in place which prohibit the use of their materials in AI tools (although these providers do offer some AI tools integrated within their respective research platforms, which subscribers can access). It is also important to note that the output of an AI tool is only likely to be meaningful insofar as the information on which it draws is current.

Use of data and client preferences: Clients may have bespoke terms and conditions in place governing the use (or non-use) of AI tools on their matters. These terms are often concerned with the use of confidential data and, for this reason, many law firms have invested significant resources in creating secure systems intended to avoid the risk of data being accessed by third parties. Even where specific terms are not in place, general duties of confidentiality and data privacy remain. Any practitioners using ChatGPT or similar open resources to analyse confidential client data are exposing themselves and their clients to significant risk.

Human involvement and other approaches to reducing potential risks, including hallucinations: There are a number of well-publicised examples of lawyers using AI to prepare legal submissions which included hallucinations (i.e. outputs that are not wholly grounded in reality or factual data), such as fictitious case references. However, as the Master of the Rolls commented at a recent LawtechUK Generative AI Event, "[we] should not be using silly examples of bad practice as a reason to shun the entirety of a new technology". These risks can be mitigated in a number of ways, and ultimately the most effective safeguard is human review. AI models are, however, increasingly being refined and fine-tuned to follow instructions more reliably and to reduce the risk of hallucinations. Underlying AI technology is also moving towards 'thinking' models which can better plan and execute actions through the use of reliable tools, data and documents, rather than generating output independently. The risk of hallucinations is further reduced where AI tools are used to analyse information and/or documents supplied by the user (known as 'grounding'), and follow-up prompts can also assist in this regard. Careful human review of AI output is still required as a final layer of risk mitigation but, as trust grows and the systems are further refined, there will be scope to reduce the need for human review in future to maximise efficiency. It seems unlikely, however, that some level of oversight could sensibly be removed altogether in the foreseeable future.

Conclusion

The use of AI in litigation proceedings, and dispute resolution more broadly, has moved on significantly from the early days of predictive AI being used for disclosure reviews and generative AI being the subject of dismissive watercooler anecdotes about hallucinations. AI is here to stay and, given the differing levels of implementation across the legal sector, can provide a powerful advantage to well-informed dispute resolution practitioners.

There is, however, no overarching regulation of AI in the UK as yet, and there remain traps for the unwary. Lawyers must exercise caution in how they provide instructions and input data, as well as in reviewing and using the output of AI solutions.

Use of AI in Dispute Resolution at Stephenson Harwood

At Stephenson Harwood, we have extensive experience of using predictive AI and GenAI to support the delivery of our legal services, both in the context of litigation and beyond. In the litigation context, we have found GenAI to be of particular assistance for legal research, preparation of litigation aids, document review/disclosure and drafting. We will also be introducing aiR for Review (which is the only strand of Relativity aiR currently live in the UK) for use on our matters. We utilise SHarper AI and Harvey as general AI assistants and we are evaluating the new Relativity GenAI module. We have implemented a dedicated AI policy, tailored GenAI risk training for lawyers, and also run client workshops to share our experiences of evaluating, implementing and managing GenAI. Additionally, we have the capability to build custom GenAI solutions tailored to solve specific issues.
