For those who have worked through the slow and steady technological changes in the world of litigation over the past two decades, things are about to take a dramatic turn. With the arrival of the already ubiquitous ChatGPT, merely a forerunner of the large language models to come, the possibilities of how AI will revolutionise commercial dispute resolution are only just becoming possible to conceive.
AI, in the form of relatively rudimentary machine learning algorithms, is already widely used to assist with document review and eDiscovery. Its utility in these often time-consuming and tedious tasks is obvious. The ability to automatically associate apparently unrelated documents, identify patterns and accelerate review through a positive feedback loop has dramatically increased both the progress that can be made and the level of certainty that can be achieved on disclosure projects in a given time.
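By way of illustration only, the sketch below shows the kind of relevance-feedback loop such review tools rely on, using the open-source scikit-learn library. The documents, the seed labels and the reviewer_decision step are hypothetical placeholders, not a real disclosure workflow.

```python
# Minimal sketch of machine-assisted review with a relevance-feedback loop.
# Illustration only: the documents and the review step are invented placeholders.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

documents = [
    "supply agreement penalty clause breach of warranty",
    "lunch booking for the friday team social",
    "notice of termination served under clause 12",
    "newsletter subscription confirmation",
    "email chain discussing late delivery and liquidated damages",
    "invoice for the stationery order",
]
labels = {0: 1, 3: 0}  # seed review decisions: 1 = relevant, 0 = not relevant

vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(documents)


def reviewer_decision(text: str) -> int:
    """Stand-in for a human reviewer's relevance call (hypothetical)."""
    return int("clause" in text or "damages" in text)


for _ in range(3):  # a few rounds of review and retraining
    reviewed = sorted(labels)
    model = LogisticRegression()
    model.fit(X[reviewed], [labels[i] for i in reviewed])

    unreviewed = [i for i in range(len(documents)) if i not in labels]
    if not unreviewed:
        break

    # Surface the unreviewed document the model thinks is most likely relevant
    scores = model.predict_proba(X[unreviewed])[:, 1]
    top = unreviewed[int(np.argmax(scores))]
    labels[top] = reviewer_decision(documents[top])  # the decision feeds the next round

for i, decision in sorted(labels.items()):
    print(decision, documents[i])
```

The point of the loop is simply that each human decision makes the next round of machine suggestions better targeted, which is where the time saving on large disclosure exercises comes from.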
Given the ability of AI algorithms to quickly and accurately sift through vast numbers of documents, it is no surprise that inroads have also been made in recent years in using the technology to review contractual documents for key terms and errors. Again, when given a detailed but relatively narrow task under human supervision, AI has already proven itself capable of accelerating the identification of problems and accordingly streamlining the process of finding solutions whilst avoiding serious misjudgements.
A more innovative area of AI that has begun to advance into litigation and case funding is the development of case review algorithms that use natural language processing and machine learning to analyse data from past cases against a proposed future case, producing broad predictions about its likely outcome based on the proposed submissions and evidence. From a client’s perspective, this may appear to be the perfect lawyer!
Of course, no AI algorithm (or King’s Counsel) can guarantee the outcome of a case, but the use of AI in this way can help lawyers and ultimately their clients to make more informed decisions about how best to plead a case, or at what level to offer settlement.
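As a purely hypothetical illustration of the idea, the sketch below fits a simple model to invented historical case data and produces a probability of success for a proposed claim. None of the features shown comes from any actual product; real tools draw on far richer case data and natural language analysis.

```python
# Hypothetical sketch of outcome prediction from past cases (invented data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each past case is described by three invented features:
# [claim value in £m, documentary evidence strength 0-1, number of witnesses]
past_cases = np.array([
    [0.5, 0.9, 3],
    [2.0, 0.4, 1],
    [1.2, 0.8, 4],
    [5.0, 0.2, 2],
    [0.8, 0.7, 2],
    [3.5, 0.3, 1],
])
outcomes = np.array([1, 0, 1, 0, 1, 0])  # 1 = claimant succeeded, 0 = claimant failed

model = LogisticRegression()
model.fit(past_cases, outcomes)

# A proposed future case expressed in the same (invented) features
proposed_case = np.array([[1.5, 0.75, 3]])
probability = model.predict_proba(proposed_case)[0, 1]
print(f"Estimated probability of success: {probability:.0%}")
```

A figure of this kind is only ever a broad steer, but it is the sort of output that can inform decisions on pleading and settlement levels.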
The common thread between these three main areas of AI implementation in the legal sector is that they are supportive of the roles of lawyers. They accelerate and supplement the work of the lawyer rather than assuming an oversight or tactical role. They don’t remove the discretion of the lawyer, or contradict their advice. This could well be about to change with the advent of the technology behind ChatGPT.
ChatGPT is, in its own words, “a type of artificial intelligence known as a language model… based on a deep learning neural network architecture trained on a vast amount of text data from various sources”. It is ‘trained’ on vast amounts of data, which it uses to effectively predict the correct response to a given question or ‘prompt’. The accuracy and utility (or otherwise) of the response depend to a very significant degree on (a) the precision and framing of the prompt, and (b) the quality of the underlying training data (which at present only goes up to 2021). The system has no access to live data, although interactions with users can feed into future refinements of the model.
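For those curious how such a model is queried programmatically, the sketch below sends a vague prompt and a precisely framed one to a language model, assuming the OpenAI Python client (version 1.0 or later) and an API key in the environment; the model name and the prompts are placeholders for illustration.

```python
# Minimal sketch of prompting a large language model programmatically.
# Assumes the OpenAI Python client (openai >= 1.0) and OPENAI_API_KEY set
# in the environment; the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague_prompt = "Tell me about limitation periods."
framed_prompt = (
    "Acting as an assistant to an English commercial litigator, summarise in "
    "three bullet points the limitation period for a claim in contract under "
    "the Limitation Act 1980, and flag any common exceptions."
)

# The framing of the prompt largely determines the usefulness of the response
for prompt in (vague_prompt, framed_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```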
As anyone who has already toyed with this new breed of AI will see, the technological leap demonstrated by ChatGPT is enormous and will fundamentally alter many, if not most, industries. In the legal sector, there is bound to be a shift from AI as a mere assistant or time-saving tool to AI as a leading voice in determining tactical advice in any given case. At what point does your client choose to follow AI advice over your own?
Indeed, there can be little doubt that there will be a future in which AI systems are deployed in a judicial capacity. One can speculate as to how this may come about, but with proper regulation there could be a role for an AI judge in an AI court of first instance for low-value or high-volume cases, applying a level of consistency that human judges may find difficult to equal. Similarly, use of an AI case assessor as a matter of pre-action conduct could become standard practice in the same way that mediation is today.
A limitation that AI is likely to come up against, at least in the near future, is that AI technologies lack the ability to be truly innovative in producing solutions. Whereas humans can imagine untried, untested and never-before-seen solutions with no apparent prior experience to draw on, at present AI is effectively limited by the sum of its training.
Nevertheless, as AI becomes further embedded in litigation, law firms will need to pivot towards adopting AI in their working practices in order to remain competitive. This will require a significant investment in technology and training, as well as a cultural shift towards embracing new technologies. Firms will need to consider the following:
- Client communication: Clients will be concerned about the use of AI in their legal work. Its use must be communicated so that clients appreciate the benefits AI offers to their case, whilst being reassured that the work remains human-led.
- Ethical concerns: Obvious concerns arise surrounding privacy, bias and accountability. Bias is a particularly difficult factor to identify in AI systems, which often lack the transparency to show how relevant factors have been weighed and evaluated. A ‘show your working’ policy could be adopted so that AI output can be evaluated before it forms part of decision-making. Obvious issues arise under the Equality Act 2010, for example.
- Quality control: The accuracy of the output from AI systems is always likely to be tied to the relevance and accuracy of the prompt. Fee earners will have to learn new skills to maximise the accuracy of their AI outputs and evaluate their utility in any given scenario.
- Training and education: Staff using AI tools need to be trained and educated on how to use them effectively, recognising their limitations and capabilities as well as interpreting their outputs.
- Data security: To obtain meaningful outputs, AI tools will require access to sensitive data and information. Robust measures that align with the firm’s GDPR and other data security obligations must be put in place. The Information Commissioner’s Office (ICO) has published useful guidance on AI and data protection.
- Liability: AI is not infallible. Consideration will need to be given to appropriate insurance cover and whether there can be any recourse against the AI supplier.
- Regulatory compliance: The commercial use of AI is still largely unregulated, although the SRA has published guidance on the use of AI in the legal sector. We can expect significant developments in the coming years as regulators grapple with an exponential increase in the power and use of new AI tools.
There is little doubt that the advances being made in AI at this time are revolutionary. How the legal sector now adapts to these new and rapidly evolving technologies will determine how that revolution pans out. The change will also profoundly affect all of our clients, who will face unfamiliar territory, new disputes and difficulties. Whilst challenges exist for law firms seeking a competitive edge with these new technologies, so do great opportunities to help clients through the risks and uncertainties they will create.
In the words of ChatGPT: “They say AI will revolutionize the legal industry, but let’s be real, we’ll still need lawyers to blame the glitches on”.