The second most senior judge in England and Wales pondered the potential and pitfalls of AI in a recent speech at the Manchester Law Society.

In a talk entitled AI – Transforming the work of lawyers and judges, Sir Geoffrey Vos insisted:

“…there is nothing scary about AI. It is just a technological tool that has, by the way, been around for years. You use it happily every time you pick up your smart phone. What is scary, as always, is a very small number of ill-intentioned people. Such people might use AI inappropriately if we do not protect ourselves properly, and build in human controls. But that is not really any different to other technological developments that history has produced.”

As Master of the Rolls, Sir Geoffrey is junior only to the Lord Chief Justice, head of the Judiciary of England and Wales.

In his Manchester speech, Sir Geoffrey warned that the use of AI:

“…may rapidly become necessary in order to perform workplace duties.”

Therefore, he continued:

“…it is, in my view, incredibly important that lawyers and judges get to grips with new technologies in general and AI in particular. AI is changing the way things are done in every conceivable sector of the global economy and the legal sector is no exception. The problem is that some lawyers and judges are, even now, hoping that they will be able to retire before they have to, as you might say: “get with the program”.

AI in the courts

AI has been the hottest of technology topics over the last few years, ever since Californian company OpenAI unveiled ChatGPT, its popular but controversial text generating software, in November 2022. Since then, the technology has developed at a very rapid pace and AI tools have already found a home in legal firms. Legal AI tools include:

  • Automated contract revisions
  • TAR – the technology-assisted review of case documents

In December 2023, the Judiciary of England and Wales issued, for the first time, non-binding guidance on the use of AI by judges, who have been slower to adopt the technology than other legal practitioners. The guidance highlighted the usefulness of AI for summarising long texts, and it may also help hard-pressed judges save time when composing emails, case memoranda or presentations. But it goes on to advise that AI not be used for legal research or analysis because it is not sufficiently reliable, nor capable of legally sound reasoning.

Sir Geoffrey Vos explained that the guidance was intended to help “judges at all levels understand what [AI] does, how it does it and what it cannot do.”

Controversies and hallucinations

ChatGPT is a so-called chatbot, because users interact with the software by typing out questions and making requests. It is the exemplar of a form of AI called generative AI, which is, as the name suggests, designed to generate content of various kinds.

Generative AI’s multitude of uses, from retrieving information to writing letters to generating computer code, has attracted a great deal of interest, but this has not always been balanced by a clear understanding of how the software works.

Generative AI works on principles similar to the predictive text technology found in word processing applications. These so-called ‘large language models’, or LLMs, analyse huge bodies of textual data and then use the results of that analysis to predict the likeliest next word in a sequence. This can produce a convincing simulacrum of human-composed text, but there are bear traps for the unwary that go beyond a lack of depth or sound reasoning. Some early adopters of generative AI, more accustomed until then to Google, Bing and other search engines, were tripped up by the curious phenomenon of the ‘hallucination’, in which the AI’s predictive powers run out of control, leading the LLM to confidently present completely fictional information as fact. The potential risks to a legal sector that depends on facts and good judgment are therefore plain to see.
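The idea of predicting the likeliest next word can be illustrated with a deliberately simple sketch. The toy model below counts which word most often follows each word in a tiny made-up corpus and then “predicts” the most frequent continuation; real LLMs use neural networks trained on vastly larger data, but the core principle of choosing a probable next word is the same. The corpus and function names here are purely illustrative.

```python
from collections import Counter, defaultdict

# A tiny, made-up corpus for illustration only.
corpus = (
    "the court heard the case and the court reserved judgment "
    "the judge heard the argument and the judge gave judgment"
).split()

# For each word, count which words follow it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word seen most often after `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # the most common word after "the" in this corpus
```

A model like this happily produces a fluent-looking continuation for any word it has seen, whether or not the result is true of the world, which is a crude analogue of how an LLM can present invented material with complete confidence.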

Responsible use

In his recent speech, Sir Geoffrey highlighted potential pitfalls for lawyers when making use of AI:

“…lawyers and judges must not feed confidential information into public LLMs, because when they do, that information becomes theoretically available to all the world. Some LLMs claim to be confidential, and some can check their work output against accredited databases, but you always need to be absolutely sure that confidentiality is assured.”

In addition, he continued:

“…when you do use a LLM to summarise information or to draft something or for any other purpose, you must check the responses yourself before using them for any purpose. In a few words, you are responsible for your work product, not ChatGPT.”

But, Sir Geoffrey added:

“…none of that means that we should forsake new technologies and the benefits they bring. Many fear, as I have said, that they pose threats to the way things have always been done. And they really do. But the simple fact is that we would not be properly serving either the interests of justice or access to justice if we did not embrace the use of new technologies for the benefit of those we serve.”

Sir Geoffrey’s speech is available to read in full here.