LONDON (AP) — England’s 1,000-year-old legal system — still steeped in traditions that include wearing wigs and robes — has taken a cautious step into the future by giving judges permission to use artificial intelligence to help produce rulings.
The Courts and Tribunals Judiciary last month said AI could help write opinions but stressed it shouldn’t be used for research or legal analyses because the technology can fabricate material that is misleading, inaccurate and biased.
“Judges do not need to shun the careful use of AI,” said Master of the Rolls Geoffrey Vos, the second-highest ranking judge in England and Wales. “But they must ensure that they protect confidence and take full personal responsibility for everything they produce.”
At a time when scholars and legal experts are pondering a future when AI could replace lawyers, help select jurors or even decide cases, the approach spelled out Dec. 11 by the judiciary is restrained. But for a profession slow to embrace technological change, it's a proactive step as government and industry — and society in general — react to a rapidly advancing technology alternately portrayed as a panacea and a menace.
“There’s a vigorous public debate right now about whether and how to regulate artificial intelligence,” said Ryan Abbott, a law professor at the University of Surrey and author of “The Reasonable Robot: Artificial Intelligence and the Law.”
“AI and the judiciary is something people are uniquely concerned about, and it’s somewhere where we are particularly cautious about keeping humans in the loop,” he said. “So I do think AI may be slower disrupting judicial activity than it is in other areas and we’ll proceed more cautiously there.”
Abbott and other legal experts applauded the judiciary for addressing the latest iterations of AI and said the guidance would be widely viewed by courts and jurists around the world who are eager to use AI or anxious about what it might bring.
In taking what was described as an initial step, England and Wales moved toward the forefront of courts addressing AI, though it's not the first such guidance.
Five years ago, the European Commission for the Efficiency of Justice of the Council of Europe issued an ethical charter on the use of AI in court systems. While that document is not up to date with the latest technology, it did address core principles such as accountability and risk mitigation that judges should abide by, said Giulia Gentile, a lecturer at Essex Law School who studies the use of AI in legal and justice systems.
Although U.S. Supreme Court Chief Justice John Roberts addressed the pros and cons of artificial intelligence in his annual report, the federal court system in America has not yet established guidance on AI, and state and county courts are too fragmented for a universal approach. But individual courts and judges at the federal and local levels have set their own rules, said Cary Coglianese, a law professor at the University of Pennsylvania.
The guidance shows the courts’ acceptance of the technology, but not a full embrace, Gentile said. She was critical of a section that said judges don't have to disclose their use of the technology and questioned why there was no accountability mechanism.
“I think that this is certainly a useful document, but it will be very interesting to see how this could be enforced,” Gentile said. “There is no specific indication of how this document would work in practice. Who will oversee compliance with this document? What are the sanctions? Or maybe there are no sanctions. If there are no sanctions, then what can we do about this?”
In its effort to maintain the courts’ integrity while moving forward, the guidance is rife with warnings about the limitations of the technology and possible problems if a user is unaware of how it works.
At the top of the list is an admonition about chatbots, such as ChatGPT, the conversational tool that exploded into public view last year and has generated the most buzz over the technology because of its ability to swiftly compose everything from term papers to songs to marketing materials.
The pitfalls of the technology in court are already infamous after two New York lawyers relied on ChatGPT to write a legal brief that quoted fictional cases. The two were fined by an angry judge who called the work they had signed off on “legal gibberish.”
Because chatbots have the ability to remember questions they are asked and retain other information they are provided, judges in England and Wales were told not to disclose anything private or confidential.
“Do not enter any information into a public AI chatbot that is not already in the public domain,” the guidance said. “Any information that you input into a public AI chatbot should be seen as being published to all the world.”
Other warnings include being aware that much of the legal material that AI systems have been trained on comes from the internet and is often based largely on U.S. law.
But jurists who have large caseloads and routinely write decisions dozens — even hundreds — of pages long can use AI as a secondary tool, particularly when writing background material or summarizing information they already know, the courts said.
In addition to using the technology for emails or presentations, judges were told they could use it to quickly locate material they are familiar with but don’t have within reach. But it shouldn’t be used to find new information that can’t be independently verified, and it is not yet capable of providing convincing analysis or reasoning, the courts said.
Appeals Court Justice Colin Birss recently praised how ChatGPT helped him write a paragraph in a ruling in an area of law he knew well.
“I asked ChatGPT can you give me a summary of this area of law, and it gave me a paragraph,” he told The Law Society. “I know what the answer is because I was about to write a paragraph that said that, but it did it for me and I put it in my judgment. It’s there and it’s jolly useful.”