AI is already being used in courtrooms around the world and may help judicial systems work better, according to Professor Brian Barry, Associate Professor of Intellectual Property Law at Trinity’s School of Law.

A key question guiding his research is whether, in this high-stakes environment, people will trust and accept AI as a tool in the administration of justice.

How can AI help?

“It’s fascinating to see how these systems are being rolled out. It’s used for sentencing decisions in the US, Taiwan and Malaysia, and it’s being used to determine which cases get heard on appeal in the Colombian Constitutional Court”, Professor Barry explains, adding that a court in Germany has recently started using AI to prepare template text for judgements in aviation passenger claims.

Professor Barry, who has experience as a practising solicitor, says that from an individual’s perspective navigating the justice system is complicated, expensive and resource intensive, and that’s before your case is heard. Where he sees the most obvious use for AI is in how information is provided to potential litigants from the get-go, with a more bespoke or tailored approach.


“If we think about it from the administrative point of view, there’s providing information to litigants, there’s case management, scheduling, pre-trial matters, document exchange that AI-enhanced technology could assist with… which sounds incredibly boring but the reality of it is that this is where efficiencies and improvements can be made.”

However, when you move into the decision-making space, things become “extremely interesting from the perspective of procedural justice”, Professor Barry states.

He argues that AI could be used in a limited capacity in assisting judges and testing the robustness of their decisions before they are handed down, a sort of “devil’s advocate”. Crucially, this would not replace judges’ own reasoning and decision-making independence. 

How do judges decide?

The author of How Judges Judge: Empirical Insights into Judicial Decision-Making (Routledge, 2021) argues that there are several “non-legal factors that affect how judges decide cases or perform their role”. The psychological and behavioural elements of judicial decision-making range from “cognitive influences” to “biases” and “groupthink” (the subject of his 2023 article ‘Judging Better Together: Understanding the Psychology of Group Decision-Making on Panel Courts and Tribunals’). But Professor Barry has become increasingly “fascinated” with the technological factors in judges’ decisions, and particularly “how that can assist or maybe hinder how judges do what they do.”

In Ireland, judicial decisions in a number of recent high-profile legal cases have been the subject of extensive media coverage. But for Professor Barry, it’s in the lower tiers of judicial systems where he sees the greatest potential. “Judges are understandably under so much pressure… they have to churn out judgements or decisions extremely quickly with limited resources and thinking time available to them. Those are the very situations where errors can systematically occur. If we don’t have enough time to think something through, we rely on intuition and instinct that bit too much, and that’s when subconscious biases can arise; that’s when cognitive errors can manifest.”

“Finding ways to facilitate judges to think more deeply about the real issues in a case can make the difference between good and poorer quality decision making”, he adds.

What are the risks?

“Although there’s potential for efficiencies, increasing access to justice, maybe even reducing discrimination, there are enormous risks associated with deploying AI”, Professor Barry adds.

AI also displays bias, and there are issues with “how AI is designed in terms of how the data is used and how the algorithms operate within those systems.” Overall, because “AI is inherently opaque”, its use could affect the right to a fair trial and risk eroding judicial independence, Professor Barry warns.

As part of his research, Professor Barry wants to use an empirical approach to find out whether people trust and accept the use of AI in the courtroom, “to establish when people—judges and lawyers within the system but also litigants—may be prepared to use AI systems in a limited way for particular decision-making tasks.”

He notes that the use of AI in the administration of justice is also dependent on buy-in from the judiciary and the public to ensure the legitimacy and integrity of the justice system.

Judges do not think their roles will be replaced by AI, according to an Irish study on the use of technology in the courtroom led by Professor Barry, but there is concern about “a blurring of lines” in terms of where and how it is currently being used by litigants and their representatives in cases.

“Lawyers are increasingly turning to these tools—both bespoke custom-built systems within their own law firms but also the more general-purpose tools that are freely available. Increasingly we see regulators of the professions around the globe issuing guidelines on the proper and appropriate use of this technology.” All of this taken together, he says, “represents a paradigm shift in how justice is done”.