Submit your case to an AI judge

In a 2013 legal case in the United States, a man named Paul Zilly was convicted of stealing a lawn mower.

Initially, he agreed to a plea bargain of a year in prison followed by a supervision order. But an AI tool rated him as being at high risk of reoffending, and his sentence was extended to two years.

In 2016, the nonprofit investigative website ProPublica analyzed the cases of nearly 10,000 criminal defendants in Florida. It found that African-American defendants were more likely than white defendants to be incorrectly flagged as high risk by the program, suggesting that if Zilly had been white (the program did not explicitly record his race), the original sentence might have been allowed to stand.

This case is one of the examples featured in a study on the use of artificial intelligence in the legal system released this month by the Australian Institute of Judicial Administration (AIJA), UNSW Law &amp; Justice, the UNSW Allens Hub for Technology, Law and Innovation, and the Law Society of New South Wales’ Future of Law and Innovation in the Profession (FLIP) research stream.

The report – AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators – outlines examples of the use of artificial intelligence in Australia and beyond, ranging from computer-based dispute resolution software to “AI judges” built directly on rules-driven computer code and deployed to help clear case backlogs.

In the case of the US program, called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), the tool aims to bolster the judicial process by assessing the risk that an offender will break the law again.

COMPAS incorporates the answers to a 137-question survey, ranging from the directly relevant (“How many times has this person been arrested before as an adult or juvenile?”) to the more general (“Do you get frustrated sometimes?”).

The code and the processes underlying COMPAS are confidential and unknown to the prosecution, the defense or the judge, but they can have real consequences, as the Zilly case illustrates.

Backlog Filter

The Estonian Ministry of Justice says it will seek to clear a backlog of cases using 100 so-called “AI judges,” the aim being to give human judges more time to deal with more complex disputes.

Reportedly, the project will adjudicate small claims disputes of less than 7,000 euros. In concept, both parties will upload documents and other relevant information, and the AI system will issue a decision that can be appealed to a human judge.
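The workflow described above can be pictured as a simple triage: route a claim by value, and keep an appeal path to a human judge. The sketch below is a toy illustration only; the names, data model and logic are invented assumptions, not the actual Estonian system, whose design has not been published in detail.

```python
# Toy sketch of the small-claims triage described for the Estonian
# pilot: claims under EUR 7,000 go to the automated track, and an
# automated decision can always be appealed to a human judge.
# All names and logic here are illustrative assumptions.

from dataclasses import dataclass, field

SMALL_CLAIMS_LIMIT_EUR = 7_000

@dataclass
class Claim:
    amount_eur: float
    documents: list = field(default_factory=list)  # filings from both parties

def route(claim: Claim) -> str:
    """Route the claim: automated track for small claims, else human judge."""
    if claim.amount_eur < SMALL_CLAIMS_LIMIT_EUR:
        return "ai_judge"
    return "human_judge"

def appeal(decision_track: str) -> str:
    """An AI decision is always appealable to a human judge."""
    return "human_judge" if decision_track == "ai_judge" else decision_track

claim = Claim(amount_eur=4_500, documents=["contract.pdf", "invoice.pdf"])
track = route(claim)
print(track)          # ai_judge
print(appeal(track))  # human_judge
```

The key design point the report emphasizes survives even in this toy form: the automated track is bounded by a monetary threshold, and the human judge remains the final arbiter.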

One of the report’s authors, Professor Lyria Bennett Moses from the University of New South Wales, said:


“Artificial intelligence in courts extends from administrative matters, such as automated electronic filing, to the use of data-based inferences about specific defendants in the context of sentencing. Judges, tribunal members and court administrators need to understand the technology well enough to be in a position to ask the right questions about the use of artificial intelligence systems.”

Professor Bennett Moses suggested that the use of some AI tools “contradicts important legal values”.

“There are tools, frequently deployed in the United States, that ‘score’ defendants on how likely they are to re-offend. This is based not on an individual psychological profile but on data analysis: if people like you have re-offended in the past, then you will be ranked as more likely to return to criminality.”

The variables used in this analysis include matters such as whether the defendant’s parents are separated (and, if so, the individual’s age when this occurred) – the kinds of things that may be statistically associated with offending but are beyond the individual’s control. The tool has also been shown to be biased (on some measures of fairness) against certain ethnic groups.
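The mechanism Professor Bennett Moses describes is essentially actuarial: questionnaire answers become numeric features, a score is computed from weights fitted to historical outcomes, and the score is bucketed into risk bands. The sketch below is a toy illustration of that idea only; the features, weights and thresholds are invented, since the real COMPAS model is proprietary and its internals are not public.

```python
# Toy illustration of an actuarial risk score: a weighted sum of
# questionnaire-derived features, thresholded into risk bands.
# Features, weights and thresholds are invented for illustration;
# they are NOT the actual COMPAS model, which is confidential.

TOY_WEIGHTS = {
    "prior_arrests": 0.30,         # count of prior arrests
    "age_at_first_arrest": -0.05,  # younger first arrest raises the score
    "parents_separated": 0.40,     # 1 if parents separated, else 0
    "gets_frustrated": 0.20,       # 1 if answered "yes", else 0
}

def risk_score(answers: dict) -> float:
    """Linear score over the toy features (higher = 'riskier')."""
    return sum(TOY_WEIGHTS[k] * answers.get(k, 0) for k in TOY_WEIGHTS)

def risk_band(score: float) -> str:
    """Bucket the raw score into the low/medium/high bands a court sees."""
    if score >= 2.0:
        return "high"
    if score >= 1.0:
        return "medium"
    return "low"

defendant = {
    "prior_arrests": 3,
    "age_at_first_arrest": 17,
    "parents_separated": 1,  # beyond the individual's control,
    "gets_frustrated": 1,    # yet it still moves the score
}
print(risk_band(risk_score(defendant)))
```

Note that `parents_separated` shifts the score even though, as the report points out, it is entirely outside the defendant’s control: the score reflects statistical association with past cohorts, not individual culpability.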

Breaking down language barriers

Not all applications of AI in the legal system are harmful.

Professor Bennett Moses said language barriers were one of the main areas where AI could be of immense value.

One practical and uncontroversial example is the use of natural language processing to convert what judges, witnesses and lawyers say in court into text.

This can make court transcripts faster and easier to access, especially for those who are hard of hearing. In China, some trials are transcribed “in real time” into both Mandarin and English text.

Professor Bennett Moses said: “I have always believed that interesting legal questions lie in technological frontiers, whether it is related to artificial intelligence or other new contexts that the law is required to respond to.”

“My main advice is to tread carefully, and seek to understand how things work before drawing conclusions about what the law should do about it. But we need people to ask the right questions and help society answer them.”

Lachlan Colquhoun is the Australia and New Zealand Correspondent for CDOTrends and Editor for NextGenConnectivity. He remains fascinated by the way companies are reinventing themselves through digital technology to solve current problems and completely change their business models. You can reach him at [email protected].

Photo credit: iStockphoto / style-Photography
