Artificial intelligence robots have already acted as lawyers, helping 160,000 people escape parking-ticket fines. So it was only a matter of time before robots would also step into the role of judge.

As revealed in a study published Monday in the journal PeerJ Computer Science, a team of British and American researchers used an A.I. system to predict the outcomes of human rights trials. Across the 584 cases it analyzed, the system reached the same conclusion as the judges 79 percent of the time.

The cases dealt with Articles 3, 6, and 8 of the European Convention on Human Rights, which cover the prohibition of torture, the right to a fair trial, and the right to respect for private life, respectively. The texts of judgments from the European Court of Human Rights are neatly broken down into several sections, with one containing the facts of the case, another containing summaries of the parties' arguments, and so on, which makes them well suited to machine analysis.

The A.I. system used machine learning to study each case and compare it with previous results, analyzing topics and case details, plus words and phrases commonly associated with certain outcomes. For example, "injury," "overcrowding," "beaten," and "ill" correlated strongly with findings of a violation in torture cases. Words that correlated with findings of no violation included "asked," "release," "constitutional," and "responsible."
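To make that concrete, here is a minimal sketch of that kind of text classifier in Python using scikit-learn. This is not the study's actual code: it is a bag-of-words model over case text with a linear classifier, whose learned weights show which terms lean toward each outcome. The toy case snippets and labels below are invented for illustration, echoing the words the study flagged.

```python
# A hedged sketch: bag-of-words features plus a linear classifier over
# hypothetical case text. Labels: 1 = violation found, 0 = no violation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

texts = [
    "applicant was beaten and suffered injury in an overcrowded cell",
    "detainee reported overcrowding and was denied care while ill",
    "applicant asked for release and the constitutional court held the state responsible",
    "the authorities acted lawfully and the applicant was released when he asked",
]
labels = [1, 1, 0, 0]

# Count single words and two-word phrases in each case text.
vectorizer = CountVectorizer(ngram_range=(1, 2), lowercase=True)
X = vectorizer.fit_transform(texts)

clf = LinearSVC()
clf.fit(X, labels)

# Inspect which terms the model associates with each outcome: large
# positive weights lean toward "violation", large negative toward
# "no violation".
terms = vectorizer.get_feature_names_out()
ranked = sorted(zip(clf.coef_[0], terms))
print("leans no-violation:", [t for _, t in ranked[:3]])
print("leans violation:   ", [t for _, t in ranked[-3:]])

# Predict the outcome of a new, unseen case description.
new_case = ["applicant was ill and beaten in detention"]
print(clf.predict(vectorizer.transform(new_case)))  # e.g. [1]
```

With a handful of toy sentences the weights are meaningless, but the same pipeline scaled to hundreds of full judgments is the general shape of the approach the researchers describe.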

In the end, the system's predictions matched the court's actual judgments in nearly four out of five cases. Unfortunately, the study didn't include a control group of human analysts for comparison, so it's hard to know whether the system is any more accurate than an average human would be.

So, should judges fear for their jobs? Not exactly, the authors say. At least, not yet. Instead, they see A.I. as a "useful assisting tool" for the court, one that could help prioritize the cases most likely to end in a violation finding. "This may improve the significant delay imposed by the court and encourage more applications by individuals who may have been discouraged by the expected time delays," the authors wrote.

At this point, human intervention is still necessary even when using the A.I. system. Before each case can be fed to the computer, its text must be converted entirely to lower case, and common words with little significance, like "the," "a," and "of," must be removed. There are also obvious limitations to using technology to judge cases involving humans, such as unprecedented scenarios the system has never seen, or subtle details it might miss.
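As a rough illustration of that preprocessing step (a sketch, not the study's actual pipeline), the snippet below lowercases a judgment's text and strips a small, purely illustrative set of low-significance "stop words":

```python
# Hedged sketch of the preprocessing described above. The stop-word set
# here is a tiny illustrative subset, not the list used in the study.
STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in"}

def preprocess(text: str) -> list[str]:
    """Lowercase the text and drop stop words, returning the remaining tokens."""
    tokens = text.lower().split()
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("The Applicant complained of a violation of Article 3"))
# ['applicant', 'complained', 'violation', 'article', '3']
```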

Some lawyers already use software that can scan cases for certain concepts and determine which documents or previous cases might be relevant.

The study's authors say this is the first time case outcomes of a major court have been predicted by machines using only text.

Published on: Oct 25, 2016