In 2015, the European Court of Human Rights, which rules on alleged violations of the European Convention on Human Rights by Council of Europe member states, received twice as many complaints as it did the year before. Many of these were dismissed either because they were improperly completed, covered ground the court had already ruled on, or simply didn't hold water. Just 15 percent of all applications received a judgment from the court in 2015.

In short, human rights complaints filed with the court are on an upward trajectory in Europe, and the court has to sift through thousands of applications to find the few that are worth its time.
Now, a team of researchers from University College London (UCL) has devised an algorithm that can predict whether or not a human rights complaint is legitimate, with 79 percent accuracy. This technology, the researchers say, could automate part of the human rights pipeline by analyzing applications and prioritizing them for the court's human judges.

"It's important to give priority to cases where there was likely a violation of a person's human rights," said Nikos Aletras, a UCL computer scientist and co-author of a paper describing the work, published on Sunday in PeerJ Computer Science, in an interview.

"The court has a huge queue of cases that have not been processed, and it's quite easy to say if some of them have a high probability of violation, and others have a low probability of violation," added Vasileios Lampos, Aletras' colleague and also a co-author of the paper. "If a tool could discriminate between the classes and prioritize the cases with a high probability, then those people will get justice sooner."

The approach used by the team is fairly simple, as far as the quickly advancing field of machine learning goes. They first trained a natural language processing model on a database of court decisions, each of which contains the facts of the case, the circumstances surrounding it, the applicable laws, and details about the applicant such as country of origin. This way, the program "learned" which of these aspects is most likely to correlate with a particular ruling.
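To make the idea concrete, here is a minimal sketch of learning word-to-ruling correlations from labeled decision text. This is not the UCL team's actual system: their features and model were more sophisticated, and the toy "cases", the Naive Bayes approach, and all names below are my own assumptions for illustration.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(cases):
    """cases: list of (decision_text, label), label in {'violation', 'no-violation'}."""
    word_counts = defaultdict(Counter)  # per-label word frequencies
    label_counts = Counter()            # how often each ruling appears
    for text, label in cases:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def predict(model, text):
    """Pick the ruling whose word statistics best explain the text (Naive Bayes)."""
    word_counts, label_counts = model
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihood of each word, with add-one smoothing
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented fragments standing in for sections of real decisions.
training = [
    ("applicant detained without trial for months", "violation"),
    ("prolonged detention no access to lawyer", "violation"),
    ("complaint examined promptly fair public hearing", "no-violation"),
    ("applicant received fair trial within reasonable time", "no-violation"),
]
model = train(training)
print(predict(model, "applicant held in detention without a lawyer"))  # → violation
```

The point of the sketch is only that phrases which co-occur with past rulings push an unseen case toward the same outcome; the published system scored whole sections of decision text rather than individual words.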
Next, the team fed the program human rights court decisions it had never seen before and asked it to guess the judges' rulings, based on the constituent parts of each decision. As it turns out, almost every section—from details about the applicant to the bare facts of the complaint—predicted the outcome with a similar accuracy of around 73 percent. When the AI looked at the court's run-down of the circumstances surrounding a case, however, that accuracy jumped to 76 percent.

This is important, according to the researchers, because it indicates that judges' rulings are more closely tied to the circumstances of individual cases than to bare facts or dry laws. This, they say, is why humans are still needed to make nuanced decisions about other humans' lives, and why computers can't yet be trusted to do the same.

"It's the same thing as replacing teachers or doctors; it's impossible right now," said Lampos. "Laws are not structured well enough for a machine to make a decision. I think that judges don't follow a specific set of rules when making a decision, and I say that as a citizen and a computer scientist. Different courts have different interpretations of the same laws, and this happens every day."

Aletras and Lampos admit that a good deal of their work's future promise, in terms of filtering applications, relies on how well court decisions reflect applications in their original state. This is impossible to know at the moment, they said, but they assume that courts have an interest in presenting the facts and circumstances of cases as neutrally as possible, making court decisions a "good proxy" for applications.

The next steps, they said, are trying out different types of machine learning on the same problem to see if the accuracy can climb even higher, and gaining access to the court's original applications.

If all goes well, human rights might one day be upheld in part by machines.
"Laws are not structured well enough for a machine to make a decision"