
A Judge Just Used ChatGPT to Make a Court Decision

The case is the first time a court has admitted to using the AI text generator’s answers in a legal ruling.
Janus Rose
New York, US
Image: Thomas Coex / Getty Images

A judge in Colombia used ChatGPT to make a court ruling, in what is apparently the first time a legal decision has been made with the help of an AI text generator—or at least, the first time we know about it.

Judge Juan Manuel Padilla Garcia, who presides over the First Circuit Court in the city of Cartagena, said he used the AI tool to pose legal questions about the case and included its responses in his decision, according to a court document dated January 30, 2023. 


"The arguments for this decision will be determined in line with the use of artificial intelligence (AI),” Garcia wrote in the decision, which was translated from Spanish. “Accordingly, we entered parts of the legal questions posed in these proceedings."

"The purpose of including these AI-produced texts is in no way to replace the judge's decision,” he added. “What we are really looking for is to optimize the time spent drafting judgments after corroborating the information provided by AI.” 

The case involved a dispute with a health insurance company over whether an autistic child should receive coverage for medical treatment. According to the court document, the legal questions entered into the AI tool included “Is an autistic minor exonerated from paying fees for their therapies?” and “Has the jurisprudence of the constitutional court made favorable decisions in similar cases?” 

Garcia included the chatbot’s full responses in the decision, apparently marking the first time a judge has admitted to doing so. The judge also included his own insights into applicable legal precedents, and said the AI was used to "extend the arguments of the adopted decision." After detailing the exchanges with the AI, the judge adopted its responses, alongside his own legal arguments, as grounds for his decision.

Colombian law does not forbid the use of AI in court decisions, but systems like ChatGPT are known for giving answers that are biased, discriminatory, or just plain wrong. This is because the language model has no actual “understanding” of the text; it simply predicts which words are most likely to come next, based on patterns in the millions of examples used to train the system.

ChatGPT’s creator, OpenAI, has implemented filters to screen out some of the more problematic responses. But the company warns that the tool still has significant limitations and should not be used for consequential decision-making.

While the case is apparently the first time a judge has admitted to using an AI text generator like ChatGPT, some courts have—controversially—already begun using automated decision-making tools in determining sentencing or whether criminal defendants are released on bail. The use of these systems in courts has been heavily criticized by AI ethicists, who point out that they regularly reinforce racist and sexist stereotypes and amplify pre-existing forms of inequality.

Although the Colombian court filing indicates that the AI was mostly used to speed up the drafting of the decision, and that its responses were fact-checked, the ruling is likely a sign that more is on the way.