
Judge Bans AI-Generated Filings In Court Because It Just Makes Stuff Up

Chatbots often make stuff up and reproduce human biases.
Image: Getty Images

A federal district judge in Texas issued an order on Tuesday barring the use of generative artificial intelligence to draft court filings unless a human checks the output for accuracy, as the technology becomes more common in legal settings despite well-documented shortcomings, such as making things up. 

“All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being,” the order by Judge Brantley Starr stated. 


The order follows an incident in which a Manhattan lawyer named Steven A. Schwartz used ChatGPT to write a 10-page brief citing multiple cases the chatbot had invented, such as “Martinez v. Delta Air Lines” and “Varghese v. China Southern Airlines.” After Schwartz submitted the brief to a Manhattan federal judge, no one could locate the cited decisions or quotations, and Schwartz later admitted in an affidavit that he had used ChatGPT to do his legal research. 

Even Sam Altman, the CEO of ChatGPT maker OpenAI, has warned against using the chatbot for more serious and high-stakes purposes. In an interview with Intelligencer’s Kara Swisher, Altman admitted the bot will sometimes make things up and present users with misinformation. 

The tendency of large language models (LLMs) like ChatGPT to make things up, known as hallucination, is a problem that AI researchers have been vocal about. In a study published by Microsoft researchers to accompany the release of GPT-4, the authors wrote that the chatbot has trouble knowing when it is confident versus just guessing, makes up facts that aren’t in its training data, has no way to verify whether its output is consistent with that data, and inherits the biases and prejudices present in it. 

“These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations,” Starr wrote in his order. “Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth).” 

Starr attached a “Mandatory Certificate Regarding Generative Artificial Intelligence” that attorneys must sign when appearing in his court. “I further certify that no portion of any filing in this case will be drafted by generative artificial intelligence or that any language drafted by generative artificial intelligence—including quotations, citations, paraphrased assertions, and legal analysis—will be checked for accuracy, using print reporters or traditional legal databases, by a human being before it is submitted to the Court,” the certificate states.  

Outside of court, AI-generated content has already spread misinformation. Last week, dozens of verified accounts on Twitter posted about a supposed explosion near the Pentagon alongside an AI-generated image. And in March, a Reddit trend saw people using AI to fabricate historical events, such as “The 2001 Great Cascadia 9.1 Earthquake & Tsunami.”