DALLAS (CN) — A Texas federal judge is requiring attorneys in cases before his court to pledge they did not use artificial intelligence to draft their documents, warning the programs “make stuff up.”
U.S. District Judge Brantley Starr in Dallas mandated the certification on Wednesday, becoming one of the first judges in the country to do so.
“All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being,” states the website for the U.S. District Court Northern District of Texas. “These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them.”
Business leaders have grappled with how to regulate the ethics of using generative AI since its widespread introduction last year. The programs can quickly generate text and images in response to conversational prompts, having learned patterns from the vast amounts of data on which they were trained. Critics argue that misuse and abuse of the technology can spread misinformation and deepfakes to manipulate large numbers of people.
Starr, a Donald Trump appointee, explained the AI programs are “prone to hallucinations and bias” in their current state, including making up quotes and citations. He warned the programs are not bound by oaths to the law or the truth that attorneys are sworn to.
“Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle,” the judge wrote. “Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.”
Starr warned that violators of the mandate will face sanctions under Rule 11 of the Federal Rules of Civil Procedure.
The judge’s new mandate comes one week after a lawyer admitted in New York federal court to using ChatGPT to write a filing in a personal injury case that cited six court decisions that do not exist. Steven Schwartz with Levidow, Levidow & Oberman wrote in a sworn affidavit that he “greatly regrets” using AI in such a manner, had no intent to deceive the court and will never use the programs in the future “without absolute verification” of authenticity.
Schwartz faces a hearing on possible sanctions on June 8 before U.S. District Judge P. Kevin Castel in Manhattan.