
Courthouse News Service

Sanctions ordered for lawyers who relied on ChatGPT artificial intelligence to prepare court brief

A federal judge said the fines are meant to serve as a deterrent in an era of artificial intelligence tools that are already giving rise to legal fabrications.

MANHATTAN (CN) — Finding evidence of subjective bad faith, a federal judge ordered two attorneys Thursday to pay $5,000 fines after they submitted legal briefs using bogus case citations invented by the AI chatbot ChatGPT.

Steven A. Schwartz and Peter LoDuca faced sanctions in the Southern District of New York over a filing in a civil personal injury lawsuit against an airline that included references to past court cases that Schwartz assumed were real after they were supplied to him by the artificial intelligence-powered chatbot.

“Many harms flow from the submission of fake opinions,” U.S. District Judge P. Kevin Castel wrote in a 34-page opinion made public on Thursday. “The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents.”

Judge Castel ordered Schwartz and LoDuca to each pay $5,000. Their firm, Levidow, Levidow & Oberman, is being held jointly and severally liable for the work of the lawyer who initially stood by the fake opinions in signed affidavits after judicial orders called their existence into question.

Both lawyers are also required to send copies of the sanctions ruling to Roberto Mata, the plaintiff in the underlying personal injury suit, within two weeks, as well as to forward the ruling to each judge whom ChatGPT falsely identified as an author of the six ginned-up opinions.

The sanctions were ordered under Rule 11, to serve as a deterrent, rather than as punishment or compensation.

Discussing the potential harm of attributing fictional conduct to judges, Castel wrote: “It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”

As Judge Castel intimated at a sanctions hearing earlier this month, Schwartz and LoDuca’s misconduct was not limited to their use of what the judge called “legal gibberish” generated by ChatGPT; it also included the lawyers’ subsequent “shifting and contradictory explanations” in affidavits submitted after the judge raised the possibility of Rule 11 sanctions.

“Poor and sloppy research would merely have been objectively unreasonable,” the opinion states. “But Mr. Schwartz was aware of facts that alerted him to the high probability that ‘Varghese’ and ‘Zicherman’ did not exist and consciously avoided confirming that fact.”

The two attorneys each admitted to signing off on the bogus case citations at a hearing earlier this month.

Schwartz explained that he used the groundbreaking program as he hunted for legal precedents supporting a client’s case against the Colombian airline Avianca for an injury incurred on a 2019 flight.

Schwartz told Judge Castel that the newness of ChatGPT’s technology, which he used some three months after its online debut, led him to wrongly assume it was a “super search engine” while researching cases.

“It just never occurred to me that it would be making up cases,” Schwartz testified. “My reaction was, ‘ChatGPT is finding that case somewhere.’ Maybe it’s unpublished, maybe it was appealed, maybe it was difficult to access. I just never thought it could be made up.”

The sanctions ruling will not compel any formal atonement from the lawyers.

“The Court will not require an apology from Respondents because a compelled apology is not a sincere apology,” Castel wrote.

Ronald Minkoff, an attorney for the Levidow, Levidow & Oberman firm, said Thursday that he disagrees with the judge’s finding that any lawyers at the firm acted in bad faith.

“We have already apologized to the Court and our client,” he wrote in a statement. “We continue to believe that in the face of what even the Court acknowledged was an unprecedented situation, we made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”

Microsoft has invested some $1 billion in San Francisco-based OpenAI, the company behind ChatGPT.

The release of ChatGPT last year captured the world’s attention because of its ability to generate human-like responses based on what it has learned from scanning vast amounts of online materials.

The partnership positions Microsoft to sharpen its competition with Google in commercializing new AI breakthroughs that could transform numerous professions, as well as the internet search business.

With such concerns emerging, European lawmakers have moved swiftly in recent months to add language on general AI systems as they put the finishing touches on the European Union’s proposed AI legislation.

In the wake of publicity about Schwartz’s case, a Texas judge issued an order last month banning the use of generative artificial intelligence to write court filings without additional fact-checking conducted by an actual person.

Categories / Law, Technology