MANHATTAN (CN) — A New York lawyer whose court filings included fake case citations generated by ChatGPT apologized Thursday afternoon for getting duped by the artificial intelligence tool, but the federal judge overseeing potential sanctions appeared unlikely to show any mercy.
While ChatGPT reached hundreds of millions of users in the months after OpenAI launched it as a free web application on Nov. 30, 2022, attorney Steven Schwartz told a federal judge on Thursday that the newness of the technology, some three months after its online debut, led him to falsely assume it was a “super search engine” while he researched cases in an underlying civil personal injury lawsuit.
“It just never occurred to me that it would be making up cases,” Schwartz testified, explaining that at the time he could not believe ChatGPT was capable of generating entirely fabricated responses to his research inquiries.
“My reaction was: ChatGPT is finding that case somewhere, maybe it’s unpublished, maybe it was appealed, maybe it was difficult to access,” he said. “I just never thought it could be made up.”
Schwartz, who has worked for the Manhattan law firm Levidow, Levidow & Oberman for three decades, apologized repeatedly during his emotional reading of a formal statement before Senior U.S. District Judge P. Kevin Castel.
“I deeply regret my actions,” Schwartz said in court on Thursday. “I have suffered both professionally and personally due to the widespread publicity. I am both embarrassed and humiliated and extremely remorseful. To say that this has been a humbling experience would be an understatement.”
Schwartz testified he’s “never come close to being sanctioned in any case or tribunal.”
He also noted having taken a continuing legal education course in artificial intelligence.
The lawyer’s attorneys, Ronald Minkoff and Tyler Maulsby of Frankfurt Kurnit Klein & Selz, each argued that Schwartz made a careless mistake, and that while he should have noticed the red flags along the way, he shouldn’t be accused of acting in bad faith.
“There has to be actual knowledge that Mr. Schwartz knew he was providing bad cases ... or that ChatGPT would be providing bad cases,” Maulsby said.
U.S. District Judge Castel did not rule on punishment on Thursday.
The George W. Bush-appointed judge promised a forthcoming written decision but hinted that the lawyer’s use of ChatGPT was only “the beginning of the narrative, not the end.”
“I doubt we would be here today if the narrative ended there,” the judge said before adjourning on Thursday afternoon.
“It’s not fair to pick apart people's words, but I’ll just note that this has been repeatedly described as a mistake,” he continued. “I understand why it’s framed that way. The mistake was to have submitted the brief on March 1st that cited nonexistent cases, but that’s not what this is all about, that’s part of what it’s about.”
During the hearing on Thursday, Judge Castel prodded Schwartz and his Levidow, Levidow & Oberman colleague Peter Loduca on the timeline of their formal responses to the court after the two were ordered in early May to show cause why they ought not be sanctioned.
The release of ChatGPT last year captured the world’s attention because of its ability to generate human-like responses based on what it has learned from scanning vast amounts of online materials.
With such concerns emerging, European lawmakers have moved swiftly in recent months to add language on general-purpose AI systems as they put the finishing touches on the European Union’s proposed Artificial Intelligence Act.
In the wake of publicity about Schwartz’s case, a Texas judge issued an order last week banning the use of generative artificial intelligence to write court filings without additional fact-checking conducted by an actual person.