An AI bot has passed the bar exam. What happens next?

While some attorneys are hopeful artificial intelligence can expand legal services for low-income and pro se litigants, other experts worry it will widen the gap between rich and poor.

CHICAGO (CN) — This past winter, an AI system known as GPT-4 passed the Uniform Bar Exam - as the name suggests, a standardized version of the infamously difficult qualifying test for would-be lawyers. It often takes flesh-and-blood attorneys several attempts to pass the exam, and in a way, GPT-4 was no different.

A multidisciplinary team of researchers and attorneys, led by law professor Daniel Katz from the Illinois Institute of Technology's Chicago-Kent College of Law, put multiple iterations of artificial intelligence programs through the bar before GPT-4 took its turn.

In a study the team published earlier this month, they noted that in one of the first attempts, a program known as text-ada-001 scored a dismal 8% on the test's multiple-choice section. Other programs floundered when answering essay questions or failed to grasp different domains of legal practice – civil procedure was especially challenging.
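
For illustration, the scoring logic behind a multiple-choice run can be sketched in a few lines of Python. This is not the research team's actual evaluation harness; the sample question, answer key and ask_model() stub below are hypothetical placeholders.

```python
# A minimal sketch of scoring a model on a multiple-choice exam.
# NOT the study's real harness; the question, key and ask_model()
# stub are hypothetical placeholders.

def ask_model(question: str, options: dict[str, str]) -> str:
    """Stand-in for a language-model call; returns a letter such as 'A'."""
    prompt = question + "\n" + "\n".join(
        f"{k}. {v}" for k, v in options.items()
    )
    # A real harness would send `prompt` to a model such as GPT-4 here.
    return "A"

def score(exam: list[dict], key: list[str]) -> float:
    """Percentage of questions the model answers correctly."""
    correct = sum(
        ask_model(q["question"], q["options"]) == k
        for q, k in zip(exam, key)
    )
    return 100 * correct / len(exam)

exam = [{
    "question": "An affirmative defense such as the statute of "
                "limitations is raised in which pleading?",
    "options": {"A": "The answer", "B": "The complaint",
                "C": "A surreply", "D": "Voir dire"},
}]
print(f"score: {score(exam, ['A']):.0f}%")
```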

But GPT-4 passed with flying colors in every field and test segment, even outperforming the average human student. Other AIs are sure to follow. So what does it mean that sci-fi imaginings of machines interpreting our laws have given way to a reality where one already does?

Even Katz isn't entirely sure.

"We're at the beginning of something here. The dawn of a new technology," Katz said in an interview. "No one is sure what it's going to mean."

Katz did predict that within a few years, perhaps even by the end of this year, large corporations like Microsoft and Google may start experimenting with offering AI legal services. Nothing that could replace an attorney in a courtroom - yet - but work typically performed by legal researchers and paralegals, like compiling data or answering clients' questions about the law.
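
For a rough sense of what such a service might look like under the hood, here is a minimal sketch using OpenAI's Python client. The model name, system prompt and sample question are illustrative assumptions, not a description of any company's actual product.

```python
# A hedged sketch of the paralegal-style Q&A service described above,
# using OpenAI's Python client. Model, prompt and question are
# illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a legal research assistant. Provide general "
                    "legal information, not legal advice."},
        {"role": "user",
         "content": "What is the deadline to answer a civil complaint "
                    "in federal court?"},
    ],
)
print(response.choices[0].message.content)
```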

Several attorneys Courthouse News spoke with, including Katz, see this possibility as a net positive. As Katz pointed out, the American legal system's highly privatized nature means that for most people, attorneys outside of public defenders are an unaffordable luxury.

"Most people in this country today can't afford a lawyer," Katz said.

He and others said they hoped AI could extend some basic legal services to those who have historically gone without - immigrants, low-income families and those who file their suits pro se, without the benefit of counsel.

"I'm generally very optimistic about it," said Matthew Shepard, a public defender and board member of the Chicago chapter of the National Lawyers Guild, which often provides pro bono counsel to arrestees and activists. "I think it will be helpful in assisting attorneys with filing motions and going through large quantities of discovery"

"Sounds like it could help out a lot of pro se litigants," agreed Zane Thompson, a workers' compensation attorney with the Chicago law firm Ganan & Shapiro.

But others are wary – not only of the AI itself, but of who will control it. Building and maintaining an AI system can cost six figures or more every month, and the systems' complexity means only wealthy corporations have the resources to deploy them at scale.

Tech culture author Wendy Liu and University of Illinois at Chicago computer science professor Abolfazl Asudeh, who has extensively researched fairness in data, both said that large corporations controlling such powerful technology could have negative consequences. One might be the expansion of ongoing digital enclosure - the process by which ever more goods and services are provided by a shrinking number of massive conglomerates, making society at large increasingly dependent on and subservient to them.

"Who can create these large-language models? Other than big tech companies such as Microsoft, Google, and Amazon, who else has the capacity?" Asudeh said. "So now this technology is great and everyone wants to use it, but only the big corporations have it."

This enclosure, in turn, may threaten to divide our legal system even further between rich and poor, Asudeh warned. Chatbot attorney services managed by corporations could become the first and last legal resort for the struggling many, while independent law firms become luxury boutiques for the wealthy few, even more than they already are. Asudeh also argued that, at least while the technology is still maturing, AI will struggle to handle the complex criminal and civil cases that money often allows the rich to sidestep.

"If ChatGPT can offer good legal advice, then we should use it," Asudeh said. "But that legal advice is better for people who need it less. Richer people. Marginalized people who need it more, who have more complex cases, will probably not get as good advice."

Katz pushed back against this line of argument. He pointed out that GPT-4 excelled at interpreting criminal and constitutional law, with future iterations of the technology likely to improve further. He also argued that we are already enclosed anyway - we buy our furniture from Ikea, we use Google and Microsoft to look up everything down to the traffic conditions on our daily commutes, and for everything else, there's Amazon. That being the case, why wring our hands over an improvement on the formula?

"I mean for most people, the de facto lawyer today is a Google search," Katz said. "I don't want to hear this romanticization, because it doesn't add up."

Liu, who has studied and criticized the tech world's culture, most famously in her book "Abolish Silicon Valley," was not convinced.

"It's classic capitalist realism," she said, referencing a theory by the late English philosopher Mark Fisher, who proposed that people living in capitalist countries had lost the ability to conceive of non-capitalist social structures. "Things are already bad, so let's just keep doing what we're doing."

Liu also doubted that large tech companies would put much thought into how legal AIs could be utilized responsibly, so long as they stood to profit off the technology's deployment.

"The tech industry as it is now it not optimized to think about social responsibility," Liu said. "It's almost seen as too bleeding heart to think about the social implications of things."

It doesn't help, she added, that many working in tech oppose the idea of government regulating emergent technologies.

"There's this thought that, 'why should a bunch of 70-year-olds in Washington be in charge of technology they known nothing about?' and there's some truth to that," Liu said.

Maura Grossman, a practicing attorney and professor of computer science at Canada's University of Waterloo, offered one further warning. Should Microsoft or Google or Amazon start offering low-cost AI legal services, whether through a subscription or otherwise, she said, users' personal data could become one of the service's hidden fees.

"I can't imagine they're not going to sell your data," Grossman said, adding that scraping and trading personal data is already a major source of income for tech companies like Meta.

In the U.S. and Canada, law firms have a duty to protect their clients' confidential information, and are forbidden from using that information to enrich themselves. But wealthy companies may find a way around those protections, Grossman said.

"They might put that into the terms of service, that the reason you're getting this [AI legal advice] for $4 a month is because they're getting your data," she said. "They may use it to train the machine to be better, or they may sell it to third parties."

To prevent that kind of abuse, Katz and Shepard both argued that AIs should only be used as tools under human attorneys' supervision. But Grossman predicted that the technology may actually scare attorneys away should it be dominated by corporate players. No one wants to be on the hook for the first time a corporate AI legal service is sued for malpractice, she said.

"I would not put anything confidential into one of these tools, because there may be a [legal] risk to it," Grossman said.

Still, Katz, Shepard and Thompson remain hopeful. A machine's ability to pass the bar exam is undoubtedly impressive, and Katz argued that fear of a new technology's social impacts should not freeze further inquiry or experimentation – especially not when the technology in question could expand access to a service to which all U.S. citizens are technically entitled, but which few have the means to actually use.

"There's promise and peril in every new technology," Katz said.

"I think this could be a massive help for people who are representing themselves," Shepard said, adding in an email that the technology "could enable people who are unable to afford paid counsel to better advocate for themselves in civil matters such as landlord/tenant and unemployment benefit issues."

Asudeh remains skeptical. Even putting aside the concerns large corporations raise, the very human U.S. legal system is still colored by centuries of racism, misogyny and profiteering. An AI is essentially just a logic box, he said, and one working within that biased framework may only exacerbate its inequities.

"When we are talking about these complex models, the algorithm tries to mimic the data," Asudeh said. "And if the data is biased, the output could be biased as well... You think you are fixing the problem, but in the long run you're only making it worse."
