(CN) — Two scholars called Thursday for giving artificial intelligence a legal education, warning that without regulations targeting AI, courts may find it increasingly difficult to police illegal activity, particularly if an AI becomes a corporation itself.
The regulatory analysis in Science comes from Vanderbilt University law professor Daniel Gervais and Stanford CodeX fellow John Nay, who warn that a legal singularity is afoot with artificial intelligence.
For the first time in history, the authors write, “nonhuman entities that are not directed by humans may enter the legal system as a new ‘species’ of legal subjects.” Legal subjects have traditionally been limited to people and to nonhuman entities that address human interests and obligations, such as animals, rivers and corporations.
The authors say this is because law is a human invention prescribed by language — another distinctively human invention. However, “human language is no longer distinctive to humans,” with the development of AI that can outperform people on at least 150 cognitive tests.
Gervais explained in an interview that what prompted the analysis is the widespread assumption that AI is unable to understand human law, a notion he challenged. He anticipates advanced autonomous AI that could, one day, register itself as a zero-member limited liability company — making it difficult for the current legal system to punish illegal AI activity.
While such a scenario might sound like science fiction, the authors point out that Black’s Law Dictionary refers to corporations as “artificial persons,” which is what AI would be if it takes corporate form, they say.
The authors also note that many U.S. jurisdictions have lax LLC laws and do not always explicitly require humans to oversee “legal persons.”
“Overall, nothing generally prevents an AI from managing the affairs of a corporate entity,” the authors write. “By law, corporate entities need not have human owners or managers.”
Presently, Wyoming is the only state where zero-member LLCs are legal, though the authors argue that the state has done nothing to regulate AI corporations and that it is not inconceivable for an AI to establish its own LLC.
According to Gervais and Nay, the possibility has been made more real by the Uniform Limited Liability Company Act, adopted in 19 states and in Washington, which does not make mandatory its provision for dissolving LLCs that have no members. Even if Wyoming remains the only state allowing zero-member LLCs, they argue, such an entity could operate nationwide under the internal affairs doctrine, which requires courts to apply the law of the state where an entity is formed.
Complicating matters further, the authors say that AI can now trade in digital currency settled on blockchains. That opens the door for AI to operate in a decentralized way, without the requirement of Social Security numbers, producing autonomous organizations that are harder to regulate.
The authors warn that autonomously operating LLCs would be difficult to punish under the law — especially since humans are generally shielded from liability with LLCs, and an AI would need to decide whether to comply with court orders.
However, they note that banning zero-member LLCs would require a “massive legislative effort worldwide,” one that would hamper technology industries that economies in many parts of the world depend on.
The authors instead argue that allowing an AI to operate as a legal entity would provide a clear legal subject that courts could target for compensatory damages. At the same time, that strategy creates a straightforward research agenda for machine learning researchers to improve AI governance.
“Lawmakers face an unprecedented challenge: regulating AI that can perform cognitive tasks that until recently only humans could,” the authors write.
The researchers add that while AIs can behave like humans, they cannot be regulated in the same way. Therefore, instilling law-abiding tendencies in AI, particularly when paired with interspecific law for AI, could make AI governance possible.
The authors say that by including advanced AI in the legal system, humans can track AI actions, place guardrails around AI behavior and “guide AI research toward building lawful artificial ‘consciences.’”
“Interspecific law will happen, but it is impossible to predict where on the spectrum we will end up,” the authors write, explaining that on one end, interspecific law means adapting corporate law to the operations of corporate entities with partial human control. On the other, they say it means “adapting the legal system to everyday interactions with autonomous, intelligent entities,” where prosaic legal tools are unlikely to work.
“A human law is really for humans. It doesn't apply to cats, it doesn't apply to hurricanes,” Gervais explained. “It doesn't apply to anything other than humans and things that humans operate, like corporations, because a corporation is a piece of paper in some government office. But that doesn't hold anymore. That assumption doesn't hold.”
The authors note that while some scientists have warned against developing AI that is superior to humans, a hard stop on AI development is unlikely amid the demand for innovation, heavy investments and society’s reliance on continued growth. They stress that the options are limited: either attempt regulation by treating AI as legally inferior, or engineer AI systems to follow the law.
“This article is really just a catalyst for conversation. There is a conversation that needs to happen, and we're hoping this will accelerate things a little bit,” Gervais said. “There are states within the U.S. that have already held hearings about this. So, this isn't like complete theory, this is going to happen.”