WASHINGTON (CN) — Members of Congress received yet another call to action Wednesday to regulate artificial intelligence, as a group of elections experts and human rights advocates warned that the developing technology could have dire consequences for democracy.
Lawmakers are in a sprint to pass legislation reining in the nascent AI industry, pointing to concerns about the technology’s effects on many aspects of daily life, such as health care, finance and intellectual property rights. Now, Capitol Hill has turned its attention to how the unchecked use of artificial intelligence could affect one of the most basic building blocks of American democracy: free and fair elections.
“With AI, the rampant disinformation we have seen in recent years will quickly grow in quantity and quality,” Minnesota Senator Amy Klobuchar said during a hearing Wednesday in the Senate Committee on Rules and Administration. “We need guardrails to protect our elections.”
Klobuchar, who chairs the panel, has for months led the charge in the Senate to regulate AI. The Minnesota Democrat has cosponsored a bill alongside Missouri Republican Josh Hawley aimed at blocking the proliferation of "deepfake" videos — clips that stitch together AI-generated audio and images of politicians and other public figures.
The senator pointed to several examples of such technology already being used to impersonate candidates for political office, such as President Joe Biden and Massachusetts Senator Elizabeth Warren. The danger of those tools lies in their ability to put words in the mouths of public officials and other leaders, she said.
Klobuchar also observed that AI could be used to spread misinformation when it comes time to cast a ballot, such as inventing voting locations or providing false information about polling times.
Minnesota Secretary of State Steve Simon, invited to testify Wednesday, echoed Klobuchar’s concerns. While artificial intelligence itself isn’t a threat to democracy, he observed, it can be used in a way that amplifies existing problems.
“We’re talking about an old problem, namely election misinformation and disinformation, that can now more easily be amplified,” Simon said.
Existing election interference schemes could be supercharged by AI, he contended. “In the wrong hands, AI could be used to misdirect intentionally and in ways that are far more advanced than ever.”
However, Simon noted not all misinformation peddled by artificial intelligence comes with malicious intent. The state official pointed to an example of an artificial intelligence tool that provided incorrect answers when asked about Minnesota election law.
“Was that intentional misdirection? Probably not,” Simon said. “Still, it is a danger to voters who may get bad information about critical election rules.”
Further, Simon warned that the prevalence of AI-powered election misinformation could prime Americans to falsely assume genuine information that challenges their preconceived notions has been altered.
“Simply put, the mere existence of AI can lead to undeserved suspicion of messages that are actually true,” Simon said.
Trevor Potter, former chairman of the Federal Election Commission, urged lawmakers to strengthen the government’s power to stop campaigns and other actors from using AI to generate misleading election content.
Under current law, Potter explained, the FEC can stop political candidates from publishing statements falsely attributed to another candidate. “I believe the FEC should explicitly clarify … that the use of AI is included in this prohibition,” he said.
Congress should also expand that ban to include any person, not just political candidates, Potter added.
Committee Republicans, while amenable to the idea of regulating AI’s role in elections, were also concerned about whether tightening restrictions could affect the industry’s development or free speech rights.
“In considering whether legislation is necessary, Congress should weigh the benefits and risks of AI,” said Nebraska Senator Deb Fischer, the panel’s GOP ranking member. “We should look at how innovative uses of AI could improve the lives of our constituents, and also the dangers that AI could pose.”
Pointing to First Amendment protections, Fischer added that the government cannot regulate protected speech. “We must carefully scrutinize any policy proposals that would restrict that speech,” she said. “We need to strike a careful balance between protecting the public, protecting innovation and protecting speech.”
Ari Cohn, a free speech lawyer for technology-focused nonprofit TechFreedom, said while the government should take steps to prevent meddling in the electoral process, restricting political speech “must satisfy the most rigorous constitutional scrutiny.”
Cohn staked out a more moderate position on AI, arguing society is not “on the precipice of calamity brought on by seismic shift.”
“AI presents an incremental change in the way we communicate, much of it for the better,” he argued. “There is simply no evidence that AI poses a unique threat to our political discussion and conversation.”
Any law that prohibits AI-generated election content could unintentionally bar “an enormous amount of protected and even valuable political discourse,” Cohn contended.
Potter meanwhile pushed back on the argument that clamping down on the use of AI in election content would violate the constitutional right to free speech.
“There is no countervailing First Amendment right to intentionally defraud voters in an election,” he said, “so a narrower law prohibiting the use of AI to deceptively undermine our elections through fake speech would rest on firm constitutional footing.”
Klobuchar, recognizing the potential upsides of emerging artificial intelligence technology, agreed with Fischer that there was a balance to be struck.
“We can harness the potential of AI and its great opportunities,” she said, “while controlling the threats we now see emerging, and safeguard our democracy.”
AI issues are rapidly heating up on Capitol Hill. A cadre of Senate Democrats last week unveiled legislation that, if made law, would force companies that use artificial intelligence to make major business decisions to be more transparent in how and when they employ such technology. The measure is aimed at addressing reports that AI tools can amplify bias in such decisions, lawmakers have said.
Wednesday’s hearing also comes on the heels of a panel discussion on AI regulation between members of Congress and tech executives. Billionaire entrepreneur Elon Musk, who owns an artificial intelligence company, said following the meeting that the U.S. should establish a dedicated federal agency for regulating the technology.