
Tuesday, April 16, 2024
Courthouse News Service

Experts urge swift congressional action to combat election deepfakes

A robocall mimicking President Biden’s voice is only the tip of the iceberg for how AI could be harnessed to hinder democracy, a group of tech professionals warned the Senate.

WASHINGTON (CN) — With a presidential election just months away, threats of disinformation supercharged by artificial intelligence — particularly deepfakes — are more apparent than ever, a panel of experts told lawmakers Tuesday.

“We must treat deepfakes with equal or greater importance as the worst kinds of content that existed before them,” said Ben Colman, CEO and co-founder of AI detection firm Reality Defender, during a hearing in the Senate Judiciary Committee’s privacy and technology subpanel.

As generative artificial intelligence models — algorithms that use real world data to generate text, images or audio — have grown in size and complexity, lawmakers and experts alike have worried about how such technology could be harnessed to sway voters.

That threat was laid bare in the days leading up to the New Hampshire presidential primary, when some Democratic voters in the Granite State received an AI-powered robocall spoofing the voice of President Joe Biden. The robo-Biden urged voters not to participate in the New Hampshire primary, erroneously instructing them to save their votes for the general election in November.

The phone campaign, which is under criminal investigation, prompted the Federal Communications Commission to ban AI robocalls. A New Orleans-based magician told NBC News in February that he had been paid $150 to generate the fake message from the president using a third-party AI voice software.

Connecticut Senator Richard Blumenthal, chair of the Judiciary Committee’s technology panel, noted it's easier than ever to generate content aimed at sowing doubt in the democratic process.

“If a street magician can cause this much trouble, imagine what Vladimir Putin and China can do,” he said.

That sentiment was echoed by New Hampshire Secretary of State David Scanlan, invited to testify before the committee about his experience navigating an AI-powered election hoax. Scanlan contended that while it is hard to know the effect the Biden robocall had on turnout in his state’s primary, the issue would certainly be more dire on a national scale.

“These things, in a national election, are going to be generated nationally, whether it’s by foreign actors or some other malicious circumstance," he said. "I think we need uniformity and the power of federal government to help put the brakes on that.”

Colman warned the AI Biden robocall was a portent of things to come, forecasting that artificial intelligence could be used to interfere with elections on a “one to one” basis. For example, he said, voters could be targeted with AI models mimicking a loved one in crisis, keeping them away from the polls on election day. Election workers could similarly be targeted, Colman said, with spoofed phone calls instructing them to travel to the wrong polling locations.

It is also increasingly difficult to detect and respond to deepfakes before they spread, Colman observed, pointing to AI-fueled disinformation in recent elections in Slovakia and Taiwan.

“By the time a deepfake widely spreads, any report calling it a deepfake is also too late,” he said. “Uncovering truth will always be slower and harder than spreading a lie.

“This is not fearmongering, AI alarmism, doomerism or conspiracy-minded hyperbole,” Colman continued. “It is simply the logical progression of the weaponization of deepfakes.”

Rijul Gupta, CEO of AI detection company DeepMedia, said that deepfake technology could also introduce new uncertainties about genuine election content. He pointed out that the prevalence of AI disinformation creates plausible deniability for people to reject real content as generated by an algorithm.

“When anything could be fake, you don’t know what’s real anymore,” Gupta said.

Lawmakers have for months been locked in a dead sprint to get out ahead of artificial intelligence technology, advancing a tranche of legislation aimed at clamping down on its use and development.

Blumenthal, alongside Missouri Senator Josh Hawley, has introduced a bill that would stand up an independent agency to regulate AI and would require companies building such technology to obtain a federal license. Tennessee Senator Marsha Blackburn and Delaware Senator Chris Coons have backed a measure that would force companies to digitally watermark AI-generated materials.

Hawley has also sponsored legislation with Minnesota Senator Amy Klobuchar that would clamp down on sexually explicit deepfakes.

On Tuesday, the Missouri Republican implored Senate leadership on both sides of the aisle to bring some of this bipartisan AI legislation up for a vote.

“I think the dangers of this technology without guardrails and without safety features are becoming painfully apparent,” said Hawley, adding that he was concerned it would take an electoral crisis for lawmakers to get in gear on the issue.

“Let’s not allow these same companies that control social media in this country … to use AI to further their hammer hold on this country and the political process,” he said.

Meanwhile, experts invited to testify on Tuesday commended lawmakers for their efforts to rein in AI technology, but argued they could go even further.

Zohaib Ahmed, CEO and co-founder of California-based AI voice generator Resemble AI, said Congress should crack down on companies that provide voice spoofing services to members of the general public, who can create deepfakes without writing any code.

“I think we have to hold a lot of generative companies accountable,” he said.

Gupta told lawmakers that they should look beyond the AI countermeasures outlined in existing legislation.

“We can’t just say ‘license generative AI companies’ and leave it at that,” he argued. Congress should explore other methods for regulating deepfakes, such as cryptographic hashing — a technique that produces a compact digital fingerprint of a file, which can be used to verify that the data has not been altered.
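Gupta did not lay out a specific scheme at the hearing, but the basic idea behind hash-based verification can be sketched in a few lines of Python: a publisher distributes the fingerprint of an authentic media file, and anyone can recompute it to confirm the file was not tampered with. The function name below is illustrative, not drawn from any proposed legislation.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest serving as a tamper-evident fingerprint."""
    return hashlib.sha256(data).hexdigest()

original = b"Official campaign audio clip"
tampered = b"Official campaign audio clip."  # a single character added

# Identical data always produces an identical fingerprint...
assert fingerprint(original) == fingerprint(original)
# ...while any alteration, however small, yields a completely different one.
assert fingerprint(original) != fingerprint(tampered)
```

Because even a one-bit change produces an unrelated digest, a mismatch between a published hash and a recomputed one is strong evidence the content was modified after release.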

Social media companies and other platforms that host user-uploaded content could also employ a technique known as AI poisoning — subtly altering images so that they become useless as training data when scraped by AI models — as a way to fight back against bad actors.

Blumenthal, meanwhile, put a fine point on the urgent need for AI regulations with a presidential election looming.

“Our democracy is facing a perfect storm,” he said. “When the American people can no longer recognize fact from fiction … it will be impossible to have a democracy.”

Follow @BenjaminSWeiss
