Tackling the ‘Deep Fake,’ House Grasps for Solution to Doctored Videos

WASHINGTON (CN) – Somewhere between fact and fiction, seeing and believing, satire and solemnity, a technology driven by artificial intelligence is on the rise, posing a threat to national security and civil rights that regulators of yesteryear could scarcely imagine.

A panel of experts delivered the frightening assessment as the House Intelligence Committee met Thursday to grapple with a phenomenon that has become known as the “deep fake.”

On the eve of a massive merger, a deep-fake video could be used to convince stakeholders that a CEO has declared the company insolvent.

Another could involve superimposing a reporter’s face onto a body in a pornographic film. Still others could make a lawmaker appear to take a bribe or seem drunk.

Experts said Thursday that some of these scenarios have already happened, while others – and far worse ones, at that – are yet to come.

When President Donald Trump retweeted a video in May to mock House Speaker Nancy Pelosi, it was not a deep fake but rather a doctored clip.

It was relatively easy for forensic analysts to determine that the content had merely been sped up or slowed down to make Pelosi appear confused. Unlike what happened to journalist Rana Ayyub, no face-grafting occurred.
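
Uniform speed manipulation of that kind leaves a simple arithmetic trail. The sketch below is a minimal illustration of the sort of check an analyst could run, assuming a trusted reference copy of the footage exists; the file names, the OpenCV dependency and the 5% tolerance are illustrative choices, not anything described at the hearing.

```python
# A sketch of a basic forensic check for uniform speed manipulation:
# compare how long the same footage runs in a trusted reference copy
# versus the suspect copy. "reference.mp4" and "suspect.mp4" are
# illustrative file names, and the 5% tolerance is an arbitrary choice.
import cv2  # OpenCV: pip install opencv-python


def clip_duration(path: str) -> float:
    """Return a video's playback duration in seconds (frames / fps)."""
    cap = cv2.VideoCapture(path)
    frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
    fps = cap.get(cv2.CAP_PROP_FPS)
    cap.release()
    return frames / fps


def speed_factor(reference: str, suspect: str) -> float:
    """Ratio of suspect duration to reference duration.

    The same frames played more slowly stretch over a longer wall-clock
    duration, so a clip slowed to 75% speed yields a factor near 1.33.
    """
    return clip_duration(suspect) / clip_duration(reference)


if __name__ == "__main__":
    factor = speed_factor("reference.mp4", "suspect.mp4")
    if abs(factor - 1.0) > 0.05:
        print(f"Suspect clip plays at roughly {1 / factor:.0%} of normal speed")
    else:
        print("No uniform speed change detected")
```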

Sharing Ayyub’s story with the committee, University of Maryland law professor Danielle Citron noted that the journalist awoke in April 2018 to find her face had been superimposed on the body of a woman in a pornographic video, less than 24 hours after an appearance on the BBC and Al Jazeera in which she condemned Indian religious leaders who had advocated on behalf of defendants in the gang rape and murder of an 8-year-old girl.

As death and rape threats against Ayyub rolled in, Citron noted that those antagonizing the journalist also disseminated her home address in messages that suggested she was available for sex.

“She went offline for several months, she couldn’t work,” Citron said. “She lost her safety. The harm is profound and it will increasingly be felt by women and minorities.”

Citron has closely studied the intersection of cyber hate crimes, liability and deep-fake technology. Though she called the very notion of the technique “scary,” Citron emphasized that lawmakers could address the problem with a tweak to Section 230 of the Communications Decency Act.

“Section 230 says no online service should be treated as a speaker or publisher for user content,” Citron said. “We can change that to say no online service that engaged in reasonable moderation practices shall be treated as a speaker or publisher for another user’s content.”

Representative Devin Nunes, the ranking Republican on the committee, questioned whether “reasonableness” could even be defined, but Citron argued this is what the law does best.

“Every time a lawyer can’t determine reasonableness, it’s called tort law,” she said. “Negligence is built on a foundation of reasonableness. Law moves in a pendulum: we start with no liability, sometimes we overreact and apply strict liability … but content moderation has been going on for 10 years and there are meaningful practices emerging.”

Deep fakes also create something Citron and Bobby Chesney with the University of Texas Law School dub the “liar’s dividend.”

When people cannot believe what they see or hear because deep fakes are pervasive, a wrongdoer can seize on a genuine recording of their mischief and deny it is them, claiming instead it is a deep fake.

It could create a world where nothing is believable, she warned.

To stop that, Citron said people must be “robustly educated” about the technology.

David Doerman, formerly of the Defense Advanced Research Projects Agency, suggested applying a 7- or 15-second delay to video posted online to avoid the wildfire spread of potentially fake content.

“A lie can go halfway around the world before the truth gets its shoes on,” said Doerman.
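
Mechanically, the delay Doerman floated amounts to a holding queue in front of a platform’s publish step. The sketch below is a minimal illustration of that idea only; the DelayedPublishQueue class and its screen() hook are hypothetical stand-ins for whatever automated review would run during the window, not any platform’s actual pipeline.

```python
# A minimal sketch of a publish delay: uploads wait in a queue for a
# fixed hold window (7 or 15 seconds, per Doerman's figures) before
# going public, leaving time for automated screening. The screen()
# check is a hypothetical placeholder, not a real platform API.
import time
from collections import deque

HOLD_SECONDS = 15  # Doerman suggested a 7- or 15-second delay


def screen(upload: dict) -> bool:
    """Hypothetical authenticity check run during the hold window."""
    return not upload.get("flagged", False)


class DelayedPublishQueue:
    def __init__(self, hold_seconds: float = HOLD_SECONDS):
        self.hold = hold_seconds
        self.pending = deque()  # (submit_time, upload) pairs, oldest first

    def submit(self, upload: dict) -> None:
        """Accept an upload but keep it private until the hold elapses."""
        self.pending.append((time.monotonic(), upload))

    def publish_ready(self) -> list:
        """Release held uploads whose window has elapsed and which pass
        screening; uploads that fail the check are silently dropped."""
        released = []
        now = time.monotonic()
        while self.pending and now - self.pending[0][0] >= self.hold:
            _, upload = self.pending.popleft()
            if screen(upload):
                released.append(upload)
        return released


if __name__ == "__main__":
    queue = DelayedPublishQueue(hold_seconds=2)  # short window for the demo
    queue.submit({"id": 1, "flagged": False})
    queue.submit({"id": 2, "flagged": True})
    time.sleep(2)
    print(queue.publish_ready())  # only the unflagged upload is released
```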

Looking toward the 2020 election, Clint Watts, a senior fellow at the Foreign Policy Research Institute, encouraged lawmakers to lean on social media giants to create unified standards right away.

“Right now, if you go to any platform that allows you to post anything from an inauthentic account, that’s the weak point at the start,” Watts said. “It can spread through the system rapidly. This issue cannot be policed this way.”
