WASHINGTON (CN) – Officials from Google, Twitter and Facebook testified Wednesday before a House committee on strategies for countering misinformation and terrorist propaganda on their platforms, amid claims from President Donald Trump that they are trying to rig the 2020 election.
The hearing in the House Committee on Homeland Security took place as Trump accused Google and Twitter of being “totally biased” in a Fox interview, adding that the federal government may take the tech giants to court. The White House also set a Social Media Summit for July 11 but did not specify which companies will participate.
Derek Slater, global director of information policy for Google, said he had nothing to say about the president’s remarks after the hearing.
A Twitter spokesman said in a statement that a decrease in followers on accounts including the president’s is the result of the company wiping fake users from its platform, not a targeting of conservative users.
“The result is higher confidence that the followers they have are real, engaged people,” the spokesman said.
Members on both sides of the aisle opened the hearing by drilling the companies on their response to the attack by a white supremacist on a mosque in New Zealand that left 51 dead and 49 wounded.
But the committee’s focus quickly shifted along party lines to the 2020 election, including the threat of content manipulation and attacks by foreign adversaries.
Republicans also raised alarm over a video of a Google executive they claim is evidence of intentions among the company’s leadership to disadvantage Trump in next year’s presidential race.
Slater, on behalf of Google, assured ranking member Mike Rogers, R-Ala., that no employee, even at the highest level, can manipulate the search engine.
In the video released by conservative group Project Veritas, Google executive Jen Gennai says calls from Senator Elizabeth Warren to break up Google will “not make it better, it will make it worse, because now all these smaller companies who don’t have the same resources that we do will be charged with preventing the next Trump situation.”
Rogers recognized Gennai was taped without her knowledge but said he was still wary of a “pervasive nature” within the company to manipulate information to support a particular political candidate.
Drawing heat from Democrats over the spread of a “deep-fake” video of House Speaker Nancy Pelosi last month, Facebook and Twitter officials explained that the doctored video did not violate their policies but said they had worked to respond to it.
“Twenty-four hours is not fast enough. So are we playing here defense or offense?” said Representative Lou Correa, D-Calif., referring to what he said was a slow response time on the part of Facebook.
Google removed the Pelosi deep-fake from its platform YouTube under the company’s “deceptive practices policy.”
Monika Bickert, head of global policy management at Facebook, said her platform works with 45 independent fact-checking organizations around the world to identify false content.
“As soon as we find something that those fact-checking organizations rate false on our platform, we dramatically reduce the distribution, and we put next to it related articles so that anybody who shares that gets a warning that this has been rated false,” Bickert said.
But Representative Yvette Clarke, D-N.Y., expressed doubt over the effectiveness of current warnings to users.
“There needs to be some sort of a universal way that Americans can detect immediately that what they are seeing is altered in some form or fashion,” said Clarke, who has proposed legislation to require social media platforms to include digital watermarks on deep-fake videos.
Bickert also assured Congress that Facebook – the site used most heavily by Russian hackers in 2016 – is in a much better position three years later to ensure “unprecedented levels of transparency” with political ads. Facebook now requires users to verify their U.S. citizenship to run such an ad.
“Because we have seen fake IDs uploaded from advertisers, we send you something through the mail. You actually then get a code and you upload for us a government ID,” Bickert said. “Then we also put a paid-for disclaimer on the political ad.”
The three major platforms have responded to pressure to clamp down on fake news and terrorist activity by forming the Global Internet Forum to Counter Terrorism, which allows for real-time collaboration across companies, as in the event of the Christchurch shooting.
“We were able to stop hundreds of versions of the video of the attack despite the fact that bad actors were actively trying to edit the video to circumvent our systems,” Bickert said.
The dissemination of the Christchurch shooter’s manifesto and livestream video marked a shift in how extremists utilize social media, said Nick Pickles, Twitter’s global senior strategist for public policy.
“The distribution of media was manifestly different from how [the Islamic State] or other terror organizations worked. This reflects a change in the wider threat environment that requires a renewed approach and a focus on crisis response,” he said.
Twitter has taken action against 184 violent extremist groups globally and permanently suspended 2,000 unique accounts, Pickles added.
But Nadine Strossen, a New York Law School professor, warned against treating this strategy as a foolproof preventative measure.
“If someone is driven off one of these platforms they will then take refuge in darker corners of the web, where it is much harder to engage with them, to use them as sources of information for law enforcement and counter terrorism investigations,” Strossen said.