WASHINGTON (CN) – As social media giants open up about the exploitation of their platforms by Russian agents trying to interfere in the 2016 U.S. election, new research is compounding lawmakers' criticism of companies like Facebook and Twitter.
“These are the most sophisticated, adaptive, innovative companies in the world who are basically trying to make us believe they have no idea how to fix any of this, which is just garbage,” said Molly McKew, a political consultant who specializes in information warfare and U.S.-Russian relations.
Several members of Congress made similar proclamations last week at a Senate subcommittee hearing on crime and terrorism, prompted by recent revelations about the foreign-generated content that flooded Facebook, Google and Twitter last fall.
While Facebook reported that up to 126 million users scrolled through content circulated by Russian operatives, Google found just over 1,000 videos – 43 hours of content – connected to Russian influencers, and Twitter revealed that 2,752 profiles were controlled by the Russian firm Internet Research Agency. Twitter also counted 1.4 million propaganda-based tweets during the 2016 election from at least 36,000 automated “bots.”
At Tuesday’s hearing, members of Congress voiced frustration at how long it took for the companies to disclose these statistics.
Oxford University’s Project on Computational Propaganda meanwhile says the problem is much more widespread than the companies have let on. On Sept. 28 the project reported that the presidential election saw disinformation and factual content circulating on social-media platforms at a 1:1 ratio.
Oxford researcher Lisa-Maria Neudert said she was not surprised that Facebook had to revise an earlier estimate that just 10 million users saw Russian-generated content.
“What does surprise me, in light of the significant scale, is how slowly investigations are proceeding and how difficult it seems to be to produce data evidence of Russian election hacking,” Neudert said in an interview.
When asked whether Facebook could be manipulated to steer the election last year, CEO Mark Zuckerberg famously dismissed the notion as “crazy.” The company even reported as recently as summer 2017 that it had found no evidence of election meddling.
“I’m surprised how the political potential could have gone unnoticed,” Neudert said.
McKew has questions as well about why the companies are characterizing the threat of foreign interference as a new challenge.
As an adviser to the president of Georgia from 2009 to 2012, McKew helped the country, sandwiched between Russia and Turkey, counter a campaign of disinformation on the heels of a five-day war with Russia over the breakaway region of South Ossetia in 2008.
“There’s a lot that they could do to document coordination, to document origin, to document everything else,” McKew said of the tech companies in a phone interview. “But certainly it seems like they’re pretending that’s not possible.”
McKew noted that social media companies rely on algorithms for the express purpose of keeping users engaged so they can promote content and drive advertising.
“They knew exactly what they were doing because they were selling this to people,” she said.
Facebook did not respond to a request for an interview. Google meanwhile declined to comment beyond its Oct. 30 blog post about its commitment to stopping the abuse of its platform, and Twitter declined to comment beyond what the company’s general counsel, Sean Edgett, said during congressional testimony last week.
Neudert and McKew say standard privacy policies may make it difficult to track down Russia-driven content in cases where the Facebook or Twitter users deleted their pages or posts.
Neudert noted that while such policies are commendable from a data-protection standpoint, they create a loophole bad actors can exploit to hide evidence.
“It makes it really easy for Russia and other external states and political actors that might want to intervene to just keep on doing what they’re doing because this way they’re not going to get caught,” she said.
The director of National Intelligence and the secretary of Homeland Security first revealed to the American people in October 2016 that the Russian government had directed the hack of the Democratic National Committee to interfere with the U.S. election process.
By that point, however, the FBI was already three months deep into its investigation of Russian interference and possible coordination with campaign officials for then-candidate Donald Trump.
The role that social media companies may have played is a new offshoot of that probe.
McKew said the companies have done nothing since Russian meddling came to light to prevent bad actors from using targeted advertisements and promotional content. That sends a message that it’s “open season on information operations.”
“I think there will be a lot of new actors trying to figure out how to use weaponized information in the ways that the Russians have done,” she said.
Aric Toler, a lead digital researcher and analyst with the online investigative and verification group Bellingcat, said he is skeptical of the power of disinformation, though he agrees it is a useful tool for the Kremlin to damage public discourse.
Pointing to the Twitter account @PeeOnHillary, a Russian troll account that invited followers to send photos or videos of themselves doing as its name suggests to pictures of Trump’s campaign opponent, Toler said America has bigger problems on its hands if such efforts can fundamentally rupture democracy.
“If you list out the reasons that Trump won, Russian bots and disinformation would be in the triple-digits for most important,” he said in an email. “The crippling damage done to our democracy from gerrymandering and voter suppression is exponentially greater than even the most successful Russian disinformation campaign.”
Toler said he is also unsure whether social media companies have a responsibility to do more to combat the problem of disinformation.
“The goal of Twitter and Facebook, first and foremost, is to increase their share prices – full stop,” he said. “Everything else – ‘protecting our democracy, stopping disinformation,’ etc. – is far less important unless it brings the risk of greater government control and regulation, as they are now facing.”
Lawmakers appeared hesitant this week to take any steps to regulate ad content on the platforms, afraid of wading into thorny First Amendment issues. The attorneys who testified meanwhile signaled they would base internal guidance on proposed legislation, and promised to increase oversight.
Toler recommended a few obvious steps, such as leaving articles from conspiratorial outlets like Breitbart and Infowars out of the platforms’ trending-news sections.
But there is a gray area, Toler said, with ideological news that misrepresents rather than outright fabricates facts.
“I have no idea how to deal with that, and I haven’t seen a lot of promising solutions,” he said. “I don’t think that you can regulate that really, as it’s not at all a new problem.”
Toler said improving the quality of “real” news is likely the best strategy to defeat so-called fake news, but he expressed caution about government regulation of social media.
“Disinformation is scary,” he said. “But the U.S. government led by Trump being able to dictate what is and isn’t disinformation to these gigantic platforms is even scarier.”
For Oxford researcher Neudert, the issue has more urgency. Her research with the Project on Computational Propaganda found that the swing states that helped Trump prevail were flooded in the days before the election with polarizing content and fake news from Russian sources and WikiLeaks.
On Twitter specifically, disinformation exceeded the national average in 27 states, 12 of which were swing states, including Michigan, Florida and Pennsylvania where Trump eked out narrow victories.
“That just makes it more obvious and more transparent that propaganda and misinformation circulating during the US elections was not merely organic,” Neudert said, “but rather was the result of targeting and some sort of external forces.”
Pointing to a statistic from Pew Research that found 68 percent of Americans get their news from social media, Neudert said the platforms have created a new public sphere that shapes how and what people think about.
“If disinformation, bots and algorithms work together to manufacture a consensus, that citizens then jump on, I believe it is dangerous for any democracy and ultimately can completely manipulate the climate of opinion,” she said.