WASHINGTON (CN) – Facing pressure from lawmakers to curtail extremist content, officials from Google, Twitter and Facebook told senators on Wednesday their platforms are leaning on improved technology to flag, remove and report posts before they lead to violence.
Large technology companies have faced increasing pressure after a recent spate of mass shootings by people who spread and consumed white supremacist material online. This includes shooters in Christchurch, New Zealand; Poway, Calif.; and El Paso, Texas, who shared white supremacist manifestos shortly before their massacres. The Christchurch shooter also livestreamed his spree, and the video remained on Facebook for an hour before it was taken down.
Online forums such as 8chan have been a common link between ideologically driven shooters, drawing public attention to what major tech companies are doing to prevent white supremacists and other extremists from using their platforms to spread their ideologies.
“White supremacist terrorists in the United States don’t have training camps in the same way that foreign terrorist groups do, like ISIS or al-Qaeda or others,” George Selim, senior vice president of programs at the Anti-Defamation League, told the Senate Commerce Committee on Wednesday. “Their training camp where they connect, learn and coordinate with one another is in the online space.”
This online link between shooters has also put major tech companies in lawmakers' focus. Senators grilled company officials for two hours Wednesday on what they are doing to prevent extremism on their platforms from spilling over into real-world violence.
The officials focused primarily on their technological responses to the issue, saying they are continuing to develop machine learning and artificial intelligence systems that flag extremist content for removal and for sharing with law enforcement agencies.
Derek Slater, global director of information policy at Google, told the Commerce Committee on Wednesday that 87% of the 9 million videos Google removed in the second quarter of this year were flagged by machines rather than people, most before anyone viewed them.
“Why one should be optimistic is that those systems ideally will continue to get better,” Slater said. “Will they be perfect? No, bad actors will continue to evolve. But I do think there is reason for optimism and I think there is reason for optimism based on the collaboration of all of us today.”
Other companies reported similar numbers and Monika Bickert, head of global policy management at Facebook, touted the cooperation of major tech firms to make sure smaller companies have access to tools the giants have developed.
However, the officials acknowledged the limits of artificial intelligence tools, saying they must be improved and supported by human reviewers.
In the instance of the Christchurch shooter’s video, Bickert said it remained up for so long because Facebook’s artificial intelligence did not recognize violence in the video. She said the company is now trying to gather videos from law enforcement agencies to train the system to better recognize violence in content going forward.
Lawmakers seemed generally satisfied with these ideas, though some were frustrated with instances in which the companies failed to recognize signs of danger until it was too late.
Senator Rick Scott, R-Fla., pointed in particular to the shooter at Marjory Stoneman Douglas High School in Parkland, Fla., who once commented on a YouTube video that he planned on becoming a “professional school shooter.”
“Are you comfortable that if another Nikolas Cruz put something up, you would contact somebody and there would be a follow-up process?” Scott asked Slater, referring to the shooter.