Courthouse News Service

Tech Giants Grilled Over Rise of White Nationalism

WASHINGTON (CN) – Reports of hate crimes and online messaging promoting white nationalism are on the rise, and executives from Facebook and Google faced tough questions Tuesday from lawmakers about efforts to stop the spread of hate on their platforms.

The House Judiciary Committee hearing was spurred by the killing of 50 people at the Al Noor mosque in Christchurch, New Zealand, last month. The shooter used Facebook to livestream the attack and, in a manifesto published online, expressed white nationalism as the cornerstone of his belief system.

The mass shooting prompted Facebook to extend a content ban on its website to include white nationalists and white separatists. At Tuesday’s hearing, Neil Potts, Facebook’s public policy director, told lawmakers the social media giant is “working hard” to combat a global network of bots that promote hateful messaging.

“There is no place for terrorism or hate on Facebook. We remove content that bullies, threatens or harasses others. We have invested heavily in safety and security in the last few years,” Potts said.

Some 30,000 people are employed by Facebook to combat hateful rhetoric or messaging that promotes terrorism, but Potts acknowledged the threat is not confined to “inauthentic” activity by bots alone: it also stems from the online activity of real people.

Over 200 Facebook pages with ties to white supremacist organizations have been banned so far and more will be removed soon, Potts said.

Alexandria Walden, who handles free speech and civil rights concerns for Google, told lawmakers that videos or images found in violation of the company’s terms of service are removed and assigned a digital fingerprint, preventing the same content from being uploaded again.

“Redirect methods,” which use targeted ads and videos to disrupt online hate speech, are also being used by Google, Walden said. The company also has a slew of hate crime and terrorism experts who review emerging trends online and flag content before it can be widely disseminated.

While hateful rhetoric spreads like wildfire online, civil rights experts testifying Tuesday also urged the committee to take inflammatory rhetoric offline more seriously, in light of racially charged violence unfolding across the country.

“The rise of the alt-right and polarizing rhetoric made by political candidates and leaders has enabled the spread of hate,” said Eileen Hershenov, senior vice president for policy at the Anti-Defamation League.

According to the league, 78 percent of all extremist murders in 2018 were committed by white supremacists. In the last decade, white supremacists have been responsible for more than half of all extremist murders.

“The federal government must focus on domestic terrorism, encourage state and local law enforcement to collect and report hate crimes data to the FBI, strengthen laws against perpetrators of online hate and improve law enforcement training to combat hate online and off,” Hershenov said.

White supremacists are not “hiding under their hoods” anymore, said Kristen Clarke, president of the Lawyers’ Committee for Civil Rights Under Law. She said new groups are actively rebranding themselves to make their message more palatable and to snake their way into the mainstream.

“The alt-right, neo-Nazis, the Proud Boys, they all pose the same threat today as the KKK did,” Clarke said.

Aggressive lawyering by agencies like the Department of Justice could help end the facilitation of white supremacist activity, she said, adding that taking steps such as penalizing banks, tech companies or web hosts that provide a platform to prop up white supremacists should be considered in earnest.

The department must also apply a uniform understanding of what constitutes white supremacist ideology, according to Clarke.

In 2017, the FBI issued an intelligence assessment report designating “black identity extremists” as a threat to law enforcement officers. The report was circulated to over 18,000 law enforcement agencies and was met with heavy criticism by civil and human rights groups, who claimed it was created only to further discriminate against black people and potentially spark harassment of black activists.

“There is no such thing as ‘black identity extremism,’” Clarke said Tuesday. “This is a way to take the public and government’s attention away from the real threat of white nationalism.”

Clarke’s position, shared by several other civil rights experts and by Democratic lawmakers, was met with frequent and sharp rebuke from conservative commentator Candace Owens of the right-wing group Turning Point USA.

Owens and Mort Klein, president of the pro-Israel Zionist Organization of America, were invited by members of the committee’s Republican minority to testify.

Claiming Democrats manipulate data points in hate crime studies to “widen the definition of a hate crime and control the narrative,” Owens told lawmakers the words “white nationalism” may have “once held real meaning” but today the phrase is often used by Democrats and journalists to further their preferred political candidates and win elections.

“This hearing today isn’t about white nationalism. It is a preview of a Democrat 2020 election strategy and it is the same as the 2016 election strategy,” Owens said.

Hershenov, of the Anti-Defamation League, countered that Owens’ claim of Democrats manipulating hate crime data is patently false. She said statistics reported by organizations like the ADL are drawn from findings by the FBI and various law enforcement agencies.

Specific solutions to combat hate crimes and white nationalism were not introduced or hashed out by lawmakers during Tuesday’s often tense hearing.

But as online posts promoting racist and anti-Semitic ideals began bombarding the livestream of the hearing, YouTube acted swiftly.

“Hate speech has no place on YouTube,” the company tweeted Tuesday morning. “We’ve invested heavily in teams and technology dedicated to removing hateful comments/videos. Due to the presence of hateful comments, we disabled comments on the livestream of today’s House Judiciary Committee hearing.”
