(CN) — One major obstacle is preventing social media companies from effectively combating the proliferation of disinformation and hate speech online, two data ethics experts suggested during a virtual forum Wednesday.
That obstacle is a business model that relies heavily on technology to solve problems and minimizes the need for costly human labor, according to Rachel Thomas, director of the Center for Applied Data Ethics at the University of San Francisco, and Sarah Roberts, professor of information studies at the University of California, Los Angeles.
The two professors discussed the challenge social media companies face in fighting to eliminate fake news, hate speech, terrorism and other objectionable content from their platforms in a virtual forum sponsored by the Center for Data Innovation, a multinational think tank focused on data, technology and public policy.
Social media giants like Facebook, Twitter and Google’s YouTube have deployed an army of algorithms to identify and remove objectionable content. These algorithms can identify patterns based on human-written rules and machine learning. They might compare the digital elements of video, audio or text to previously flagged content, or focus on the user who posted that content, asking questions such as “Is this a new user?” or “Does this user have a profile picture?”
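The kind of rule-based screening described above can be sketched as a simple scoring function. This is a purely illustrative toy, assuming hypothetical signals and weights; it is not any platform's actual moderation system, and real systems combine thousands of machine-learned signals rather than a handful of hand-written rules.

```python
import hashlib

# Hypothetical set of hashes of previously flagged content.
FLAGGED_HASHES = {hashlib.sha256(b"known bad clip").hexdigest()}

def heuristic_score(content: bytes, is_new_user: bool, has_profile_picture: bool) -> int:
    """Combine simple signals into a rough risk score from 0 to 100.

    All weights here are made up for illustration.
    """
    score = 0
    # Signal 1: does the content match previously flagged material?
    if hashlib.sha256(content).hexdigest() in FLAGGED_HASHES:
        score += 60
    # Signal 2: an account-level question like "Is this a new user?"
    if is_new_user:
        score += 25
    # Signal 3: "Does this user have a profile picture?"
    if not has_profile_picture:
        score += 15
    return min(score, 100)

# A post from a brand-new account with no avatar scores 25 + 15.
print(heuristic_score(b"some new post", is_new_user=True, has_profile_picture=False))  # prints 40
```

Checks like these are cheap to run at scale, which is exactly why platforms lean on them; the article's point is that they cannot judge context the way a human reviewer can.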
But these algorithms have limits. They cannot contextualize nuanced language or identify every piece of content that violates a social media firm’s standards. That’s why human content screeners are essential.
“Algorithms don’t take the piece of content, look at it, and use the cognitive process that is pretty much uniquely human,” Roberts explained.
Facebook employs thousands of content moderators, most of them contractors who earn far less than regular Facebook employees. But to effectively screen the millions of pieces of content posted daily, it would need a vastly larger workforce. And that would eat into the company’s profits.
“To have enough content moderators to adequately or more effectively moderate, they would not be nearly as profitable,” Thomas said.
Social media companies have been under increasing pressure to stop fake news from spreading on their platforms, especially after revelations that Russia used social media to meddle in the 2016 U.S. presidential election with fake accounts and disinformation campaigns.
Recent high-profile cases of police violence against people of color have also renewed calls for the technology firms to do more to combat hate speech and racist content. After a meeting with Facebook CEO Mark Zuckerberg Tuesday, civil rights leaders blasted the company for refusing to change how it handles posts that promote hate and encourage violence.
Civil rights groups have convinced companies, including Unilever and Best Buy, to stop advertising on Facebook until it changes its policies.
In defending his company’s decision not to take action against President Donald Trump’s misleading comments about mail-in voting in May, Zuckerberg said Facebook “shouldn’t be the arbiter of truth of everything that people say online.”
Roberts says Zuckerberg fails to recognize that his $686 billion company is already “the arbiter of something” because it uses a massive system of computer algorithms and human beings to screen content.
“He suggests that Facebook is a place for free expression,” Roberts said. “We know that’s not true. If some things can be rescinded on some grounds, why can’t other things that might be very dangerous, such as ‘Don’t wear masks. Coronavirus is a hoax’?”
Thomas and Roberts say social media companies have shown they can marshal resources to tackle a problem when it affects their bottom line. The companies have effectively deployed algorithms to instantly identify and remove copyrighted material, without regard to whether such material might fall within the “fair use” exception.
“That’s an example of where the economic priorities have driven the companies,” Thomas said.
Another problem with moderating content on a global scale is that social media companies employ far more English-speaking screeners than those who speak other languages. In 2013, Facebook started receiving warnings that its platform was being used to promote hatred against Muslims in Myanmar.
Two years later, the company had only four contractors who spoke Burmese, the native language of Myanmar. The United Nations found in 2018 that Facebook’s platform played a major role in promoting the genocide that killed thousands of Rohingya Muslims in Myanmar and displaced more than 700,000 people.
Another challenge is screening live content. In March 2019, a white supremacist broadcast nearly 17 minutes of a horrific mass shooting that killed 51 Muslim worshippers at two mosques in Christchurch, New Zealand.
Social media companies could never employ enough people to screen every livestream as it happens, Roberts said.
To tackle the problem, Roberts suggested platforms could make users earn the right to livestream by proving they are vetted users with no strikes against them. However, she acknowledged that could pose problems for people trying to post content to expose human rights abuses in dangerous parts of the world. Still, she said, automatically giving everyone the right to broadcast video to an audience is a novel concept.
Roberts joked that she often likes to ask colleagues who work in television if they would be okay with letting anyone come in off the street and get in front of a camera for a live broadcast.
“They look at me in horror because it’s a terrible idea,” she said.
Thomas criticized social media firms for refusing to share details on how their algorithms work. That secrecy, she said, has led to some users, including queer and Black content creators, having their accounts restricted because they posted content about homophobia, sex or racism that an algorithm improperly identified as objectionable.
The platforms defend this secrecy by arguing that revealing more details would give bad actors a road map on how to game the system.
Roberts said the bad actors have already been gaming the system without those details from the social media firms. Regular users and content creators are the ones who suffer as a result, she said.
“They’ve made it their practice to do those things totally at their discretion,” Roberts said. “Who does that serve? It serves the platforms and their advertising partners. Their clients are not users.”