(CN) – The European Commission on Tuesday released its first evaluation of the continent’s code of conduct to combat illegal online hate speech, and the results show social-media companies have a long way to go to address regulators’ concerns about cyberbullying, radicalization and fake news.
Earlier this year, technology giants Facebook, Twitter, Microsoft and Google agreed to the commission’s code of conduct governing hate speech on their social-media platforms. EU law defines hate speech as expressions of racism or xenophobia meant to incite violence or hatred toward an individual or group because of their race, color, religion, descent or national or ethnic origin.
The code requires the companies to review content flagged by other users or by authorities, and to remove content deemed illegal within 24 hours.
But in its first evaluation of the code, the commission found the social-media networks have been slow to respond to flagged content. Only 28 percent of content flagged as illegal hate speech was removed during the survey period, and fewer than 40 percent of flags were reviewed within the required 24 hours.
“It is our duty to protect people in Europe from incitement to hatred and violence online. This is the common goal of the code of conduct,” Justice Commissioner Vera Jourova said in a statement. “The last weeks and months have shown that social-media companies need to live up to their important role and take up their share of responsibility when it comes to phenomena like online radicalization, illegal hate speech or fake news. While IT companies are moving in the right direction, the first results show that the IT companies will need to do more to make it a success.”
A second evaluation, also done for the commission by a group of nonprofit organizations throughout Europe, will be conducted in early 2017. The results will help determine future steps to combat racism, intolerance and xenophobia on social media, the commission said.