(CNN) — It's Day Zero of the European Union's pioneering mission to make the internet safer for Europeans by forcing the world's biggest online platforms to remove harmful content such as disinformation and hate speech, and to stop targeting users with personalized advertising.
On Friday, new EU rules under the Digital Services Act kicked in that require 19 online giants — including Facebook, YouTube, Instagram, Snapchat, Google, Amazon, Twitter and TikTok — to take a series of steps to make their platforms and search engines safer for Europeans or face billions of dollars in fines.
The DSA, approved by EU institutions last year, is considered the first attempt in the democratic world to rein in Big Tech's lucrative but deeply damaging practices of bombarding users with personalized ads and allowing disinformation, hate speech, falsehoods and illegal content to mushroom across the internet.
“Europe is now effectively the first jurisdiction in the world where online platforms no longer benefit from a ‘free pass’ and set their own rules,” said Thierry Breton, an EU commissioner overseeing enforcement of the digital law, in comments to reporters this week.
“Technology has been ‘stress testing’ our society,” he said. “It was time to turn the tables and ensure that no online platform behaves as if it was ‘too big to care.’”
Still, it remains to be seen how effective the law can be in making the internet safer and curbing Big Tech's reliance on algorithms and business models that profit from collecting personal data.
Meanwhile, the rules already face legal challenges from Amazon and Zalando, a German online fashion retailer. Elon Musk, owner of the platform X, formerly Twitter, is expected to fight the regulations because he sees them as stifling freedom of expression.
In Europe, many hope the law will act as a much-needed antidote to the most sinister and poisonous side-effects of online life.
“We see it as an opportunity to fix at least the most harmful problems caused by online platforms right now,” said Dorota Głowacka, a lawyer with the Warsaw-based human rights group Panoptykon Foundation. “The way it works right now on most social media platforms — the biggest ones — is quite toxic.”
Głowacka called it a landmark piece of legislation with the potential to bring about big changes, even though she feels it should have gone further in protecting users.
“It all depends very much on how the regulation will be enforced," she said, "and that is still a big question."
The law compels platforms to make it easier for users to flag harmful and illegal content, which will be assessed by teams of experts hired by the EU and its 27 member states. When prompted to take action, tech platforms will have 24 hours to remove material deemed harmful. Platforms must also have a system in place for appealing such decisions.
Additionally, Big Tech firms must routinely submit assessments about the risks posed by their platforms and work to lessen those dangers. Outside auditors will examine the companies' efforts to tackle harmful content.
To help it police the internet, the EU created a center where 30 experts will analyze whether the algorithms Big Tech firms use to moderate content, and to recommend content to users, are in line with the new internet safety law.
In another big change, the regulations make it illegal for tech platforms to use deeply personal data they collect from users — such as a person's sexual orientation, health status, religion or political affiliation — to target them with personalized ads.
“This prohibition is a win. It's the first-ever regulation that we have that basically bans certain types of data being used for advertising purposes,” Głowacka said in a telephone interview.