Wednesday, December 7, 2022
Courthouse News Service

Supreme Court takes up challenge to protections for tech companies  

The high court will decide whether social media platforms can be sued when their algorithms recommend terrorist content.

(CN) — The U.S. Supreme Court agreed on Monday to hear a case later this term challenging a law giving tech companies sweeping legal immunity from lawsuits over their user-generated content, setting the high court up for a decision that could broadly impact the future of social media.

The question before the court hinges on whether Google and other tech platforms are protected under Section 230 of the 1996 Communications Decency Act even when their algorithms recommend content from third parties.

Originally enacted to give online platforms the ability to moderate content on their websites while generally exempting them from legal liability for posts made by users, the law has come under fire in recent years from lawmakers of both major parties who claim Big Tech companies have abused the protection.

The justices granted certiorari in a case brought against Google by the family of Nohemi Gonzalez, a 23-year-old U.S. citizen who in 2015 was one of 129 people killed in a series of linked Islamic State terrorist attacks in Paris. The lawsuit alleged that Google, which owns YouTube, was partially responsible because it had allowed the terrorist group to post radicalizing videos and recommended those videos to other users.

The Gonzalez family has argued that through YouTube, Google “provided material assistance to” the Islamic State in violation of the Anti-Terrorism Act. They alleged that YouTube’s recommendations were “critical to the growth and activity” of the terrorist group.

“Google selected the users to whom it would recommend ISIS videos based on what Google knew about each of the millions of YouTube viewers, targeting users whose characteristics indicated that they would be interested in ISIS videos,” the plaintiffs’ petition for certiorari says.

Gonzalez’s family members argue that Google recommended ISIS videos to users who were “likely to be susceptible to the calls for terrorism which those videos conveyed” even as ISIS recruits were in combat with U.S. forces and “attempting mass killings” in the U.S. and Europe.

In a brief opposing certiorari, attorneys for Google argued that the only link between the terrorists and YouTube was that one of Gonzalez’s attackers was a YouTube user who had once appeared in an ISIS propaganda video.

“The complaint does not allege that any terrorists saw such a recommendation or that such recommendations had any connection to the Paris attack. Nor does the complaint explain which of YouTube’s user-input-driven features petitioners challenge or why,” the brief says.

A California federal judge previously granted Google’s motion to dismiss the action, finding that Section 230 barred the claims because the videos were produced by ISIS, not by Google itself. The Ninth Circuit upheld the dismissal, ruling that ISIS, not YouTube, was the information content provider that created the videos.

The Supreme Court also agreed Monday to hear a similar case against Twitter, Facebook and YouTube over extremist content published on their platforms.

Although the high court has never ruled directly on the scope of Section 230, justices have scrutinized the provision in earlier opinions.

Conservative Justice Clarence Thomas wrote in an October 2020 opinion that the law should be reined in, saying that internet platforms have received “sweeping protection” from courts that read Section 230 more broadly than Congress intended.

Thomas warned that “extending [Section 230] immunity beyond the natural reading of the text can have serious consequences.”

Members of Congress have repeatedly tried to reform or repeal the law in recent years, most recently through bills such as the Justice Against Malicious Algorithms Act and the EARN IT Act; the latter, which aims to hold tech companies responsible for posts exploiting children, passed the Senate Judiciary Committee unanimously in February.

Industry groups and civil liberties advocates have opposed many of the proposed laws, warning that they could have a chilling effect on freedom of expression online and undermine consumer privacy.

A representative for Google did not respond to a request for comment.
