Social media giants' layoffs signal opportunity for online misinformation, bad-faith attacks

Experts say major workforce cuts at social media companies could create growing opportunities for misinformation and the impersonation of officials online.

SAN FRANCISCO (CN) — Experts say that as major social media platforms from Twitter to Alphabet slash their workforces and lay off human content moderators, the likelihood of more online misinformation and less accurate information is high, coming on the heels of another tense U.S. election.

At Twitter, Elon Musk has booted half of the full-time staff and an untold number of contractors responsible for content moderation, and more than 1,000 additional employees may have quit after being told to commit to “Twitter 2.0” and longer work hours or leave with three months’ severance pay. Twitter’s image, along with Musk’s other properties such as Tesla, faces financial harm from his decision to fundamentally change the site’s business model built on “verified” blue check marks next to the usernames of verified individuals. When Musk announced that anyone paying $8 for a Twitter subscription could add a blue check mark to their account, a large number of fake accounts popped up.

Such drastic changes to a platform concern experts who are also watching Meta lay off 11,000 people, about 13% of its workforce, at a company where content moderation has been a major issue since the Trump administration and the proliferation of election disinformation before and after the Jan. 6 U.S. Capitol insurrection. Snap, the owner of Snapchat, also recently laid off 1,000 workers. And Google’s parent Alphabet Inc. is rumored to be preparing to lay off 10,000 employees, according to Business Insider, as Google suspended hiring and reportedly told some employees to “shape up or ship out” if they do not meet expectations.

Public information officers who operate government accounts online are cautiously waiting out the turmoil and urging the public to verify that the accounts appearing in their timelines really are official, according to the Associated Press.

Stephen Farnsworth, a political science professor at the University of Mary Washington, said government officials will have to work harder to prove they are the authority figures they claim to be. Otherwise, social media websites will lose credibility.

And while private social media companies can set their own moderation policies, he said, there are clear repercussions when companies allow hate speech or reinstate accounts previously banned for their behavior. As staff cuts hit these platforms, people who want to spread hate speech have more room to “flood the zone” with their messages.

“The word is out on far right platforms and elsewhere that content moderation is going to be loosened, and the volume of hate speech has increased as a result of a looser control,” he said. 

Jane Kirtley, a law professor at the University of Minnesota Law School, said social media’s utility is driven by its users, and ruining the experience destroys that utility and value for everyone.

“Without that additional reason to exist as a company, you're really left with something that looks like it has no moral center,” Kirtley said. “It seems to me they (CEOs) do have a moral obligation to try to separate the wheat from the chaff and ensure the info out there is factual and truthful, verifiable. But that takes money, resources and people, not just AI. I don't think they are prepared to invest the money in that.”

Chimène Keitner, an international law professor at UC Law San Francisco, said that although Section 230 of the Communications Decency Act continues to shield platforms from legal liability for certain decisions because it stipulates that they are not publishers, labor laws and other legal obligations still bind social media companies in the United States and in other countries.

“That said, these regulations fall far short of compelling platforms to promote, rather than undermine, public safety and well-being,” Keitner said. 

“It's about the vulnerability of our information ecosystem (and other core societal institutions) to distortion — by the unrealistic expectations and demands of investors, and by the whims of individuals who lack the self-awareness and sense of social responsibility to behave thoughtfully, ethically, and with due regard for others.”

Keitner added that to work properly, the state’s regulatory process must force companies to “internalize the negative social costs of their activities.”

“The law hasn't yet figured out how to create sufficient incentives for socially responsible content moderation in the United States, let alone in other countries with different political systems,” she added. “Just throwing more money or more people at the problem will not always produce a better result, but eliminating core teams without a thoughtful, carefully developed alternative strategy is bound to create a lot more harm in the world that could and should have been avoided.”

Coye Cheshire, a professor at the UC Berkeley School of Information, said increased reliance on automated moderation systems over dedicated human moderators is a logical shift as platforms grow exponentially. But what is lost in the process is the capacity of oversight teams to do that work properly and efficiently, he said, as major staff reductions complicate the development of “more restorative approaches to resolve online harms between people.”

“Some special disinformation problems that are unique to video and image content, such as deep fakes, are increasingly difficult to detect without sophisticated software tools and human verification processes,” Cheshire said. 

“While I do think the same types of moderation problems exist across various platforms, some platforms are simply more likely to have more problems of a particular type as a function of their primary function and design. Some of the most dangerous, vile and hurtful content is inherently contextual, and resolving problems with such content requires human conversation and understanding.”

Cheshire said Twitter’s case represents a “fascinating moment” for social media companies to confront questions about what forms of governance different people will accept when using a social media product.

“To build sustainable trust in social platforms and their governance, we cannot solely be concerned with what technology is created, we must focus on how those technologies and solutions are created,” he added.

There is also anxiety about whether alternatives to the current mainstream social media, like Hive Social and Mastodon, are viable. Media experts at Nieman Lab said an open-server platform with no central moderator is a much different experience, as each user chooses a community and agrees to play by that community’s rules. Journalists could benefit from this more open style of communication, as could the broader online discourse ecosystem, the writers argued.

However, Farnsworth said that because social media companies have offered spaces for democratization and organizing, he does not think “bottom-up movements” will stop using the major platforms.

Kirtley said she would advise people to “Reconsider using Twitter and other social media platforms unless you are convinced that they are transparent about their moderation and verification processes (if any) and that they have adequate technical staffing to protect the platform and user from hackers, who potentially can alter tweets and access personal information.”
