(CN) – As the United States continues to grapple with fake news disseminated during the lead-up to the 2016 presidential election, a new study urges journalists, academics and tech companies to examine and combat such misinformation campaigns.
In the report, published Thursday in the journal Science, researchers argue that a coordinated investigation into the underlying psychological, social and technological forces behind fake news is necessary to curb its negative impact on society.
“What we want to convey most is that fake news is a real problem, it’s a tough problem, and it’s a problem that requires serious research to solve,” said co-author Filippo Menczer, a professor at Indiana University’s School of Informatics, Computing and Engineering.
Menczer is also the founder of the IU Observatory on Social Media, a platform that offers tools for identifying automated “bots” on social media and examining the spread of fake news across these social networks. He told Courthouse News that the biggest obstacle in the battle against misinformation campaigns is access to data.
“Data is the first step in order to study what is going on,” he said. “We have some data from Twitter, that’s what we are studying and other people are studying, but the other platforms are more closed.
“The data can help us understand how many people are exposed and what are the factors that make us vulnerable.”
The report estimates that as many as 60 million bots may operate on Facebook and up to 48 million on Twitter; the latter figure is based on a recent study by Menczer and colleagues. The new research also cites analysis finding that the average American likely encountered one to three fake news articles in the month before the 2016 election.
“The spreaders of fake news are using increasingly sophisticated methods,” Menczer said. “If we don’t have enough quantifiable information about the problem, we’ll never be able to design interventions that work.
“This paper is really a call to groups across the globe – academics, journalists and private industry – to work together to attack this problem.”
The team says tech companies like Twitter, Facebook and Google have an “ethical and social responsibility transcending market forces” to aid scientific research on cyber misinformation.
Menczer believes two factors distinguish online fake news from traditional misinformation campaigns: the sheer number of online news sources, and the content-ranking algorithms that social media platforms and other sites use.
“What is different, in my opinion, is that online social media and social networks and platforms are a bit easier to manipulate than what we had before,” he said. “The algorithms that determine what you see and what you don’t see can be gamed.
“What we had before was a smaller number of information sources, so people would know who they were talking to or who was writing the news that they were reading. But the new media landscape has fragmented, and everybody can generate information online.”
The researchers also note that fake news extends beyond the political sphere, reaching issues not typically regarded as political, including the stock market and public health topics such as vaccination and nutrition. They add that the problem is particularly challenging because some studies have found that repeating a lie in an attempt to debunk it can actually ingrain it further in a person’s mind.
To address this, the authors recommend extensive research into the effectiveness of high school courses that teach students to identify fraudulent news sources. They also advocate specific changes to the algorithms that increasingly shape people’s access to information online.
“The challenge is there are so many vulnerabilities we don’t yet understand and so many different pieces that can break or be gamed or manipulated when it comes to fake news,” Menczer said. “It’s such a complex problem that it must be attacked from every angle.”
The research was funded by the John F. Kennedy School of Government at Harvard University and the Economic and Social Research Council in the United Kingdom.