(CN) — A method for identifying behavioral differences between human-run social media accounts and those managed by artificial intelligence could shape machine-learning algorithms that detect bot-operated accounts, according to a study released Tuesday.
Bot-controlled social media accounts, unlike those operated by humans, are managed by artificial intelligence software. Such accounts can serve various functions online, ranging from news aggregation to automated customer service for companies.
Some bot accounts are hard to distinguish from human ones, a characteristic that lawmakers and democracy watchdog groups note in their complaints regarding the role bots have played in election campaigns and public opinion manipulation schemes.
In the U.S., bot-managed accounts promoted postings from hate groups such as white nationalist organizations, while in Israel bots promoted messaging from the leading political party and disseminated posts that smeared a leading political figure.
But University of Southern California researchers have developed a method of identifying social media behavior that could make it easier to detect bots, according to a study published in the journal Frontiers in Physics.
Human social media users’ online activity displays a “signature” or short-term behavioral trends that are absent in bot-operated accounts, according to the study.
That signature is then plugged into an algorithm that can develop bot detection software, according to a statement by study co-author Emilio Ferrara, computer science professor at the USC Information Sciences Institute.
“Remarkably, bots continuously improve to mimic more and more of the behavior humans typically exhibit on social media,” Ferrara said in a statement accompanying the study. “Every time we identify a characteristic we think is prerogative of human behavior, such as sentiment or topics of interest, we soon discover that newly-developed open-source bots can now capture those aspects.”
The study tracked activity of bot and human Twitter accounts during the 2017 French presidential election, Ferrara said in an email, including by measuring their propensity to engage with other users and by the amount of content they posted.
Researchers were able to draw distinctions between bot and human accounts by analyzing the length of tweets, retweets and replies, as well as the quantity and quality of each account's interactions with other Twitter users.
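The article does not publish the study's actual feature set, but the kind of per-account summary it describes can be sketched roughly as follows. The field names and session data here are hypothetical, purely for illustration:

```python
# Hypothetical sketch of per-account behavioral features like those the
# article describes (tweet length, replies, retweets, post counts).
# The field names are illustrative, not the study's actual schema.
from statistics import mean

def account_features(tweets):
    """Summarize one account's session into simple behavioral features."""
    return {
        "avg_tweet_len": mean(len(t["text"]) for t in tweets),
        "reply_ratio": sum(t["is_reply"] for t in tweets) / len(tweets),
        "retweet_ratio": sum(t["is_retweet"] for t in tweets) / len(tweets),
        "n_posts": len(tweets),
    }

# A made-up three-tweet session for one account.
session = [
    {"text": "Morning everyone!", "is_reply": False, "is_retweet": False},
    {"text": "Agreed!", "is_reply": True, "is_retweet": False},
    {"text": "RT: breaking news", "is_reply": False, "is_retweet": True},
]
print(account_features(session))
```

Features like these would then be fed to a classifier; the study's real pipeline presumably uses a richer set.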
Machine learning techniques were then used to train two bot detection systems: one with the behavioral data plugged into it and one without, in order to establish a baseline for the study.
Over time, users’ activity formed a behavioral signature that was used in a classification system for bot detection, the study said.
Humans were more likely to increase their interaction with other social media users over the course of a session, including by replying to and retweeting other accounts, researchers found.
Human-operated accounts also posted less content over time and showed a decrease in the average length of tweets, the study found.
Researchers speculate that these behavioral trends reflect humans’ inability to sustain complex activity, such as continuously tweeting original content, as they tire near the end of long online sessions.
As sessions continued and Twitter users were exposed to more content, humans were more likely to interact with tweets whereas bots were not affected by the increased exposure, the study found.
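One simple way to capture a within-session trend of the kind described above is to fit a slope to some behavioral measure (say, tweet length) against its position in the session. The sample values below are invented; only the slope idea is illustrated:

```python
# Sketch of the within-session trend idea: fit a least-squares slope to
# tweet length against position in the session. The sample sessions are
# made up; a human-like session drifts shorter, a bot-like one stays flat.

def slope(ys):
    """Least-squares slope of ys against their indices 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

human_tweet_lens = [120, 100, 85, 60, 40]   # shrinking over the session
bot_tweet_lens = [90, 90, 90, 90, 90]       # flat: no fatigue effect

print(slope(human_tweet_lens))  # negative: human-like signature
print(slope(bot_tweet_lens))    # zero: bot-like
```

A negative slope matches the human pattern the study describes (less and shorter content late in a session), while a near-zero slope is what the article attributes to bots.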
The bot detection system that had behavioral signature data plugged into it significantly outperformed the baseline system in detection accuracy, the study said.
Researchers said in a statement the study demonstrates social media behavioral signatures can be used to improve existing bot detection software.
“Bots are constantly evolving — with fast paced advancements in AI, it’s possible to create ever-increasingly realistic bots that can mimic more and more how we talk and interact in online platforms,” Ferrara said. “We are continuously trying to identify dimensions that are particular to the behavior of humans on social media that can in turn be used to develop more sophisticated toolkits to detect bots.”
Ferrara said researchers focused exclusively on Twitter users and that the public should not be wary of all bot-controlled accounts on the platform.
“That an account is a bot, in itself, is not necessarily an issue,” Ferrara said. “What matters is whether that bot is being adopted as part of an influence operation, such as that carried out in 2016 by the Russian Internet Research Agency: it’s well documented that they adopted over fifty thousand bots.”