Countless bots exist in the digital environment, from traditional spam bots that aggregate news content and distribute links, to financial bots that advertise commercial products or attempt to influence financial markets. On social media platforms, automated accounts known as social bots are programmed to share certain content and interact with other users. This automation, however, also allows bots to spread disinformation and manipulate online discussions.
Throughout former U.S. President Donald Trump’s first impeachment, social media platforms were an information battlefield between supporters and opponents of the president, and social bots played an increasingly important role. Although they represent less than 1 percent of all users, bots posted over 30 percent of all impeachment-related content on X, formerly known as Twitter, according to the study “Bots, disinformation, and the first impeachment of U.S. President Donald Trump,” led by Systems Engineering Ph.D. student Michael Rossetti and Yale University collaborator Tauhid Zaman.
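To put those two figures together, a back-of-the-envelope calculation using only the article’s reported bounds (bots are under 1 percent of users yet produced over 30 percent of the content) shows how much more active the average bot is than the average human account; the 1 percent and 30 percent values here are taken as approximate inputs, not exact study results:

```python
# Figures reported in the article (treated as approximate bounds):
bot_user_share = 0.01      # bots are <1% of users
bot_content_share = 0.30   # bots posted >30% of impeachment tweets

# Tweets per bot relative to tweets per human user:
# (bot content / bot users) divided by (human content / human users)
amplification = (bot_content_share / bot_user_share) / (
    (1 - bot_content_share) / (1 - bot_user_share)
)
print(round(amplification, 1))  # ≈ 42.4
```

In other words, under these assumptions the average bot in the dataset would be roughly forty times as prolific as the average human user, which is what lets so few accounts dominate so much of the conversation.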
The study, published in “PLOS One” in May 2023, aimed to detect automated social media accounts and assess their influence on a given political discussion, focusing specifically on the first impeachment of President Trump. To accomplish this, the team collected over 67.7 million impeachment-related tweets from 3.6 million users. Beyond the bots’ sheer prevalence, they found that bots share lower-quality news (i.e., more disinformation) than humans but use less toxic language. Rossetti says this finding suggests their goal is to persuade, not agitate.
“I have been interested in public opinion research since I worked as a polling data analyst on a political campaign in 2012,” Rossetti said. “I am now interested in assessing political sentiments in online social networks. However, since the bots allow a small number of individuals to have an oversized influence on conversations, we need ways to cut through the noise to understand people’s opinions online.”
By quantifying the bots’ daily impact, Rossetti and Zaman found that it peaks on days with politically charged events, suggesting bots may be increasing online polarization. During the first impeachment there was considerable partisan agitation online, which amplified the bots’ impact. President Trump himself benefited most, receiving more than twice as many retweets from bots as the next-highest beneficiary. And while pro-Trump bots outnumbered anti-Trump bots, the researchers found that, on a per-bot basis, the two groups had similar impact.
In recent years, bad actors looking to spread targeted political messages online have taken advantage of bots’ ability to manipulate discussion and sway public opinion. One extreme yet popular disinformation campaign is the QAnon conspiracy theory. The researchers found that the prevalence of bots among QAnon supporters is roughly ten times the norm, suggesting malicious actors may be using artificial accounts to amplify QAnon content. However, they also found that QAnon bots had less impact than typical bots because of the homophily of the QAnon follower network: the disinformation spreads mainly within online echo chambers.
Overall, bots’ outsized reach, high activity levels, and propensity to share news from low-quality sources remain causes for concern. The study demonstrates how a small number of bots can amplify specific stories or narratives, pushing them to a large audience. By raising awareness of bots and deepening understanding of their behaviors and goals, Rossetti’s work is helping to reduce their influence in online discussions.