In the digital age, social media has become a vital platform for activism, with individuals and groups rallying support for issues like climate change. However, recent research reveals that “social bots” - automated accounts designed to imitate human behavior - are significantly altering these online discussions. By focusing on Extinction Rebellion (XR), a prominent climate activist movement, researchers examined how bots interact with human users, influencing sentiment and potentially skewing public perception of climate activism. Surprisingly, the study found that while these bots may decrease positive sentiment, they don’t necessarily reduce engagement levels, meaning users continue to post and interact, even if their tone changes.
What Are Social Bots and Why Do They Matter?
Social bots are automated accounts programmed to post and interact on social media, often with the intent of mimicking real users. These bots can rapidly disseminate information, retweet each other’s posts, and even target specific users, creating the illusion of widespread support or dissent. In political contexts, bots are frequently deployed to shape public opinion subtly. For instance, during elections and large protests, social bots can amplify certain voices or stir up controversy, sometimes obscuring genuine human perspectives.
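To make the idea of "mimicking real users" concrete, here is a minimal sketch of the kind of rule-of-thumb bot scoring sometimes used as a first pass. The features and thresholds below are illustrative assumptions, not the detector used in the study; research-grade detection relies on trained classifiers with far more signals and labeled data.

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float   # posting frequency
    retweet_ratio: float    # share of activity that is retweets
    followers: int
    following: int
    default_profile: bool   # unchanged default avatar/bio

def heuristic_bot_score(acct: Account) -> float:
    """Crude 0-1 bot likelihood from a few behavioral signals.

    Purely illustrative: real detectors are trained classifiers,
    not hand-tuned thresholds like these.
    """
    score = 0.0
    if acct.tweets_per_day > 50:   # inhumanly high posting volume
        score += 0.35
    if acct.retweet_ratio > 0.9:   # almost never posts original content
        score += 0.25
    if acct.following > 0 and acct.followers / acct.following < 0.1:
        score += 0.2               # follows many, followed back by few
    if acct.default_profile:       # no profile customization
        score += 0.2
    return min(score, 1.0)

# A high-volume, retweet-only account with a default profile
suspect = Account(tweets_per_day=120, retweet_ratio=0.97,
                  followers=40, following=2000, default_profile=True)
print(heuristic_bot_score(suspect))  # 1.0 (all four signals fire)
```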
In this study, researchers explored how bots participated in the climate change conversation around XR protests on Twitter (now X) in 2019. Bots accounted for nearly half of the tweets related to XR, using various tactics to influence users’ views on climate activism. By comparing human users who interacted directly with bots to those who didn’t, researchers uncovered distinct differences in sentiment and engagement.
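As a rough illustration of that comparison, the sketch below contrasts average sentiment between bot-exposed and unexposed users with a two-sample t-test. The data, column names, and choice of test are assumptions made for illustration, not the paper’s actual pipeline.

```python
import pandas as pd
from scipy import stats

# Hypothetical per-user data: mean sentiment of each user's XR tweets
# and whether the user directly interacted with a flagged bot.
users = pd.DataFrame({
    "user_id":        [1, 2, 3, 4, 5, 6],
    "mean_sentiment": [0.42, -0.10, 0.35, -0.25, 0.50, -0.05],
    "interacted_bot": [False, True, False, True, False, True],
})

exposed   = users.loc[users.interacted_bot, "mean_sentiment"]
unexposed = users.loc[~users.interacted_bot, "mean_sentiment"]

# Welch's two-sample t-test on mean sentiment between the two groups
t_stat, p_value = stats.ttest_ind(exposed, unexposed, equal_var=False)
print(f"exposed={exposed.mean():.2f}, "
      f"unexposed={unexposed.mean():.2f}, p={p_value:.3f}")
```

In practice, a before-versus-after design across both groups would control better for baseline differences between users who do and don’t encounter bots; the simple group contrast above is only the skeleton of the idea.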
How Social Bots Affect Sentiment and Engagement
One of the study’s most striking findings is that bots generally decrease positive sentiment among users who interact with them. Human users with neutral or supportive views of climate activism expressed more negative sentiment after interacting with bots, while those who opposed XR showed minimal change. This suggests that bots might strategically target users who are undecided or supportive, aiming to reduce the overall positivity in discussions about climate activism.
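The study’s own sentiment measure is not reproduced here, but lexicon-based scorers are one common way to quantify tone at the tweet level. Below is a minimal sketch using NLTK’s VADER; the example tweets are invented.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

# The compound score runs from -1 (most negative) to +1 (most positive)
supportive = "Proud to march with XR today - real hope for change!"
hostile    = "These pointless protests just block traffic and help no one."
print(sia.polarity_scores(supportive)["compound"])  # positive score
print(sia.polarity_scores(hostile)["compound"])     # negative score
```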
Interestingly, while bots affected sentiment, they didn’t necessarily decrease user engagement. In fact, political bots - known as “astroturfing bots” because they create a false impression of grassroots support - tended to provoke more engagement, leading users to post more frequently after an encounter. In contrast, interactions with less politically motivated bots, such as spam accounts, often led to decreased activity, suggesting that the effect on engagement depends on the type of bot a user encounters.
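A sketch of how such an engagement comparison might look: posting-rate changes grouped by bot type. The numbers, column names, and the before/after windows are invented for illustration.

```python
import pandas as pd

# Hypothetical encounter log: each row is one human user, with their
# tweets per day in the week before and after first bot contact.
encounters = pd.DataFrame({
    "user_id":     [1, 2, 3, 4, 5, 6],
    "bot_type":    ["astroturf", "astroturf", "astroturf",
                    "spam", "spam", "spam"],
    "rate_before": [3.0, 1.5, 2.2, 4.0, 2.8, 3.5],
    "rate_after":  [4.1, 2.4, 2.9, 2.1, 1.9, 2.0],
})
encounters["delta"] = encounters["rate_after"] - encounters["rate_before"]

# Mean engagement change per bot type: positive after astroturfing
# encounters, negative after spam encounters (in this toy data).
print(encounters.groupby("bot_type")["delta"].mean())
```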
Uncovering Patterns in Bot-Human Interactions
To understand how bots shape conversations over time, researchers studied “information cascades,” where users retweet or reply to the same message, amplifying it across the network. Bots were found to be active participants in these cascades, often influencing their direction and sentiment. During peak moments of protest-related discussion, bots were highly engaged, retweeting each other to reinforce specific messages. This creates an echo chamber effect, where bots bolster particular narratives, often making it challenging for users to distinguish between genuine support and manufactured sentiment.
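To illustrate what measuring bot participation in cascades can involve, the following sketch groups retweets by the root tweet they amplify and computes each cascade’s bot share. The account names, input format, and bot list are all assumptions, not the study’s data.

```python
from collections import defaultdict

# Hypothetical retweet records: (retweeter, root_tweet_id)
retweets = [
    ("bot_a", "t1"), ("bot_b", "t1"), ("human_x", "t1"),
    ("human_y", "t2"), ("bot_c", "t2"), ("human_z", "t2"),
]
bot_accounts = {"bot_a", "bot_b", "bot_c"}  # output of a bot detector

# Group retweeters into cascades by the root tweet they amplify
cascades = defaultdict(list)
for retweeter, root in retweets:
    cascades[root].append(retweeter)

# Fraction of each cascade driven by flagged bots
for root, participants in cascades.items():
    bot_share = sum(u in bot_accounts for u in participants) / len(participants)
    print(f"{root}: size={len(participants)}, bot_share={bot_share:.0%}")
```

On real data the same grouping runs over millions of retweets, but the point stands: once accounts are flagged, a cascade’s bot share falls out of a simple count.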
By influencing these information cascades, bots can control the flow of conversations on social media, nudging users toward more extreme viewpoints or fostering negativity around sensitive topics like climate change. For users passionate about environmental issues, this distortion can be particularly discouraging, affecting morale and potentially impacting offline activism.
Implications for Activism and Social Media Policy
The study’s findings highlight the need for greater transparency on social media platforms. As bots become increasingly sophisticated, identifying them becomes more difficult, yet their impact on public discourse remains profound. With large language models making bots even harder to distinguish from real users, platforms like X may need to enforce stricter policies to monitor bot activity and protect authentic engagement.
For activists, understanding bot influence could help in developing strategies to counteract negative effects and maintain momentum. As bots appear to be more effective at influencing undecided or neutral users, activists might focus on positive reinforcement and targeted messaging to uphold public support. Additionally, policymakers could consider legislation requiring platforms to disclose the presence of bots, especially in political and social issue discussions.
This study underscores the growing impact of automated accounts in shaping public opinion, showing that bots don’t just spread information but can subtly alter the mood and tone of entire movements. As climate activism and other social movements rely increasingly on digital platforms, recognizing and mitigating the influence of bots will be essential to maintaining genuine, impactful discussions online.