Social media, for all of its pros and cons, has given everyone a seat at the table. It has allowed each user to voice their opinion and cast their proverbial vote on any issue they see fit. In this idealistic sense, it can be seen as a thoroughly democratic medium – an open forum where the views of the individual are counted, discussed, and critiqued.


But all is not right in this technocratic utopia. As with politics throughout the ages, the medium of democratic expression has been infiltrated and corrupted. In this new age, the culprits are called Twitterbots, and they may have just had a significant impact on two of this century’s biggest political upheavals.

Twitterbots are software applications designed to run automated tasks on the social media site. They come in many forms, from spam bots that hijack your tweet with promotional links to bots that offer comedic retorts to a common phrase or hashtag in your post. All in all, nothing the relatively internet-savvy need worry about.

However, a recent study on the use of bots during the Brexit campaign revealed rather more insidious results. The study, conducted by researchers Philip Howard and Bence Kollanyi, found that an alarming number of bots were active in the run-up to voting day. Between June 5 and June 12, 2016, some 1.5 million tweets were posted on the subject of Brexit. Of them, 54% were pro-Leave, 20% pro-Remain, and 26% neutral.

Britain opted to leave the EU in 2016 – could Twitter have influenced opinion?

The report shows that a third of the tweets (half a million) came from fewer than 1% of the 313,000 accounts sampled. This is an alarming statistic: it is improbable for such a small fraction of human beings to produce so many tweets in that space of time. It becomes apparent that bot technology was in play.
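As a rough sanity check, that concentration can be reproduced with a little arithmetic. The snippet below uses only the headline figures quoted above (not the study's raw data), and treats the "less than 1%" figure as exactly 1% for simplicity.

```python
# Back-of-the-envelope check of the concentration reported in the study.
# The figures are the headline numbers quoted in this article, not raw data.
total_tweets = 1_500_000          # Brexit tweets, June 5-12, 2016
tweets_from_top_slice = 500_000   # "a third of the tweets (half a million)"
accounts_sampled = 313_000
top_slice = int(accounts_sampled * 0.01)  # fewer than ~3,130 accounts

share = tweets_from_top_slice / total_tweets
per_account = tweets_from_top_slice / top_slice

print(f"{share:.0%} of tweets from roughly {top_slice:,} accounts")
print(f"about {per_account:.0f} tweets per account in a single week")
```

That works out to roughly 160 tweets per account over the week – well beyond the activity of a typical human user, which is why automation is the natural explanation.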

And it wasn't just in Britain that this new technology was being deployed. During the fierce 2016 election contest between Donald Trump and Hillary Clinton, both sides were accused of using bot technology in their social media campaigns.

Trump, no stranger to Twitter and a regular provocateur on the platform, has some 20 million followers on the site. His tweets always garner attention, and a staggering number of retweets, whether they are the furious ramblings of a man scorned or a generic advert for an upcoming television appearance.

Such interest in trivial tweets prompted Patrick Ruffini, a political digital consultant, to post a spreadsheet detailing nearly 500 pro-Trump accounts that happened to post the same message, on the same subject, in unison. The accounts called for voters to file FCC complaints against, of all things, robocalls from the Cruz campaign. The irony would be comedic, were it not for its tragic implications for the nature of democracy. If that weren't enough, it turned out that many of those accounts had previously tweeted on the subject of "Marketing Tips for B2B Websites" – another indicator that they were, indeed, Twitterbots.

This ugliness doesn't reside solely on the red half of the American political spectrum. Clinton has also been accused of having over a million 'fake' followers – a category spanning inactive Twitter users, fake accounts, and spam bots.

During the third presidential debate, which closely followed the leaked footage of Trump boasting of sexual assault, his Twitterbots shared Trump-related content at a rate that outnumbered Clinton's bots seven to one. It raises the question: in what other day and age could a man be heard saying such words just months before going on to win the most important job in the world?

If bots are to play a role in our perception of popularity, then it is important that we begin to understand why this could be a problem.

The French philosopher René Girard developed the idea of mimetic theory, a notion built on the premise that human culture is ultimately imitative – that, at its most basic level, we are influenced by each other. He is not the only thinker to study the 'herd mentality' of human beings, nor to analyze the problematic nature of 'the crowd' – Kierkegaard, Nietzsche, and Freud all offered similar analyses.

The concept of the 'information cascade' puts human decision-making in direct relation to that of others. As economist David Easley puts it, a cascade develops when people "abandon their own information in favor of inferences based on earlier people's actions".
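To make the mechanism concrete, here is a minimal sketch of a cascade in Python – a toy sequential-decision model in which each agent weighs a noisy private signal against the visible choices of earlier agents. It illustrates the general idea rather than Easley's formal model, and every name and parameter in it is invented for the example.

```python
import random

def simulate_cascade(n_agents=20, signal_accuracy=0.7, true_state=1, seed=42):
    """Toy sequential-decision model of an information cascade.

    Each agent receives a noisy private signal about the true state but also
    sees every earlier agent's public choice. Once the visible majority
    outweighs their single private signal, they imitate the crowd --
    'abandoning their own information', in Easley's phrase.
    """
    random.seed(seed)
    choices = []
    for _ in range(n_agents):
        # Private signal: correct with probability `signal_accuracy`.
        signal = true_state if random.random() < signal_accuracy else 1 - true_state
        # Tally the public history of earlier choices.
        ones = sum(choices)
        lead = ones - (len(choices) - ones)
        # If earlier choices lean more than one vote either way, they swamp
        # the single private signal and the agent simply follows the crowd.
        if lead > 1:
            choice = 1
        elif lead < -1:
            choice = 0
        else:
            choice = signal
        choices.append(choice)
    return choices

if __name__ == "__main__":
    print(simulate_cascade())
```

Run with these defaults, the first few choices quickly lock in the rest of the sequence regardless of what private signals later agents receive – which is precisely the worry when bots supply those early, highly visible 'choices'.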

It is easy to see the apparent dangers when you apply the above theories to something like Twitter, a world of supposedly quantifiable popularity.

If we return to the Brexit statistic – 54% of tweets pro-Leave, 20% pro-Remain, and 26% neutral – we can see that if the undecided opinion represented by that neutral 26% could be swayed either way, it would decide the referendum. During that week, the pro-Leave bots were three times as active as the pro-Remain bots. Britain voted to leave.


Now, to look at the American presidential race – both Trump and Clinton are guilty of using bots to further their campaigns, and both share similar numbers of suspected fake followers. Trump, a master manipulator, clearly wins the game of 'he who shouts loudest gets heard first'. His tweets become headline news and he is given the publicity he craves.


But this is where it gets interesting. Andrew McGill of The Atlantic ran a test using BotOrNot, a program designed to determine whether a Twitter account is likely to be an active human being or an account run by a computer. He found that each of the three major candidates at the time – Trump, Clinton, and Bernie Sanders – had more than three quarters of their retweets come from active users: human beings.
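To give a flavor of what a 'bot likelihood' test involves, the sketch below scores an account against a handful of crude red flags. This is emphatically not BotOrNot's actual method – the real tool applies a trained classifier over far more features – and the fields, thresholds, and example account here are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AccountProfile:
    """Hypothetical summary of a Twitter account's public footprint."""
    tweets_per_day: float
    followers: int
    following: int
    account_age_days: int
    default_profile_image: bool

def naive_bot_score(acc: AccountProfile) -> float:
    """Crude 0-1 'bot likelihood' built from a handful of red flags.

    Purely illustrative: BotOrNot itself uses a trained classifier
    over many more behavioral and network features.
    """
    score = 0.0
    if acc.tweets_per_day > 50:                      # hyperactive posting
        score += 0.35
    if acc.following > 10 * max(acc.followers, 1):   # follows far more than followed
        score += 0.25
    if acc.account_age_days < 30:                    # very young account
        score += 0.2
    if acc.default_profile_image:                    # no profile customization
        score += 0.2
    return min(score, 1.0)

# Example: a week-old account posting 120 times a day scores as highly bot-like.
print(naive_bot_score(AccountProfile(120, 15, 900, 7, True)))
```

Even a toy scorer makes the point that bot detection is probabilistic: an account receives a likelihood, not a verdict, so figures like the ones that follow are best read as estimates.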

But his findings also show that although Sanders generated roughly the same number of retweets per post as Trump, his were less likely to be generated by bots. Only 1.7% of the retweets of his posts failed the bot test (that is, were flagged as likely bots) – half the rate of Trump and Clinton.

This means that when a Sanders tweet was trending, it was more likely to have been driven by actual human beings – and was therefore more authentic – than one from his two rivals.

This is important because it highlights the complexities involved in analyzing social media – complexities that are all too easy to gloss over when condensing a story into a headline or snippet.

And this becomes the crux of the matter. Although a medium like Twitter has a relatively small user base (around 9.6% of the population in the UK, 24% in the USA), it is often cited by journalists and newscasters as an accurate representation of public interest and opinion. This trend will only grow as more people adopt social media as their medium for news, politics, and culture, and the complexities of fake users and misinformation will deepen.

There is currently little incentive for self-regulation, either by Twitter, whose user numbers the bots inflate, or by the politicians, whose online support the bots appear to amplify.

Inevitably, for every bot-fighting filter that is created, bot technology will only grow more sophisticated.

The real solution is information. Both Brexit and the American election need to be catalysts for the public to become educated about this technology, thereby forcing the media platforms and politicians to take note and begin to regulate themselves. Journalists will need to be thorough in their research and resist the temptation to cherry-pick numbers that paint a false picture. It is up to us, as individuals, to make others aware that this technology exists.

A populace equipped with the ability to make informed decisions within our technological minefield will become the new cornerstone of our democracy.
