Press "Enter" to skip to content

Fake News is Real: The Rise of Computational Propaganda and Its Political Ramifications

“Fake News” — two words that have become synonymous with Donald Trump and his 2016 bid for the presidency. Some wholeheartedly believed the claim, some cast it aside as irrelevant, and others adamantly denied it. Yet President Trump was right: fake news is real, though not necessarily in the way most imagine.

In the 2016 election, nearly 20 percent of all political tweets were written not by human beings but by bots. One such bot-authored tweet reads: “#NeverTrump Those fake, nonsense polls are actually real, good polls, Trump’s spokesman insists — Campaign of lies.” It is just one example; throughout the 2016 election, both President Trump and Secretary Clinton had bots tweeting in their favor at any given moment.

Spreading across all social media platforms — commentary about debates, posts about the candidates, critiques of hot-button issues — “fake news” and the dissemination of computational propaganda have become the norm rather than the exception. Anyone from a 15-year-old on a personal computer to a super PAC with an ambitious agenda and funding to match can design and program a bot to do just about anything.

On Twitter, computational propaganda takes the form of accounts that automatically tweet and retweet commentary on the election process. So what is the issue with this? According to Clayton A. Davis, a researcher at Indiana University, Twitter bots can “act as a megaphone on social media.” Davis explains: “We as humans tend to say, the more people talking about something, the more likely it is to be true. We know that that’s false, but that’s just how we work. You not only add volume, but you lend credibility to the message, when in reality, it’s really only one person.” By artificially inflating the volume of conversation around a claim, bots make people more likely to believe it is true, lending credibility to whatever message their controllers or creators intend to disseminate. For example, Andrés Sepúlveda, a political hacker with a 30,000-strong Twitter bot army, claims to have rigged elections throughout Latin America for the past decade, most notably the election of Enrique Peña Nieto as President of Mexico. Though an extreme example, Sepúlveda’s effectiveness at shaping public opinion illustrates the potency of computational propaganda in altering political processes, particularly in states with democratic elections.
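
To see why the “megaphone” matters, consider a toy back-of-the-envelope model in Python. Every number below is invented for illustration (the 30,000 figure echoes Sepúlveda’s claimed bot army); the point is simply how thoroughly a single operator can dominate the visible conversation:

```python
# Toy model of the "megaphone" effect Davis describes: one operator
# controlling many automated accounts can swamp organic chatter.
# All numbers are illustrative assumptions, not measurements.

organic_users = 10_000      # real people, each posting once about a topic
operators = 1               # a single controller, e.g. one hacker
bots_per_operator = 30_000  # size of Sepulveda's claimed bot army
posts_per_bot = 5           # each bot tweets/retweets the message 5 times

bot_posts = operators * bots_per_operator * posts_per_bot
total_posts = organic_users + bot_posts
apparent_voices = organic_users + operators * bots_per_operator

print(f"Apparent voices in the conversation: {apparent_voices:,}")
print(f"Share of posts traceable to one person: {bot_posts / total_posts:.1%}")
# -> 40,000 apparent voices; ~93.8% of posts come from a single operator
```

Neither casual readers nor trending algorithms can easily distinguish those bot posts from organic enthusiasm at a glance, which is precisely the credibility effect Davis describes.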

Figure 1: Andrés Sepúlveda, a political hacker who claims to have rigged elections throughout Latin America for the past decade. Source: Bloomberg

This dissemination of misinformation is particularly problematic when it misleadingly adds — or detracts — credibility for individual candidates in the 2016 election. For example, when bot-detection software was run on President Trump’s and Secretary Clinton’s personal accounts, about 59 percent and 50 percent of their followers, respectively, were flagged as bots. Such inflated followings lend an air of legitimacy and heightened popularity to each candidate, a legitimacy President Trump touts as proof of his popularity among American citizens. By contrast, former presidential candidate Bernie Sanders had a fake following of about 39 percent, a stark difference from the 50 percent fake following of his competitor, Hillary Clinton. Meanwhile, on the Republican side, only 17 percent of Ted Cruz’s followers and 15 percent of Marco Rubio’s followers were fake, contrasting even more sharply with Donald Trump’s 59 percent.

Could these statistics help explain why Bernie Sanders lost the primary to Hillary Clinton? Could they help paint a clearer picture of Donald Trump’s surprising surge within the Republican Party? With the rising popularity of social media platforms such as Twitter for news consumption, the answer is almost certainly yes. This is particularly clear when coupled with allegations that both Hillary Clinton’s and Donald Trump’s campaigns intentionally recruited bots to build a savvier social media presence around their ideas and campaigns. Whether these accusations have merit is unclear. One thing is certain, however: there is a difference between pro-Clinton and pro-Trump Twitter bots. While both tweeted at similar rates, pro-Trump bots were more sophisticated in design, incorporating believable, Twitter-savvy patterns such as pictures and hashtags, elements that make a tweet or account harder to identify as automated.

Likewise, on Facebook, misinformation in the form of computational propaganda is being disseminated, though it is less pervasive there than on Twitter. Despite being less common, “fake news” on Facebook is far more detrimental than bot-created Twitter traffic. While Twitter bots merely tweet or retweet, Facebook computational propaganda takes the form of entire articles, designed to look real, scattered throughout a user’s feed. Many liberal commentators have cited this phenomenon as a factor contributing to President Trump’s victory in 2016, pointing to fabricated articles claiming, for instance, that the Pope had endorsed Trump, stories that could have swayed the votes of some Facebook users in his favor. Others, however, accuse Facebook of a liberal bias, calling it a platform for the dissemination of liberal conspiracy theories and other liberal fake news. Whichever way the accusations cut, the truth remains that computational propaganda on Facebook is especially dangerous given that many Facebook users, particularly millennials, are increasingly unable to discern “fake news” from real news.

A group of researchers at Stanford University conducted a study to this effect, focusing on whether millennials can distinguish between legitimate and illegitimate news sources and articles. The answer was a resounding no. The study was spearheaded by Stanford professor Sam Wineburg, who reported that “in every case and at every level, we were taken aback by students’ lack of preparation.” In this study, roughly 80 percent of the millennials tested incorrectly identified a fake article as real, a staggering figure that suggests social media sites should bear some accountability for the content published on their platforms.

Additionally, Facebook presents another issue in the dissemination of fake news: foreign intervention disguised as domestic political activism. Facebook pages such as Secured Borders, which spread a great deal of anti-immigration rhetoric during the 2016 election, and another page called LGBT-United have been identified by federal investigators as “part of a highly coordinated disinformation campaign linked to the Internet Research Agency, a secretive company in St. Petersburg, Russia.” Addressing Facebook’s failure to stop such foreign intervention, Mark Zuckerberg, CEO of Facebook, commented that “many of these dynamics were new in this election, or at much larger scale than ever before in history.” He added that Facebook “will continue to work to build a community for all people” and do its “part to defend against nation states attempting to spread misinformation and subvert elections.”

However, the United States is not alone in this dilemma. In Russia, Ukraine, the UK, Brazil, Venezuela, and China, to name a few, the trend is the same: automated social media posts reign. During Brexit, nearly one quarter of all tweets were written by bots. In Venezuela and Russia, that share reaches nearly 30 percent — all with the intention of shaping and changing public opinion via social media.

So what can be done about this? Twitter, for one, is already rolling out new protocols to reduce bot traffic, partnering with third parties to help detect and shut down bot-run accounts. For example, Chengkai Li, a professor of computer science and engineering at the University of Texas at Arlington, has focused his research on developing a Twitter bot capable of detecting fake-news bots. Facebook, likewise, has launched an investigation, spearheaded by Mark Zuckerberg, to address fake news. Partnering with third parties and developing software of their own, both companies claim to be focusing their efforts on combating computational propaganda.
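
Bot-detection research of the kind pursued at Indiana University and UT Arlington generally relies on machine-learning classifiers trained on large sets of account features. As a much-simplified sketch in that spirit, the Python below scores an account on a few commonly cited signals; the `Account` type, thresholds, and weights are all invented for illustration and are not any real system’s rules:

```python
# Illustrative heuristic bot scorer. Real detectors are ML classifiers
# over hundreds of features; every threshold and weight here is invented.
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float
    followers: int
    following: int
    has_default_profile_image: bool
    account_age_days: int

def bot_score(a: Account) -> float:
    """Return a rough 0-1 score; higher means more bot-like."""
    score = 0.0
    if a.tweets_per_day > 50:                    # inhuman posting cadence
        score += 0.4
    if a.following > 10 * max(a.followers, 1):   # mass-follow pattern
        score += 0.2
    if a.has_default_profile_image:              # no persona built out
        score += 0.2
    if a.account_age_days < 30:                  # freshly spun-up account
        score += 0.2
    return score

suspect = Account(tweets_per_day=400, followers=12, following=2_000,
                  has_default_profile_image=True, account_age_days=9)
print(f"bot score: {bot_score(suspect):.1f}")    # -> 1.0, flag for review
```

Real systems also weigh content features; the pictures and hashtags that made pro-Trump bots harder to catch work directly against simple signals like these, which is why detection remains an arms race.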

But as social media sites attempt to combat the dissemination of computational propaganda, the designers of bots are getting more creative. While the Age of Information has brought forth a revolution in the way we do just about everything — from mobile computing and data-sharing to using social media networks — it has also brought an attendant increase in misinformation. On social media and, more broadly, the internet, it is vastly more difficult to discern truth from lies, making fake news a serious problem — one that only stands to get worse.

Featured Image Source: Wired
