Forget cat videos and carefully staged vacation snapshots — social media’s real killer app is spreading propaganda, disinformation and fake news, according to a new study from the University of Oxford’s Computational Propaganda Research Project.
The CPRP researchers examined how news and opinion spread on social media with covert support from governments or political parties in nine countries, and found that social media is now a central pillar of state efforts to shape and control public opinion at home and abroad.
The study analyzed usage patterns for seven big social media platforms in the U.S., China, Russia, Poland, Brazil, Canada, Germany, Ukraine, and Taiwan, focusing on social media activity around major events including elections, political crises, and national security incidents. In every country, the researchers found evidence of “computational propaganda” efforts, defined as “the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks.”
For example, in Russia the researchers discovered that the conversation on Twitter is carefully managed by automated accounts: 45% of all Twitter activity in Russia is produced by bots, many of them apparently under the control of government organs. In Poland, meanwhile, Twitter discussion is dominated by a small group of highly active right-wing and nationalist accounts powered by bots.
Similarly, in Brazil the researchers found that computational propaganda techniques were used to influence a number of major events in recent years, including the 2014 presidential election, the impeachment of president Dilma Rousseff, and the municipal elections in Rio de Janeiro last year.
In addition to managing public opinion in their own countries, authoritarian regimes are also employing computational propaganda against other countries, the Oxford study found: Russia unleashed a social media disinformation campaign against Ukraine during the Russian-supported uprisings in Crimea and eastern Ukraine, while the Chinese government has targeted political actors in Taiwan.
The same techniques work in democracies, notably during political campaigns, and are employed by all sides, although with different levels of success. After analyzing 17 million tweets in November 2016, the researchers determined that the “botnet” supporting Donald Trump was three times the size of the one supporting Hillary Clinton, with bots of all affiliations accounting for 10% of the sample activity.
Although automation plays a major role in the success of social media propaganda campaigns, the researchers conclude that the human element — in the form of curation — is still vital, with the best results produced by “bots and trolls working together.” By the same token, bots are also a force-multiplier for fringe groups or splinter movements: “One person, or a small group of people, can use an army of political bots on Twitter to give the illusion of large scale consensus.”
Of course, the effects of computational propaganda are not limited to the nine countries named in the study. The report notes, for example, that during the lead-up to the Brexit referendum in June 2016, fewer than 1% of Twitter accounts generated around a third of all messages, with hashtags supporting a vote to “leave” clearly predominating.