To the best of my knowledge, the United States government does not employ “opinion shapers”, or paid trolls, at least not in the conventional world.
I know of several projects where up to 40 fake personalities were created per user. I do not know the classification of their use, but the initial RFI was unclassified. I also do not know the number of users involved, so the scale of the project is unknown. This was quite a few years ago. I also know of several similar projects in use by the IC; again, the details are unknown. One indicator that the IC is using such a project is when a capability is marketed with little or no “past performance”, especially if the capability is described as “mature”.
[Note: I received feedback, today, that the IC is involved in, has planned, and discussed their options in many IO/IW type operations. To date, I have purposefully not addressed IC operations, but now I will include IC-type operations in my commentary.]
I do know that paid professional trolls were used by many, perhaps all, candidates in the 2016 presidential election. Perhaps the parties themselves also paid trolls to shape opinions; I have confirmed that at least one party did, and common sense suggests more than one. We also observed trolls in use during the Brexit vote and the French election.
The main issue highlighted in the report is the use of bots to push a position.
One thing missing from the report, and I would not expect it to be covered in depth, is effectiveness. How effective trolls and bots are remains an unanswered question. There have been no studies, and there is seemingly no impetus to establish the effectiveness of trolls and bots. Given the outcry about trolls, bots, and fake news, however, such a study needs to be performed. The biggest problem is determining which trolls and bots might provide a baseline and which can be studied long-term. The Hamilton 68 project appears to be loosely based on assumed effectiveness, and its methodology remains a mystery.
Alex Hern @alexhern
Tuesday 14 November 2017 05.43 EST
The governments of 30 countries around the globe are using armies of so-called opinion shapers to meddle in elections, advance anti-democratic agendas and repress their citizens, a new report shows.
Unlike widely reported Russian attempts to influence foreign elections, most of the offending countries use the internet to manipulate opinion domestically, says US NGO Freedom House.
“Manipulation and disinformation tactics played an important role in elections in at least 17 other countries over the past year, damaging citizens’ ability to choose their leaders based on factual news and authentic debate,” the US government-funded charity said. “Although some governments sought to support their interests and expand their influence abroad, as with Russia’s disinformation campaigns in the United States and Europe, in most cases they used these methods inside their own borders to maintain their hold on power.”
Even in those countries that didn’t have elections in the last year, social media manipulation was still frequent. Of the 65 countries surveyed, 30, including Venezuela, the Philippines and Turkey, were found to be using “armies of opinion shapers” to “spread government views, drive particular agendas, and counter government critics on social media”, according to Freedom House’s new Freedom on the Net report. In each of the 30 countries it found “strong indications that individuals are paid to distort the digital information landscape in the government’s favour, without acknowledging sponsorship”.
That number has risen every year since the first report in 2009. In 2016, just 23 countries were found to be using the same sort of pro-government “astroturfing” (a fake grassroots movement). Recently “the practice has become significantly more widespread and technically sophisticated, with bots, propaganda producers, and fake news outlets exploiting social media and search algorithms to ensure high visibility and seamless integration with trusted content,” the report says.
“The effects of these rapidly spreading techniques on democracy and civic activism are potentially devastating … By bolstering the false perception that most citizens stand with them, authorities are able to justify crackdowns on the political opposition and advance anti-democratic changes to laws and institutions without a proper debate.”
The report describes the varied forms this manipulation takes. In the Philippines, it is manifested as a “keyboard army” paid $10 a day to operate fake social media accounts, which supported Rodrigo Duterte in the run-up to his election last year, and backed his crackdown on the drug trade this year. Turkey’s ruling party enlisted 6,000 people to manipulate discussions, drive agendas and counter opponents. The government of Sudan’s approach is more direct: a unit within the country’s intelligence service created fake accounts to fabricate support for government policies and denounce critical journalists.
“Governments are now using social media to suppress dissent and advance an anti-democratic agenda,” said Sanja Kelly, director of the Freedom on the Net project. “Not only is this manipulation difficult to detect, it is more difficult to combat than other types of censorship, such as website blocking, because it’s dispersed and because of the sheer number of people and bots deployed to do it.”
“The fabrication of grassroots support for government policies on social media creates a closed loop in which the regime essentially endorses itself, leaving independent groups and ordinary citizens on the outside,” Kelly said.