By Alex Stamos, Chief Security Officer
There have been a lot of questions since the 2016 US election about Russian interference in the electoral process. In April we published a white paper that outlined our understanding of organized attempts to misuse our platform. One question that has emerged is whether there’s a connection between the Russian efforts and ads purchased on Facebook. These are serious claims and we’ve been reviewing a range of activity on our platform to help understand what happened.
In reviewing the ad buys, we found approximately $100,000 in ad spending from June 2015 to May 2017 — associated with roughly 3,000 ads — that was connected to about 470 inauthentic accounts and Pages in violation of our policies. Our analysis suggests these accounts and Pages were affiliated with one another and likely operated out of Russia.
We don’t allow inauthentic accounts on Facebook, and as a result, we have since shut down the accounts and Pages we identified that were still active.
- The vast majority of ads run by these accounts didn’t specifically reference the US presidential election, voting or a particular candidate.
- Rather, the ads and accounts appeared to focus on amplifying divisive social and political messages across the ideological spectrum — touching on topics from LGBT matters to race issues to immigration to gun rights.
- About one-quarter of these ads were geographically targeted, and of those, more ran in 2015 than 2016.
- The behavior displayed by these accounts to amplify divisive messages was consistent with the techniques mentioned in the white paper we released in April about information operations.
In this latest review, we also looked for ads that might have originated in Russia — even those with very weak signals of a connection and not associated with any known organized effort. This was a broad search, including, for instance, ads bought from accounts with US IP addresses but with the language set to Russian — even though they didn’t necessarily violate any policy or law. In this part of our review, we found approximately $50,000 in potentially politically related ad spending on roughly 2,200 ads.
We have shared our findings with US authorities investigating these issues, and we will continue to work with them as necessary.
Authentic Activity Matters
We know we have to stay vigilant to keep ahead of people who try to misuse our platform. We believe in protecting the integrity of civic discourse, and require advertisers on our platform to follow both our policies and all applicable laws. We also care deeply about the authenticity of the connections people make on our platform.
Earlier this year, as part of this effort, we announced technology improvements for detecting fake accounts and a series of actions to reduce misinformation and false news. Over the past few months, we have taken action against fake accounts in France, Germany, and other countries, and we recently stated that we will no longer allow Pages that repeatedly share false news to advertise on Facebook.
Along with these actions, we are exploring several new improvements to our systems for keeping inauthentic accounts and activity off our platform. For example, we are looking at how we can apply the techniques we developed for detecting fake accounts to better detect inauthentic Pages and the ads they may run. We are also experimenting with changes to help us more efficiently detect and stop inauthentic accounts at the time they are being created.
Our ongoing work on these automated systems will complement other planned projects to help keep activity on Facebook authentic. We’re constantly updating our efforts in this area, and have introduced a number of improvements, including:
- applying machine learning to help limit spam and reduce the posts people see that link to low-quality web pages;
- adopting new ways to fight against disguising the true destination of an ad or post, or the real content of the destination page, in order to bypass Facebook’s review processes;
- reducing the influence of spammers by deprioritizing links they share far more frequently than typical users do;
- reducing stories from sources that consistently post clickbait headlines that withhold or exaggerate information;
- and blocking Pages from advertising if they repeatedly share stories marked as false.
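The post doesn't describe how any of these systems work internally, and Facebook's actual detection relies on models trained at scale. Purely as an illustrative sketch of the clickbait-headline idea above, a toy scorer might count hand-picked sensational signals in a headline; every phrase, feature, and threshold here is a hypothetical assumption, not Facebook's method:

```python
import re

# Hypothetical signal phrases (assumption for illustration only).
# A real system would learn features from labeled data, not a hand list.
CURIOSITY_GAP = [
    "you won't believe",
    "what happens next",
    "this one trick",
    "will shock you",
]

def clickbait_score(headline: str) -> int:
    """Toy scorer: count sensational signals in a headline."""
    text = headline.lower()
    score = sum(phrase in text for phrase in CURIOSITY_GAP)
    score += text.count("!")                  # sensational punctuation
    score += bool(re.match(r"\d+\s", text))   # listicle-style "7 ways ..."
    return score

def looks_like_clickbait(headline: str, threshold: int = 2) -> bool:
    """Flag a headline when enough toy signals stack up."""
    return clickbait_score(headline) >= threshold

print(looks_like_clickbait("You Won't Believe What Happens Next!"))  # True
print(looks_like_clickbait("Quarterly security report published"))   # False
```

In practice, a hand-written list like this would be trivially evaded; the point of applying machine learning, as the post notes, is to learn such signals from data and adapt as spammers change tactics.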
We will continue to invest in our people and technology to help provide a safe place for civic discourse and meaningful connections on Facebook.