Despite the title, I did not blog this story because of any politics.
First, the author and I are friends; she sent me the link. We had lunch last week. I ate way too much and paid my share.
Second, it is a good discussion of algorithms, tweets, narratives, and other factors in marketing something.
Last, the issue of Russian bots, in this case “likely” Russian bots, was a small factor but is still listed. This further illustrates one of my frustrations: that we cannot reliably track trolls, bots, and other ‘anonymous’ entities online. I guess it’s saved me from numerous attempted-murder charges, when I just wanted to crawl through the wire and choke somebody…
In the days that followed the shooting, social media companies scrambled to deal with complaints about the proliferation of the crisis actors conspiracy across their platforms—even as their own algorithms helped to promote that same content. There were new rounds of statements from Facebook, YouTube, and Google about addressing the problematic content, along with assurances that more AI and human monitors would be enlisted in the cause.
But there are a lot of assumptions being made about how this content was amplified, and how it got past controls within the algorithmic star chambers. Russian bots, the NRA echo-chamber, and so-called alt-right media personalities have all been fingered as the perpetrators.
And, as our research group, New Media Frontier—which collects and analyzes social media intelligence using a range of custom and commercial analytical tools—recently outlined in an analysis of the #releasethememo campaign, there are many contributing factors to the amplification of American far-right content, including foreign and domestic bots and intentional amplification networks. Whether through fully automated bot accounts or semi-automated cyborg accounts, automation is a vital part of accelerating the distribution of content on social media.
But in looking at the case of the Parkland, Florida, shooting and the crisis actors narrative it spawned, there was another important factor that allowed it to leap into mainstream consciousness: People outraged by the conspiracy helped to promote it—in some cases far more than the supporters of the story. And algorithms—apparently absent the necessary “sentiment sensitivity” that is needed to tell the context of a piece of content and assess whether it is being shared positively or negatively—see all that noise the same.
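The missing "sentiment sensitivity" can be illustrated with a toy scoring function. This is a minimal sketch with invented account names and counts; real trending algorithms are proprietary and far more complex than a simple tally.

```python
# Hypothetical illustration: an engagement-only trending score counts
# outrage-shares and supportive shares identically, while a
# sentiment-weighted variant would separate them. All data is invented.

def raw_trend_score(shares):
    """Engagement-only score: every share counts, regardless of intent."""
    return len(shares)

def sentiment_weighted_score(shares):
    """Only shares with positive sentiment boost the story."""
    return sum(1 for s in shares if s["sentiment"] > 0)

# Toy dataset: 40 supportive shares, 60 outraged debunking shares.
shares = (
    [{"user": f"supporter_{i}", "sentiment": +1} for i in range(40)] +
    [{"user": f"debunker_{i}", "sentiment": -1} for i in range(60)]
)

print(raw_trend_score(shares))           # 100 (debunkers dominate the signal)
print(sentiment_weighted_score(shares))  # 40
```

Under the raw score, the story trends mostly on the strength of its opponents, which is exactly the outrage-sharing dynamic described above.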
This unintended amplification created by outrage-sharing may have helped put the conspiracy in front of more unsuspecting people. This analysis looks at how one story of the crisis actor conspiracy—the claim that David Hogg, a senior at Marjory Stoneman Douglas High School, was a fraud because he had been coached by his father—gained amplification from both its supporters and its opponents.
Some of the better-established networks almost seem to predict what will become a trending story because of the position they occupy in the information architecture of social networks—selecting specific content and then ensuring its amplification.
The crisis actors narrative was being amplified on other platforms, as well. The promotion of stories being aggressively pushed by far-right conspiracy sites raised alarms. YouTube had to intervene to remove a video promoting the crisis actor conspiracy that topped its trending algorithm. Meanwhile Google and Twitter searches were auto-filling “crisis actors” as a search term. Facebook and Reddit were also being used to promote versions of the story.
However, this trending content was not pushed solely from the right. At 6:21 pm, Frank Luntz (@frankluntz, a prominent pollster and PR executive with almost 250,000 followers) tweeted in protest of the Gateway Pundit story, becoming one of four non-right-wing amplifiers of the story with verified accounts. (In most cases, getting content seen by or promoted by verified accounts greatly accelerates its amplification.) The other three were the New York Times’ Nick Confessore, MSNBC producer Kyle Griffin, and former first daughter Chelsea Clinton. Each of them quote-tweeted the Gateway Pundit story to denounce it, but in doing so gave it more amplification.
By the next morning, the Gateway Pundit story had been promoted roughly 30,000 times on Twitter. These four influencers were responsible for more than 60 percent of the total mentions of the story.
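The attribution behind a figure like that can be sketched as a simple share-of-mentions calculation. The cascade data below is entirely synthetic (the real attribution came from New Media Frontier's tooling, which is not described in detail here); the point is only the shape of the computation.

```python
from collections import Counter

def amplification_share(mentions, seeds):
    """Fraction of all mentions whose retweet cascade traces back to each
    seed account. `mentions` is a list of (user, seed_account) pairs."""
    counts = Counter(seed for _, seed in mentions)
    total = len(mentions)
    return {seed: counts[seed] / total for seed in seeds}

# Toy cascade: 3 of 5 mentions trace back to a denouncing quote-tweet,
# so the story's opponents account for 60 percent of its spread.
mentions = [
    ("user_a", "denouncer"), ("user_b", "denouncer"), ("user_c", "denouncer"),
    ("user_d", "supporter"), ("user_e", "supporter"),
]
print(amplification_share(mentions, ["denouncer", "supporter"]))
# {'denouncer': 0.6, 'supporter': 0.4}
```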
This is a limited example, but it shows quite clearly that this one conspiracy, on one platform, was amplified not by its supporters but—unintentionally—by its opponents.
These curated newswires are important players in synthetic information networks—parts of social media that are populated by content even when human users are not engaged. The reposted content helps stories trend; it also lays the groundwork for what human users see when they tune in to their Twitter feeds, where Twitter’s algorithms also helpfully provide content you may have missed while you were away.
To get a snapshot of some of the automation in both silos, right and left, we looked at the first 10 accounts to retweet Gateway Pundit founder Jim Hoft’s original tweet of the article (@rlyor, @ahernandez85b, @mandersonhare1, @dalerogersL2528, @topdeserttrader, @jodie4045, @Markknight45, @James87060132, @AIIAmericanGirI, @deplorableGOP13) and at the first 10 accounts to retweet Chelsea Clinton’s denunciation of the story (@DOFReport, @AndrewOnSeeAIR, @TheSharktonaut, @CarolynCpcraig, @guavate86, @NinjaPosition_, @Jjwcampbell, @mikemnyc, @intern07, @maximepo1).
In January 2018, the right-leaning accounts collectively tweeted 42,654 times (that’s an average of about 140 tweets a day per account), a fair indicator that at least some of them are automated amplifiers. The largest of these accounts—@AIIAmericanGirI—has tweeted 542,000 times since 2013 (10,000 tweets a month, or more than 300 per day). Her 115,000 followers include Harlan Hill, Charlie Kirk, Tea Pain, Bill Kristol, Mike Allen, and Sarah Carter—all widely followed individuals who help shape opinion across the political spectrum on social media.
On the left, the profiles of automated accounts look similar. In January 2018, the 10 accounts that retweeted Chelsea Clinton’s denunciation collectively tweeted 36,063 times (roughly 116 tweets per day per account). The first retweet was from a self-labelled news aggregator (a newswire-style account that retweets the former first daughter as part of its automated tasking). Another, @TheSharktonaut, which retweets a high volume of left-leaning content, is followed by Democratic lawmakers and candidates—a left version of Roy.
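The tweets-per-day arithmetic above can be expressed as a simple heuristic. The totals are the ones cited in the analysis; the automation cutoff is an invented assumption for illustration, not an industry standard.

```python
# Rough automation heuristic: sustained posting rates far beyond what a
# human can plausibly type suggest full or partial automation.
DAYS_IN_JAN = 31
LIKELY_AUTOMATED_PER_DAY = 72  # assumed cutoff (~one tweet every 20 minutes, all day)

def tweets_per_day(total_tweets, num_accounts, days=DAYS_IN_JAN):
    """Average daily posting rate per account over the sample window."""
    return total_tweets / (num_accounts * days)

right_rate = tweets_per_day(42_654, 10)  # the 10 right-leaning retweeters
left_rate = tweets_per_day(36_063, 10)   # the 10 left-leaning retweeters

print(round(right_rate))  # 138 tweets/day/account
print(round(left_rate))   # 116 tweets/day/account
print(right_rate > LIKELY_AUTOMATED_PER_DAY)  # True
print(left_rate > LIKELY_AUTOMATED_PER_DAY)   # True
```

Both cohorts clear even a generous threshold, which is consistent with the article's reading that at least some accounts on each side are automated amplifiers.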
@AndrewOnSeeAIR’s Twitter biography claims he is British and anti-Brexit, but the account uses a hashtag meant to create a “follow-back” network amongst anti-Brexiteers—that is, it’s designed to grow follower counts in both directions. And despite that British persona, his tweets—more than 200 a day—consist almost entirely of left-leaning American content.
Right and left, there is a pattern of full and partial automation and amplification. But in this case, the accounts on the left have relatively more modest followings and less well-established positions within the broader information architecture of Twitter. The left-leaning accounts are followed by far more verified accounts (more than 500 among them); for the right-leaning accounts, the figure is closer to 200. In some ways, it is tempting to see the parties’ engagement strategies reflected in these information tactics: one side focused on broader support, the other reliant on a tighter group of elites to achieve the same effect.
The truth in the crisis actors case was less clearcut and less glamorous than either side of the debate would like to admit. Bots, including likely Russian bots, were promoting both narratives and remain essential elements of computational propaganda, the tactics of which are being used more frequently on social media.
Automation, in a variety of forms, is deeply entrenched in social media’s information landscape. Automated accounts traffic in information and shape what we see online, either directly or through their impact on algorithms. Algorithms curate and promote information in ambiguous and sometimes unhelpful ways. Over and over, human intervention is needed to correct the “judgment” of algorithms. And this feels, to some audiences, like a new form of censorship.
Social media companies have started to step in to correct the excesses and unintended consequences of automation, but that happens only on a case-by-case basis, particularly in high-profile cases of disinformation and defamation. Responding in this way will increasingly raise questions about who is deciding which automation is bad automation, and which is allowed to continue unchecked. It also leaves regular, everyday users exposed to the same types of defamation campaigns but with far fewer protections or means of recourse.
Sometimes there is the sense that this is just the new way to consume information and we all need to figure out how to navigate it. That whatever is loudest is somehow what is most important, and after that, figure it out on your own. On Reliable Sources this past weekend, David Hogg himself said he wasn’t upset by all the conspiracies because they were all great “marketing,” boosting his Twitter following to more than 250,000 people. The younger and more social media savvy seem to understand this mercenary approach instinctively. It’s the wild-west landscape that social media platforms have encouraged, knowing that outrage is an effective currency in the so-called attention economy.
This terminology camouflages the war for minds that is underway on social media platforms, the impact that this has on our cognitive capabilities over time, and the extent to which automation is being engaged to gain advantage. The assumption, for example, that other would-be participants in social media information wars who choose to use these same tactics will gain the same capabilities or advantage is not necessarily true. This is a playing field that is hard to level: Amplification networks have data-driven, machine learning components that work better with refinement over time. You can’t just turn one on and expect it to work perfectly.
The vast amounts of content being uploaded every minute cannot possibly be reviewed by human beings. Algorithms, and the poets who sculpt them, are thus given an increasingly outsized role in the shape of our information environment. Human minds are on a battlefield between warring AIs—caught in the crossfire between forces we can’t see, sometimes as collateral damage and sometimes as unwitting participants. In this blackbox algorithmic wonderland, we don’t know if we are picking up a gun or a shield.
Molly K. McKew (@MollyMcKew) is an expert on information warfare and the narrative architect at New Media Frontier. She advised Georgian President Mikheil Saakashvili’s government from 2009 to 2013 and former Moldovan Prime Minister Vlad Filat in 2014-15. New Media Frontier co-founder Max Marshall (@maxgmarshall) helped conduct the research for this analysis.