
UK Fake news inquiry – publications


The UK is fully invested in an investigation into fake news.

This is about investigating fake news; the inquiry is still in its discovery phase. Recommendations will follow the investigative report.

An Interim Report was published on 29 July 2018. A further report is in preparation.

Published today, this should have the widest dissemination. Well worth reading!

Written evidence submitted by Dr Carlo Kopp, Dr Kevin B. Korb, Dr Bruce I. Mills

Written Evidence to the Inquiry on Disinformation and ‘fake news’, House of Commons Digital, Culture, Media and Sport Committee

Executive Summary

  • The potential damage produced by ‘fake news’ can be significant, and it should be treated as a genuine threat to the proper functioning of democratic societies;
  • The problem of social media users irresponsibly propagating falsehoods, without knowing these are falsehoods, and not making any effort to determine veracity, needs to be dealt with;
  • It does not take much ‘fake news’, under some circumstances, to wholly disrupt consensus forming in a group;
  • Regulatory or other mechanisms that might be introduced to disrupt, interdict or remove ‘fake news’ from social media will confront serious challenges in robustly identifying what is or is not ‘fake news’;
  • The ‘fake news’ problem in mass media is similar in many ways to the quality assurance problems that bedeviled manufacturing industries decades ago. There is a potential payoff in establishing process-based quality assurance standards for mass media, and mandating compliance with such standards;
  • A model that should be explored is the exploitation of media organisations with track records of bias-free and high-integrity news reporting to provide fact checking services to social media organisations;
  • The commonly proposed model of “inoculation”, in which users are taught to think critically and test evidence, may be difficult and expensive to implement, and lazy social media users may simply not bother;
  • In the long term, an educational system that rigorously teaches from the outset critical thinking, the ability to identify logical fallacies, and respect for tested fact over subjective opinion, would solve much of the ‘fake news’ problem, as this would destroy much of the ‘market’ for ‘fake news’; but this does not address the near-term problem we observe, and will require increased investment in education;
  • A demerit point system uniformly applied across social media platforms, where users are penalized for habitually propagating ‘fake news’, is an alternative to “inoculation”, but confronts the same problems as fact checking – who determines what is or is not ‘fake news’, and is this free of errors and bias? (an illustrative sketch of such a scheme follows this summary);
  • A “silver bullet” solution to this problem may not exist, and strategies similar to those required to defeat biological contagions may be needed. These involve interdicting production and distribution, and “inoculation” of social media users;
  • There is potential for abuse, and potential for free speech to be impaired, where social media are subjected to censorship mechanisms, so any regulatory model has to be designed to prevent improper exploitation by any parties.
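
Purely to make the demerit point idea above concrete, the sketch below shows one possible shape for such a ledger. It is not part of the submission's proposals: the class name, point values and threshold are assumptions chosen for illustration, and the hard question the summary raises – who decides that a shared item is ‘fake news’, and how free of error and bias that judgement is – sits entirely outside this code.

```python
from dataclasses import dataclass, field

@dataclass
class DemeritLedger:
    """Illustrative demerit-point ledger; point values and threshold are assumed."""
    points_per_flag: int = 3          # demerits per item independently judged to be 'fake news'
    restriction_threshold: int = 12   # demerit level at which a habitual propagator is restricted
    points: dict = field(default_factory=dict)

    def record_flagged_share(self, user_id: str) -> int:
        """Record that a user shared an item that was later flagged as 'fake news'."""
        self.points[user_id] = self.points.get(user_id, 0) + self.points_per_flag
        return self.points[user_id]

    def is_restricted(self, user_id: str) -> bool:
        """Habitual propagators accumulate points until they cross the threshold."""
        return self.points.get(user_id, 0) >= self.restriction_threshold


if __name__ == "__main__":
    ledger = DemeritLedger()
    for _ in range(4):                       # four flagged shares by the same user
        ledger.record_flagged_share("user_a")
    print(ledger.is_restricted("user_a"))    # True: 4 x 3 = 12 >= 12
```

For such a scheme to be "uniformly applied across social media platforms", both the ledger and the flagging judgements feeding it would have to be shared between platforms, which is precisely where the error and bias concerns raised above arise.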

 

  1. The Committee’s inquiry dealing with Disinformation and ‘fake news’ is immensely important, and not only for the UK, as other nations will almost certainly consider the UK’s response to the findings of this inquiry in framing their measures for dealing with this pervasive problem.

 

  2. We are university computer science researchers with an interest in the problem of ‘fake news’ and potential measures to deal with this pandemic in the digital media. We have recently published some research that exposes both the sensitivity of a population to the effects of ‘fake news’, and the sensitivity of ‘fake news’ to costs incurred. Our hope is that providing some further insight into the problem will be helpful to the Committee in forming its conclusions in this inquiry.

 

  3. We concur strongly with the Committee on many key points presented in the Interim Report, HC 363, of the 24th July 2018. There can be little doubt that the dissemination of disinformation and misinformation through social media and mass media is a major risk to the functioning of parliamentary democracies. Existing legal frameworks, and arguably, widely accepted community standards of behaviour in social media, are clearly not capable of coping with this problem. The impacts go well beyond what choices voters make during elections and referendums, as a population that is immersed in a deluge of misinformation and intentional disinformation will become confused or develop false beliefs about the world they live in. The term “alternative reality” could be applied.

 

  4. While the Orwellian model is frequently discussed in this context, the prospect of communities in a perpetual and universal state of confusion and chaos is a no less undesirable outcome. Universal false beliefs and frequently widespread confusion are a feature of totalitarian states past and present, and such outcomes are clearly incompatible with a functioning democratic system.

 

  5. Most academic research in the area of ‘fake news’ has been narrowly focused on specific problems, and much excellent work has been done on studying the mechanisms of how ‘fake news’ is propagated and diffused in social media and mass media. Equally important empirical work has been done mining social media data and metadata, to establish the scale of the problem, and identify perpetrators.
  6. In our research we explored more fundamental mechanisms, aiming to answer two key questions:
    1. How sensitive is consensus forming (i.e. cooperation) in a population to the presence of ‘fake news’; and
    2. How do costs incurred by producers and propagators of ‘fake news’ impact the effects of ‘fake news’ in the population?
  7. The research and computer simulation methods we used, involving artificial agents who survive or die out depending on their behaviour and environment, are detailed in publications listed in the References at the end of this submission (a minimal illustrative sketch of this kind of set-up follows).
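
The models in the cited publications are considerably richer than anything that can be shown here; purely to make the set-up described above concrete, the following is a minimal, hypothetical sketch of an evolutionary simulation in which agents are either honest or deceiving, interact in random pairs, and reproduce in proportion to their payoff. The population size, payoff values and deception costs are assumptions chosen for illustration, not parameters from the referenced papers.

```python
import random

# Illustrative parameters only; not taken from the cited papers.
N_AGENTS = 400          # population size (even, so agents pair up exactly)
GENERATIONS = 300
COOP_PAYOFF = 3.0       # what each party gains from an honest exchange
DECEIVER_GAIN = 5.0     # what a deceiver extracts from an honest victim
MUTUAL_DECEPTION = 1.0  # payoff when two deceivers meet

def run(deceiver_fraction=0.01, deception_cost=0.0, seed=1):
    """Return the final share of deceivers after selection on payoffs."""
    rng = random.Random(seed)
    n_deceivers = round(deceiver_fraction * N_AGENTS)
    pop = ["deceiver"] * n_deceivers + ["honest"] * (N_AGENTS - n_deceivers)
    for _ in range(GENERATIONS):
        rng.shuffle(pop)
        fitness = []
        # pair neighbours and score both sides of each interaction
        for a, b in zip(pop[0::2], pop[1::2]):
            for me, other in ((a, b), (b, a)):
                if me == "honest":
                    payoff = COOP_PAYOFF if other == "honest" else 0.0
                else:
                    gain = DECEIVER_GAIN if other == "honest" else MUTUAL_DECEPTION
                    payoff = gain - deception_cost
                fitness.append(max(payoff, 0.01))  # keep selection weights positive
        # agents "survive or die out": the next generation is sampled
        # in proportion to the payoff each agent just earned
        pop = rng.choices(pop, weights=fitness, k=N_AGENTS)
    return pop.count("deceiver") / N_AGENTS

if __name__ == "__main__":
    # With no cost, the tiny deceiving minority typically takes over;
    # with a high cost, it typically dies out and cooperation survives.
    for cost in (0.0, 6.0):
        print(f"deception cost {cost}: final deceiver share {run(deception_cost=cost):.2f}")
```

In this toy, deception is profitable against honest agents when its cost is low, so a handful of deceivers can spread through the whole population; once the cost exceeds the gain, deceivers earn less than honest agents and disappear, which is the qualitative pattern described in the paragraphs that follow.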

 

  8. In the real world, costs incurred by producers and propagators of ‘fake news’ can be external (typically monetary) costs, such as fines, penalties, exclusions, or expenditures in creating and distributing fakes, or internal (i.e. social) costs to individuals, such as feelings of loss or embarrassment due to being ridiculed or shamed by peers.

 

  9. We found that even a remarkably small percentage of deceivers in a population, in our simulations less than 1%, could catastrophically disrupt cooperative behaviours in the simulated population.
  10. In the extreme case of cost-free deceptions – where ‘fake news’ producers in the simulated population are unhindered – cooperative behaviours vanished altogether. Put differently, almost everybody ended up deceiving everybody else.

 

  11. Only where the cost of deceptions was greater than zero did cooperative behaviour survive in the population, and where costs were very high, it actually thrived at the expense of deceptive behaviours.

 

  12. We also found that for all simulations, the ability of deceiving players to survive depended very strongly on the cost of deceptions. If the cost was high enough, deceivers could not survive in the population.

 

  13. Applying this result to the spreading of ‘fake news’ in social media, we can expect that sufficiently high costs will lead to its extinction.
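
The intuition behind this extinction effect can be written down as a toy, replicator-style condition. The symbols below are assumptions for illustration only, not the notation of the cited papers: let $b$ be the payoff a deceiver extracts from an honest partner, $r$ the payoff of an honest exchange, and $c$ the cost attached to each deception. When deceivers are rare, they meet honest agents almost exclusively, so:

```latex
% Toy replicator-style condition; b, r and c are illustrative symbols, not from the cited papers.
\[
  w_{\text{deceiver}} \approx b - c ,
  \qquad
  w_{\text{honest}} \approx r
  \qquad \text{(deceivers rare)}
\]
\[
  \text{deceivers decline and eventually vanish whenever}\quad
  b - c < r
  \;\Longleftrightarrow\;
  c > b - r .
\]
```

In other words, once the cost imposed on each act of deception exceeds the advantage deception offers over honest exchange, spreading ‘fake news’ is a losing strategy in this toy setting, which matches the qualitative behaviour described above.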

 

  14. These results are reflected in what has been empirically observed in social media, despite the fact that the simulation involved a population of very simple software robots playing a trivial social game, and employing trivial deceptions. What we found most remarkable is how closely such a simple simulation captured behaviors seen in the real world of social media.

 

  15. What are the implications of these results for dealing with the real world of ‘fake news’ distribution in social and mass media, as considered in this inquiry?

 

  16. The first and arguably most important implication is that very little ‘fake news’ is required to create a lot of mayhem in a population, and prevent consensus forming that is critical to public debates.

 

  17. Regardless of whether victims of ‘fake news’ deceptions are confused, or end up believing in falsehoods, their ability to reach consensus will have been disrupted. Moreover, voters who are confused and/or believe in falsehoods are likely to be in a state of anxiety, and unlikely to make well considered choices at the ballot box.

 

  18. Our modeling was very specifically focused on small populations of influencers who actively debate issues. Where influencers cannot agree, followers (i.e. much of the general public in a political or other wider community debate) in turn cannot align themselves to a consensus position.

 

  19. This is one of the major reasons why the pandemic of ‘fake news’ has been and continues to be so destructive to democratic societies. It is also the reason why effective measures to deal with this problem must be found and deployed at the earliest opportunity.

 

  20. The second result of broad interest from the simulations is that attaching a high cost to the production, but especially the distribution, of ‘fake news’ may prove to be the most effective tool available to defeat the ‘fake news’ pandemic.

 

  21. A key consideration is how best to disrupt or interdict the mass distribution of ‘fake news’.

 

  22. Information Warfare research published over a decade ago found that proxy delivery was a major “force multiplier” in the distribution of toxic propaganda – e.g., mass media distributing violent media content produced by terrorists were acting as proxies for the terrorists producing the propaganda, whether they knew it or not. This problem has been previously identified in this inquiry, and elsewhere.

 

  23. Unfortunately, social media users who share ‘fake news’ are likewise acting as proxies for the producers of ‘fake news’, multiplying the footprint of the ‘fake news’, especially if these social media users have large networks of followers.

 

  24. Such social media users are typically cast as victims of ‘fake news’, which they often are, but every time such social media users share ‘fake news’ they also become active participants in the ‘fake news’ producer’s deception, or, put bluntly, “proxy deceivers”.

 

  25. To what extent can social media users be expected to take responsibility for what they share, given that far too often they will have little or no understanding of what they are sharing, or its veracity?

 

  26. This is a perverse problem. Psychologists have identified the Dunning-Kruger Effect, whereby the ignorant become overly confident in their belief that they understand what they do not. Social media users who are the least equipped to identify misinformation or disinformation are thus the most likely to believe it and propagate it to other users.

 

  27. Attaching a cost to the distribution of ‘fake news’ in social media is not straightforward, despite a commonly propounded view otherwise.

 

  28. One approach is the informal “outing” of habitual posters of ‘fake news’, which accords well with the evolutionary psychology of cheater detection.

 

  29. Another approach often proposed is for social media organisations to be more proactive, and set up ‘fake news’ detection units to identify and flag ‘fake news’ posts accordingly. This will be expensive in personnel costs, as at this time Artificial Intelligence is not up to this task, contra claims by Facebook and others.
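
To illustrate why that scepticism is warranted, the sketch below is a deliberately naive, hypothetical baseline of the kind sometimes proposed: a bag-of-words classifier (assuming the scikit-learn library is available, with made-up example texts and labels). It scores surface wording rather than veracity, so a soberly worded falsehood and a sensationally worded truth can both be mis-scored – exactly the gap between pattern matching and fact checking described here.

```python
# Deliberately naive baseline, not a workable 'fake news' detector:
# it learns surface word statistics from a tiny, made-up labelled sample.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [  # hypothetical examples for illustration only
    "miracle cure suppressed by doctors, share before it is deleted",
    "secret memo proves the vote was rigged, media silent",
    "committee publishes interim report on its disinformation inquiry",
    "researchers release peer-reviewed study of news diffusion",
]
train_labels = [1, 1, 0, 0]  # 1 = flagged as 'fake news', 0 = not flagged

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# The model scores wording, not truth: sensational-but-true reports and
# calmly worded falsehoods are both liable to be mis-scored.
for text in ["shocking secret report the media will not show you",
             "official statistics released in the annual report"]:
    print(text, "->", round(model.predict_proba([text])[0][1], 2))
```

Platform-scale systems are far more sophisticated than this, but they face the same underlying problem identified in the submission: statistical pattern matching by itself does not establish whether a claim is true.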

 

  30. But both of these approaches must confront the frequently challenging problem of determining exactly what is or is not ‘fake news’.

 

  31. As noted previously in this inquiry, unpalatable facts or truths are too often falsely labeled as ‘fake news’.

 

  32. Popularity is not a reliable guide to the truth, despite the popularity of the “argumentum ad populum” logical fallacy!

 

  33. The reliability and objectivity of fact checkers can vary widely, and can sometimes be very poor.

 

  34. Ground truths are often obscured by political, ideological or cultural bias, and even more frequently, by fact checker limitations in understanding. That we can now find on the Internet fact checking websites that rate or rank the objectivity or biases in other fact checking websites speaks for itself!

 

  35. Recent research studying influence operations conducted in social media indicates that the veracity of reports alone may not be a sufficient criterion when assessing damage potential. Jensen recently observed in a Canberra Joint Standing Committee on Electoral Matters hearing that reports which are “true in and of themselves or not demonstrably false” can be employed to stimulate anxiety and reinforce audience fears. This in many ways reflects the well known computer industry sales (mal-)practice of FUD or “Fear, Uncertainty and Doubt”.

 

  36. There is some evidence that complaint handling channels in some social media platforms have been prone to bias, reflecting much the same problem as seen with fact checking.

 

December 2018

References:

 

  1. Kopp, C., Korb, K.B., Mills, B.I., “Information-theoretic models of deception: modelling cooperation and diffusion in populations exposed to ‘fake news’”, PLOS ONE XX(YY): XXXXX, 2018, URI: http://journals.plos.org/plosone/article?id=10.1371/pone.0207383
  2. Kopp, C., Korb, K.B., “We made deceptive robots to see why fake news spreads, and found its weakness”, The Conversation, November 2018, URI: https://theconversation.com/we-made-deceptive-robots-to-see-why-fake-news-spreads-and-found-a-weakness-104776
  3. Kopp, C., Korb, K.B., Mills, B.I., “Understanding the Inner Workings of ‘Fake News’”, Science Trends, November 2018, URI: https://sciencetrends.com/understanding-the-inner-workings-of-fake-news/
  4. Kopp, C., “Understanding the Deception Pandemic”, Presentation Slides, Australian Skeptics Seminar, 16th July 2018, Melbourne, Australia, URI: http://users.monash.edu/~ckopp/Presentations/Understanding-The-Deception-Pandemic-2018-B.pdf
  5. Kopp, C., “Considerations on deception techniques used in political and product marketing”, Proceedings of the 7th Australian Information Warfare and Security Conference, 4 December 2006 to 5 December 2006, School of Computer Information Science, Edith Cowan University, Perth, WA, Australia, pp. 62-71.
  6. Kopp, C., “Classical Deception Techniques and Perception Management vs. the Four Strategies of Information Warfare”, in G. Pye and M. Warren (eds), Conference Proceedings of the 6th Australian Information Warfare & Security Conference (IWAR 2005), Geelong, VIC, Australia, School of Information Systems, Deakin University, Geelong, VIC, Australia, ISBN: 1 74156 028 4, pp. 81-89.
  7. Kopp, C., “The Analysis of Compound Information Warfare Strategies”, in G. Pye and M. Warren (eds), Conference Proceedings of the 6th Australian Information Warfare & Security Conference (IWAR 2005), Geelong, VIC, Australia, School of Information Systems, Deakin University, Geelong, VIC, Australia, ISBN: 1 74156 028 4, pp. 90-97.
  8. Kopp, C., “Shannon, Hypergames and Information Warfare”, in W. Hutchinson (ed), Proceedings of the 4th Australian Information Warfare & Security Conference 2003 (IWAR 2003), Perth, WA, Australia, 28 – 29 November 2003, Edith Cowan University, Churchlands, WA, Australia, ISBN: 0-7298-0524-7.
  9. Kopp, C. and Mills, B.I., “Information Warfare and Evolution”, in W. Hutchinson (ed), Proceedings of the 3rd Australian Information Warfare & Security Conference 2002 (IWAR 2002), Perth, WA, Australia, 28 – 29 November 2002, Edith Cowan University, Churchlands, WA, Australia, ISBN: 0-7298-0524-7, pp. 352-360.

 

 
