This is the third in a technical series on deception, based on the paper by Drs. Carlo Kopp, Kevin B. Korb, and Bruce I. Mills, “Information-theoretic models of deception: Modelling cooperation and diffusion in populations exposed to ‘fake news’”.
[From Dr. Carlo Kopp]
I am very pleased to announce the publication of a new research paper, co-authored with Kevin Korb at Monash University and Bruce Mills at UWA.
We solved the long-standing problem of how to incorporate deceptions into game theory models, providing for the first time a mathematically robust way to model the interaction of social systems with deceptions.
We then applied this model to the fake news problem. Two surprising results emerged: only a tiny fraction of deceivers in a population is needed to seriously disrupt cooperation, and the cost of deceptions critically determines whether they can become established in a population.
The latter has important practical implications, as increasing the costs to deceivers in social media and mass media will reduce the frequency of deceptions such as fake news.
Feel free to propagate these links widely.
Carlo Kopp and Kevin Korb, “We made deceptive robots to see why fake news spreads, and found its weakness”, The Conversation, Nov 2018, URI:
Carlo Kopp, Kevin Korb, and Bruce Mills, “Understanding the Inner Workings of ‘Fake News’”, Science Trends, Nov 2018, URI:
Carlo Kopp, “Understanding the Deception Pandemic”, Presentation Slides, Monash University, Nov 2018, URI:
Paper and Abstract:
Carlo Kopp, Kevin Korb, and Bruce Mills, “Information-theoretic models of deception: Modelling cooperation and diffusion in populations exposed to ‘fake news’”, PLOS ONE, URI: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0207383
The modelling of deceptions in game theory and decision theory has not been well studied, despite the increasing importance of this problem in social media, public discourse, and organisational management. This paper presents an improved formulation of the extant information-theoretic models of deception, presents a framework for incorporating these models into game- and decision-theoretic models, and applies both in an agent-based evolutionary simulation that models two very common deception types employed in “fake news” attacks.

The simulation results for both deception types modelled show, as observed empirically in many social systems subjected to “fake news” attacks, that even a very small population of deceivers that transiently invades a much larger population of non-deceiving agents can strongly alter the equilibrium behaviour of the population in favour of agents playing an always-defect strategy. The results also show that the ability of a population of deceivers to establish itself or remain present in a population is highly sensitive to the cost of the deception, as this cost reduces the fitness of deceiving agents when competing against non-deceiving agents. Diffusion behaviours observed for agents exploiting the deception producing false beliefs are very close to empirically observed behaviours in social media, when fitted to epidemiological models.

We thus demonstrate, using the improved formulation of the information-theoretic models of deception, that agent-based evolutionary simulations employing the Iterated Prisoner’s Dilemma can accurately capture the behaviours of a population subject to deception attacks introducing uncertainty and false perceptions, and show that information-theoretic models of deception have practical applications beyond trivial taxonomical analysis.
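To make the mechanism concrete, here is a minimal, illustrative sketch (not the authors' actual simulation, whose deception models and parameters are far richer): a round-robin Iterated Prisoner's Dilemma in which a small minority of "deceivers" always defect but misreport each defection as cooperation, paying a per-deception cost. The payoff matrix, population size, and cost values below are conventional or assumed for illustration.

```python
# Illustrative sketch only, not the paper's implementation: an evolutionary
# IPD round-robin where deceivers always defect but report "C" (producing a
# false belief in the opponent), and each deception carries a cost.
# All numeric parameters are illustrative assumptions.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(a_deceiver, b_deceiver, cost, rounds=20):
    """One IPD match; honest agents play tit-for-tat on *reported* moves."""
    a_belief = b_belief = "C"   # each side's belief about the other's last move
    sa = sb = 0.0
    for _ in range(rounds):
        ma = "D" if a_deceiver else a_belief   # deceivers always defect
        mb = "D" if b_deceiver else b_belief   # honest agents reciprocate
        pa, pb = PAYOFF[(ma, mb)]
        sa, sb = sa + pa, sb + pb
        if a_deceiver and ma == "D":
            sa -= cost                          # the false signal is not free
        if b_deceiver and mb == "D":
            sb -= cost
        a_belief = "C" if b_deceiver else mb   # deceivers report "C" regardless
        b_belief = "C" if a_deceiver else ma
    return sa, sb

def mean_fitness(deceiver_frac, cost, pop=50):
    """Round-robin tournament; mean scores of (deceivers, honest agents)."""
    is_dec = [i < int(deceiver_frac * pop) for i in range(pop)]
    totals = [0.0] * pop
    for i in range(pop):
        for j in range(i + 1, pop):
            si, sj = play(is_dec[i], is_dec[j], cost)
            totals[i] += si
            totals[j] += sj
    dec = [t for t, d in zip(totals, is_dec) if d]
    hon = [t for t, d in zip(totals, is_dec) if not d]
    return sum(dec) / len(dec), sum(hon) / len(hon)

# A cheap deception lets a 10% deceiver minority out-score the honest
# majority; raising the per-deception cost reverses the ordering.
cheap_dec, cheap_hon = mean_fitness(0.10, cost=0.5)
costly_dec, costly_hon = mean_fitness(0.10, cost=4.0)
print(cheap_dec > cheap_hon, costly_dec < costly_hon)  # prints: True True
```

Even in this toy version, the qualitative pattern matches the paper's headline finding: the deceivers' advantage is a threshold effect in the deception cost, so pushing that cost up is what stops deceivers from establishing themselves.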