The following article takes you, the reader, to the edge of the precipice and lets you peer over, if only slightly. Allow me to show you more: a glimpse into hell itself, in the information realm.
Sean Gourley, founder and CEO of Primer (a company that uses software to mine data sources and automatically generate reports for US intelligence agencies via In-Q-Tel, the intelligence community’s investment fund), says,
“The automation of the generation of fake news is going to make it very effective.”
In an article in BG Falcon media, Joe Keohane tells us how Heliograf and other artificial intelligence applications are writing data-heavy stories about sports and election results,
After some tweaking, editors wrote key phrases for potential election results, like “Republicans retained control of the House” and “Democrats regained control of the House,” plugged Heliograf into election data from VoteSmart.org and set the AI loose on the election.
Heliograf can choose phrases from the template, fill in data and publish multiple articles across different platforms. If there are anomalies in the data, such as wider margins than expected, Heliograf can alert reporters.
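The template-and-data pipeline described above can be sketched as a toy. This is illustrative only, not Heliograf’s actual design: the function names, seat counts, and anomaly threshold are invented, though the template strings echo the key phrases the editors are described as writing.

```python
# Toy sketch of template-driven story generation with anomaly alerting.
# Everything here is hypothetical; only the two key phrases come from
# the description of Heliograf's election templates.
TEMPLATES = {
    "R": "Republicans retained control of the House, winning {seats} seats.",
    "D": "Democrats regained control of the House, winning {seats} seats.",
}

def generate_story(winner, seats, expected_margin, actual_margin, threshold=10):
    """Fill a pre-written template with live data; flag anomalies for reporters."""
    story = TEMPLATES[winner].format(seats=seats)
    # A margin far wider than expected is the kind of anomaly that would
    # trigger an alert to human reporters rather than silent publication.
    anomaly = abs(actual_margin - expected_margin) > threshold
    return story, anomaly

story, alert = generate_story("D", seats=235, expected_margin=15, actual_margin=40)
print(story)
print(alert)  # True: the margin is far wider than expected
```

The point of the split return value is that the system publishes routine results on its own but escalates surprises, which is exactly the human-in-the-loop division of labor the article describes.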
In A New AI “Journalist” Is Rewriting the News to Remove Bias, Kristin Houser explains how news stories can be rewritten, in as little as 60 seconds, to remove bias, wittingly or unwittingly included. Now imagine the opposite: take a normal story, written to professional standards, and tweak it to your desired perspective. It should be obvious how this could be applied to information warfare programs in Russia and China.
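To see how symmetric “removing” and “injecting” bias are, consider a deliberately crude sketch: a word-substitution rewriter. The real systems use machine learning, not a lookup table, and this word list is invented purely for illustration; the point is that inverting the same table turns a de-biaser into a bias injector.

```python
import re

# Invented table of loaded terms -> neutral ones (illustrative only).
NEUTRALIZE = {
    "regime": "government",
    "slammed": "criticized",
    "radical": "controversial",
}

def rewrite(text, mapping=NEUTRALIZE):
    """Swap each mapped word for its counterpart, preserving capitalization."""
    def swap(match):
        word = match.group(0)
        repl = mapping[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    pattern = re.compile(r"\b(" + "|".join(mapping) + r")\b", re.IGNORECASE)
    return pattern.sub(swap, text)

# The same machinery, reversed, injects a slant instead of removing one.
BIAS = {neutral: loaded for loaded, neutral in NEUTRALIZE.items()}

print(rewrite("The regime slammed its critics."))
# The government criticized its critics.
print(rewrite("The government criticized its critics.", BIAS))
# The regime slammed its critics.
```

One dictionary inversion is all that separates the two uses, which is why the dual-use concern raised above is not hypothetical.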
Alexander Titkov, in Artificial Intelligence made in Russia, describes the concerted effort that has already been made in Russia, to take the lead in Artificial Intelligence.
According to SAP analysts, almost 1,500 AI research projects in Russia have received financial support from the state and private sector over the past 10 years, with more than half of the projects paid for by the state or implemented as part of federal targeted programs. For example, the Global Competitiveness Improvement Project for Leading Russian Universities, 5-100, has brought together the strongest Russian universities so that they could work in advanced research fields with the support of the Ministry of Education and Science. The creation of artificial intelligence is one such field.
James Vincent, in Putin says the nation that leads in AI ‘will be the ruler of the world’, shows Putin’s intense interest in taking the lead in AI. Russia has invested heavily in its world-class information warfare program, and we can be all but certain AI will be introduced into it, if it has not been already.
In the same article, Vincent points out that China is doing the same,
In July, China’s State Council released a detailed strategy designed to make the country “the front-runner and global innovation center in AI” by 2030. It includes pledges to invest in R&D that will “through AI elevate national defense strength and assure and protect national security.”
The potential for AI to be used in propaganda, disinformation, misinformation, and fake news by both Russia and China is too great for them to ignore. AI-generated stories are already virtually indistinguishable from human-written news stories and can be churned out in seconds. Russia will use it to steamroller and hammer everything in its way, while China will engage us much more gently, with finesse and poise, but with the same purpose – to win our hearts and minds and gain our vote. We already know that Russia would use it to promote Russia at the expense of the West.
I expect AI-generated fake news stories containing propaganda, disinformation, and misinformation to emerge in the 2018 US midterm elections, and their use will only grow.
The United States and Europe are ill-prepared for the coming wave of “deep fakes” that artificial intelligence could unleash.
Russian disinformation has become a problem for European governments. In the last two years, Kremlin-backed campaigns have spread false stories alleging that French President Emmanuel Macron was backed by the “gay lobby,” fabricated a story of a Russian-German girl raped by Arab migrants, and spread a litany of conspiracy theories about the Catalan independence referendum, among other efforts.
Europe is finally taking action. In January, Germany’s Network Enforcement Act came into effect. Designed to limit hate speech and fake news online, the law prompted both France and Spain to consider counterdisinformation legislation of their own. More important, in April the European Union unveiled a new strategy for tackling online disinformation. The EU plan focuses on several sensible responses: promoting media literacy, funding a third-party fact-checking service, and pushing Facebook and others to highlight news from credible media outlets, among others. Although the plan itself stops short of regulation, EU officials have not been shy about hinting that regulation may be forthcoming. Indeed, when Facebook CEO Mark Zuckerberg appeared at an EU hearing this week, lawmakers reminded him of their regulatory power after he appeared to dodge their questions on fake news and extremist content.
The recent European actions are important first steps. Ultimately, none of the laws or strategies that have been unveiled so far will be enough. The problem is that technology advances far more quickly than government policies.
The EU’s measures are still designed to target the disinformation of yesterday rather than that of tomorrow.
To get ahead of the problem, policymakers in Europe and the United States should focus on the coming wave of disruptive technologies. Fueled by advances in artificial intelligence and decentralized computing, the next generation of disinformation promises to be even more sophisticated and difficult to detect.
To craft effective strategies for the near term, lawmakers should focus on four emerging threats in particular: the democratization of artificial intelligence, the evolution of social networks, the rise of decentralized applications, and the “back end” of disinformation.
Thanks to bigger data, better algorithms, and custom hardware, in the coming years, individuals around the world will increasingly have access to cutting-edge artificial intelligence. From health care to transportation, the democratization of AI holds enormous promise.
Yet as with any dual-use technology, the proliferation of AI also poses significant risks. Among other concerns, it promises to democratize the creation of fake print, audio, and video stories. Although computers have long allowed for the manipulation of digital content, in the past that manipulation has almost always been detectable: A fake image would fail to account for subtle shifts in lighting, or a doctored speech would fail to adequately capture cadence and tone. However, deep learning and generative adversarial networks have made it possible to doctor images and video so well that it’s difficult to distinguish manipulated files from authentic ones. And thanks to apps like FakeApp and Lyrebird, these so-called “deep fakes” can now be produced by anyone with a computer or smartphone. Earlier this year, a tool that allowed users to easily swap faces in video produced fake celebrity porn, which went viral on Twitter and Pornhub.
Deep fakes and the democratization of disinformation will prove challenging for governments and civil society to counter effectively. Because the algorithms that generate the fakes continuously learn how to more effectively replicate the appearance of reality, deep fakes cannot easily be detected by other algorithms — indeed, in the case of generative adversarial networks, the algorithm works by getting really good at fooling itself. To address the democratization of disinformation, governments, civil society, and the technology sector therefore cannot rely on algorithms alone, but will instead need to invest in new models of social verification, too.
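The “fooling itself” dynamic can be illustrated with a minimal adversarial loop reduced to one dimension. This is a hand-rolled sketch with invented parameters, not a real GAN implementation: a logistic-regression discriminator tries to tell “real” numbers drawn near 4 from fakes drawn near 0, while the generator learns a shift that drags its fakes into the real distribution.

```python
import math
import random

random.seed(1)
sigmoid = lambda t: 1 / (1 + math.exp(-t))

REAL_MEAN = 4.0     # "authentic" data: samples from N(4, 1)
w, c = 0.0, 0.0     # discriminator D(x) = sigmoid(w*x + c)
b = 0.0             # generator G(z) = z + b: learns only a shift (toy choice)
lr = 0.01

for _ in range(5000):
    real = random.gauss(REAL_MEAN, 1.0)
    fake = random.gauss(0.0, 1.0) + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - dr) * real - df * fake)
    c += lr * ((1 - dr) - df)

    # Generator step: move b so the discriminator mistakes fakes for real
    # (ascent on log D(fake), the non-saturating generator objective).
    df = sigmoid(w * fake + c)
    b += lr * (1 - df) * w
```

After training, the shift b has drifted toward the real mean: each time the discriminator learns a boundary, the generator climbs past it, so the only stable outcome is fakes the detector cannot separate from real data. That is the structural reason detection-by-algorithm keeps losing ground to generation.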
Even as artificial intelligence and other emerging technologies mature, legacy platforms will continue to play an outsized role in the production and dissemination of information online. For instance, consider the current proliferation of disinformation on Google, Facebook, and Twitter.
A growing cottage industry of search engine optimization (SEO) manipulation provides services to clients looking to rise in the Google rankings. And while Google is, for the most part, able to stay ahead of attempts to manipulate its algorithms through continuous tweaks, SEO manipulators are becoming increasingly savvy at gaming the system so that their desired content, including disinformation, appears at the top of search results.