Disinformation: An epistemology for the digital age

Epistemologists are used to asking some very general questions: What is knowledge? What is the relation between observation and theory? How do we justify claims to knowledge? Digital technology is changing how information is collected, organized, and disseminated. The proliferation of claims to knowledge on the Internet, moreover, highlights the problems of error, of mis- and disinformation, of “fake news,” of their distribution, and the question of how to disarm them. We need an epistemology for the digital age that looks at knowledge not only in the usual timeless fashion but also takes into account the changing landscape of human knowledge.

Here is part of a report on “Artificial Intelligence and International Security” that addresses some of the issues that an epistemology for the digital age needs to consider.

Artificial Intelligence and
International Security
By Michael C. Horowitz, Gregory C. Allen,
Edoardo Saravalle, Anthony Cho,
Kara Frederick, and Paul Scharre

Center for a New American Security
July 2018

Information Security

The role of AI in the shifting threat landscape has serious implications for information security, reflecting the broader impact of AI, through bots and related systems, in the information age. AI’s use can both exacerbate and mitigate the effects of disinformation within an evolving information ecosystem. Similar to the role of AI in cyber attacks, AI provides mechanisms to narrowly tailor propaganda to a targeted audience, as well as increase its dissemination at scale – heightening its efficacy and reach. Alternatively, natural language understanding and other forms of machine learning can train computer models to detect and filter propaganda content and its amplifiers. Yet too often the ability to create and spread disinformation outpaces AI-driven tools that detect it.

Targeted Propaganda and Deep Fakes

Computational propaganda inordinately affects the current information ecosystem and its distinct vulnerabilities. This ecosystem is characterized by social media’s low barriers to entry, which allow anonymous actors – sometimes automated – to spread false, misleading or hyper-partisan content with little accountability. Bots that amplify this content at scale, tailored messaging or ads that reinforce existing biases, and algorithms that promote incendiary content to encourage clicks point to implicit vulnerabilities in this landscape.9 MIT researchers’ 2018 finding that “falsehood [diffuses] significantly farther, faster, deeper and more broadly” than truth on Twitter, especially regarding political news, further illustrates the risks of a crowded information environment.10 AI is playing an increasingly relevant role in the information ecosystem by enabling propaganda to be more efficient, scalable, and widespread.11 A sample of AI-driven techniques and principles to target and distribute propaganda and disinformation includes:

• Exploitation of behavioral data – The application of AI to target specific audiences builds on behavioral data collection, with machine learning parsing through an increasing amount of data. Metadata generated by users of online platforms – often to paint a picture of consumer behavior for targeted advertising – can be exploited for propaganda purposes as well.12 For instance, Cambridge Analytica’s “psychographic” micro-targeting based on Facebook data used online footprints and personality assessments to tailor messages and content to individual users.13

• Pattern recognition and prediction – AI systems’ ability to recognize patterns and calculate the probability of future events, when applied to human behavior analysis, can reinforce echo chambers and confirmation bias.14 Machine learning algorithms on social media platforms prioritize content that users are already expected to favor and produce messages targeted at those already susceptible to them.15 (A minimal ranking sketch after this list illustrates the mechanism.)

• Amplification and agenda setting – Studies indicate that bots made up over 50 percent of all online traffic in 2016.16 Entities that artificially promote content can manipulate the “agenda setting” principle, which dictates that the more often people see certain content, the more they think it is important.17 Amplification can increase the perception of significance in the public mind. Further, if political bots are “written to learn from and mimic real people,” according to computational propaganda researchers Samuel Woolley and Philip Howard, then they stand to influence the debate. For example, Woolley and Howard point toward the deployment of political bots that interact with users and attack political candidates, weigh in on activists’ behavior, inflate candidates’ follower numbers, or retweet specific candidates’ messaging, as if they were humans.18 Amplifying damaging or distracting stories about a political candidate via “troll farms” can also change what information reaches the public. This can distort political discussions, especially when the anonymity that reduces attribution (and therefore accountability) lets such accounts pass as legitimate human discourse.19

• Natural language processing to target sentiment – Advances in natural language processing can leverage sentiment analysis to target specific ideological audiences.20 Google’s offer of political interest ad targeting for both “left-leaning” and “right-leaning” users for the first time in 2016 is a step in this direction.21 By using a systematic method to identify, examine, and interpret emotional content within text, natural language processing can be wielded as a propaganda tool. Clarifying semantic interpretations of language for machines to act upon can aid in the construction of more emotionally relevant propaganda.22 Further, quantifying user reactions by gathering impressions can refine this propaganda by assessing and recalibrating methodologies for maximum impact. Private sector companies are already attempting to quantify this behavior-tracking data in order to vector future microtargeting efforts for advertisers on their platforms. These efforts are inherently dual-use – instead of using metadata to supply users with targeted ads, malicious actors can supply them with tailored propaganda. (A lexicon-based sketch after this list illustrates the approach.)

• Deep fakes – AI systems are capable of generating realistic-sounding synthetic voice recordings of any individual for whom there is a sufficiently large voice training dataset.23 The same is increasingly true for video.24 As of this writing, “deep fake” forged audio and video look and sound noticeably wrong even to untrained individuals. However, at the pace these technologies are progressing, they are likely less than five years away from being able to fool the untrained ear and eye.
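To make the echo-chamber mechanism in the pattern recognition item concrete, here is a minimal Python sketch. The field names, data, and scoring rule are hypothetical, not any platform’s actual algorithm; it only shows why ranking purely on predicted affinity keeps serving users what they already favor.

```python
# A minimal, hypothetical feed-ranking sketch (invented field names and data):
# posts are scored purely by how well their topics match what the user already
# clicked on. Ranking on predicted affinity alone is what reinforces echo chambers:
# agreeable content rises, challenging content sinks.
from collections import Counter

def engagement_profile(click_history):
    """Normalized topic-affinity profile built from the topics of clicked posts."""
    counts = Counter(topic for post in click_history for topic in post["topics"])
    total = sum(counts.values()) or 1
    return {topic: n / total for topic, n in counts.items()}

def predicted_affinity(post, profile):
    """Predicted engagement: the user's summed affinity for the post's topics."""
    return sum(profile.get(topic, 0.0) for topic in post["topics"])

def rank_feed(candidate_posts, click_history):
    """Order candidate posts by predicted affinity, highest first."""
    profile = engagement_profile(click_history)
    return sorted(candidate_posts,
                  key=lambda post: predicted_affinity(post, profile),
                  reverse=True)

# A user who only clicked partisan posts keeps seeing partisan posts on top.
history = [{"topics": ["immigration", "partisan"]}, {"topics": ["partisan"]}]
feed = [{"id": 1, "topics": ["partisan", "immigration"]},
        {"id": 2, "topics": ["science"]}]
print([post["id"] for post in rank_feed(feed, history)])  # -> [1, 2]
```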
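The sentiment-targeting item can be illustrated the same way. The sketch below uses invented word lists and message variants in place of trained models; it shows only the basic dual-use pattern of scoring a user’s expressed emotion and selecting the message variant predicted to resonate.

```python
# A deliberately tiny, hypothetical sketch of sentiment-based audience targeting.
# The word lists, messages, and scoring are invented; real systems use trained
# NLP models. The dual-use logic is the point: the same scores that pick an ad
# variant can pick a propaganda variant.
ANGER_WORDS = {"furious", "outraged", "betrayed", "rigged"}
FEAR_WORDS = {"afraid", "threat", "danger", "invasion"}

def emotion_scores(text):
    """Count anger and fear vocabulary in a single post."""
    tokens = {token.strip(".,!?").lower() for token in text.split()}
    return {"anger": len(tokens & ANGER_WORDS), "fear": len(tokens & FEAR_WORDS)}

def pick_variant(user_posts, variants):
    """Choose the message variant matching the user's dominant expressed emotion."""
    totals = {"anger": 0, "fear": 0}
    for post in user_posts:
        for emotion, score in emotion_scores(post).items():
            totals[emotion] += score
    return variants[max(totals, key=totals.get)]

variants = {"anger": "anger-framed message", "fear": "fear-framed message"}
posts = ["I'm outraged, the whole process feels rigged",
         "Totally betrayed by the people in charge"]
print(pick_variant(posts, variants))  # -> "anger-framed message"
```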

Countering Disinformation

While no technical solution will fully counter the impact of disinformation on international security, AI can help blunt its efficacy. AI tools that detect, analyze, and disrupt disinformation can weed out nefarious content and block bots. Some AI-focused mitigation tools and examples include:

• Automated Vetting and Fake News Detection – Companies are partnering with and creating discrete organizations with the specific goal of increasing the ability to filter out fake news and reinforce known facts using AI. In 2017, Google announced a new partnership with the International Fact-Checking Network at The Poynter Institute, and the Fake News Challenge resulted in an algorithm with an 80 percent success rate.25 Entities like AdVerif.ai scan and detect “problematic” content by augmenting manual review with natural language processing and deep learning.26 Using natural language understanding to train machines to find nefarious content through semantic text analysis could also improve these initiatives, especially in the private sector. (A minimal classification sketch after this list shows the basic approach.)

• Trollbot Detection and Blocking – Estimates indicate that bots account for between 9 and 15 percent of Twitter accounts and are increasing in sophistication. Machine learning models like the Botometer API, a feature-based classification system for Twitter, offer an AI-driven approach to identifying them for potential removal.27 Reducing the number of bots would de-clutter the information ecosystem, as some political bots are created solely to amplify disinformation, propaganda, and “fake news.”28 Additionally, eliminating specific bots would reduce their malign uses, such as the distributed denial-of-service attacks launched by impersonator bots throughout 2016.29 (A feature-based scoring sketch after this list illustrates the idea.)

• Verification of Authenticity – Digital distributed ledgers and machine-speed sensor fusion that certify real-time information and the authenticity of images and videos can also help weed out doctored data. Additionally, blockchain technologies are being used at non-profits like PUBLIQ, which encrypts each story and distributes it over a peer-to-peer network in an attempt to increase information reliability.30 (A hash-chain sketch after this list illustrates the principle.) Content filtering often requires judgment calls due to varying perceptions of truth and the reliability of information. Thus, it is difficult to create a universal filter based on purely technical means, and it is essential to keep a human in the loop during AI-driven content identification. Technical tools can limit and slow disinformation, not eradicate it.
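As a rough illustration of the automated vetting described in the first item above, the following Python sketch trains a text classifier on a handful of made-up headlines with scikit-learn. It is not the Fake News Challenge system or AdVerif.ai’s pipeline, merely the simplest supervised baseline such tools build on.

```python
# A toy automated-vetting sketch (made-up headlines and labels, not any vendor's
# system): TF-IDF features plus logistic regression from scikit-learn, the most
# basic form of supervised "fake news" classification. Production tools layer on
# stance detection, source reputation, and human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Scientists publish peer-reviewed study on vaccine safety",              # reliable
    "Official statistics show unemployment fell last quarter",               # reliable
    "SHOCKING miracle cure they don't want you to know about",               # problematic
    "Secret memo PROVES the election was stolen, share before it vanishes",  # problematic
]
labels = [0, 0, 1, 1]  # 1 = problematic content

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

candidate = ["Leaked video PROVES shocking cover-up, share now"]
# Probability that the new headline is problematic, per this toy model.
print(model.predict_proba(candidate)[0][1])
```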
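The trollbot detection item is, at bottom, feature-based classification of accounts. The sketch below uses hypothetical account fields and hand-picked weights rather than the Botometer API or its learned features; it shows only how account metadata becomes a numeric score that can flag candidates for review.

```python
# A hypothetical feature-based bot-scoring sketch. The account fields, features,
# and weights are invented for illustration; this is not the Botometer API, which
# relies on many learned features. A real system would learn weights from labeled
# bot and human accounts and route flags to human review.
def account_features(account):
    """Reduce raw account metadata to the kind of features a bot classifier uses."""
    return {
        "tweets_per_day": account["tweet_count"] / max(account["age_days"], 1),
        "followers_per_friend": account["followers"] / max(account["friends"], 1),
        "has_default_profile_image": 1.0 if account["default_profile_image"] else 0.0,
        "duplicate_text_ratio": account["duplicate_posts"] / max(account["tweet_count"], 1),
    }

# Hand-picked illustrative weights (a trained model would estimate these).
WEIGHTS = {"tweets_per_day": 0.02, "followers_per_friend": -0.1,
           "has_default_profile_image": 0.8, "duplicate_text_ratio": 1.5}

def bot_score(account):
    """Weighted sum of features; higher means more bot-like."""
    return sum(WEIGHTS[name] * value
               for name, value in account_features(account).items())

suspect = {"tweet_count": 40000, "age_days": 200, "followers": 15, "friends": 3000,
           "default_profile_image": True, "duplicate_posts": 30000}
print(bot_score(suspect))        # roughly 5.9 with these toy numbers
print(bot_score(suspect) > 1.0)  # above a review threshold -> flag, don't auto-remove
```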
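Finally, ledger-based verification of authenticity can be sketched with plain hashing. This is not PUBLIQ’s actual protocol; it illustrates only the core idea that a content hash registered in an append-only chain lets anyone check whether a circulating copy matches what was originally published.

```python
# A hypothetical hash-chain sketch of ledger-based content verification (not
# PUBLIQ's actual protocol). Each published item is registered by hashing its
# bytes and chaining that hash to the previous entry; a circulating copy can
# later be re-hashed and compared against the registered original.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register(ledger, content: bytes) -> str:
    """Append an entry whose hash commits to both the content and the chain so far."""
    prev = ledger[-1]["entry_hash"] if ledger else "0" * 64
    content_hash = sha256(content)
    entry_hash = sha256((prev + content_hash).encode())
    ledger.append({"prev": prev, "content_hash": content_hash, "entry_hash": entry_hash})
    return entry_hash

def verify_copy(ledger, index: int, candidate: bytes) -> bool:
    """Does a circulating copy match the content registered at this ledger position?"""
    return ledger[index]["content_hash"] == sha256(candidate)

ledger = []
register(ledger, b"original article text")
print(verify_copy(ledger, 0, b"original article text"))  # True
print(verify_copy(ledger, 0, b"doctored article text"))  # False
```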

References

9 Zeynep Tufekci, “YouTube, The Great Radicalizer,” The New York Times, March 10, 2018,
https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html.
10 Soroush Vosoughi, Deb Roy and Sinan Aral, “The spread of true and false news online,”
Science, 359 no. 6380 (March 9, 2018), 1146-1151.
11 Miles Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention
and Mitigation,” (University of Oxford, February 2018), 16, https://maliciousaireport.com/.
12 Tim Hwang, “Digital Disinformation: A Primer,” The Atlantic Council, September 25, 2017, 7.
13 Toomas Hendrik Ilves, “Guest Post: Is Social Media Good or Bad for Democracy?”, Facebook
Newsroom, January 25, 2018, https://newsroom.fb.com/news/2018/01/ilves-democracy/; and
Sue Halpern, “Cambridge Analytica, Facebook and the Revelations of Open Secrets,” The New
Yorker, March 21, 2018, https://www.newyorker.com/news/news-desk/cambridge-analytica-facebook-and-the-revelations-of-open-secrets.
14 Michael W. Bader, “Reign of the Algorithms: How ‘Artificial Intelligence’ is Threatening Our Freedom,” May 12, 2016, https://www.gfe-media.de/blog/wp-content/uploads/2016/05/Herrschaft_der_Algorithmen_V08_22_06_16_EN-mb04.pdf.
15 Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention and
Mitigation,” 46.
16 Igal Zeifman, “Bot Traffic Report 2016,” Imperva Incapsula blog on Incapsula.com, January
24, 2017, https://www.incapsula.com/blog/bot-traffic-report-2016.html.
17 Samuel C. Woolley and Douglas R. Guilbeault, “Computational Propaganda in the United
States of America: Manufacturing Consensus Online,” Working paper (University of Oxford,
2017), 4, http://blogs.oii.ox.ac.uk/politicalbots/wp-content/uploads/sites/89/2017/06/Comprop-
USA.pdf; and Samuel C. Woolley and Philip N. Howard, “Political Communication,
Computational Propaganda, and Autonomous Agents,” International Journal of Communication,
10 (2016), 4885, http://ijoc.org/index.php/ijoc/article/download/6298/1809.
18 Woolley and Howard, “Political Communication, Computational Propaganda, and
Autonomous Agents,” 4885.
19 Alessandro Bessi and Emilio Ferrara, “Social bots distort the 2016 U.S. presidential election
online discussion,” First Monday, 21 no. 11 (November 2016), 1.
20 Travis Morris, “Extracting and Networking Emotions in Extremist Propaganda,” (paper
presented at the annual meeting for the European Intelligence and Security Informatics
Conference, Odense, Denmark, August 22-24, 2012), 53-59.
21 Kent Walker and Richard Salgado, “Security and disinformation in the U.S. 2016 election:
What we found,” Google blog, October 30, 2017, https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/google_US2016election_findings_1_zm64A1G.pdf.
22 Morris, “Extracting and Networking Emotions in Extremist Propaganda,” 53-59.
23 Craig Stewart, “Adobe prototypes ‘Photoshop for audio,’” Creative Bloq, November 03, 2016,
http://www.creativebloq.com/news/adobe-prototypes-photoshop-for-audio.
24 Justus Thies et al., “Face2Face: Real-time Face Capture and Reenactment of RGB Videos,”
Niessner Lab, 2016, http://niessnerlab.org/papers/2016/1facetoface/thies2016face.pdf.
25 Erica Anderson, “Building trust online by partnering with the International Fact Checking
Network,” Google’s The Keyword blog, October 26, 2017,
https://www.blog.google/topics/journalism-news/building-trust-online-partnering-international-fact-checking-network/; and Jackie Snow, “Can AI Win the War Against Fake News?” MIT Technology Review, December 13, 2017, https://www.technologyreview.com/s/609717/can-ai-win-the-war-against-fake-news/.
26 “Technology,” AdVerif.ai, http://adverifai.com/technology/.
27 Onur Varol et al., “Online Human-Bot Interactions: Detection, Estimation and
Characterization,” Preprint, submitted March 27, 2017, 1, https://arxiv.org/abs/1703.03107.
28 Lee Rainie, Janna Anderson, and Jonathan Albright, “The Future of Free Speech, Trolls,
Anonymity, and Fake News Online,” (Pew Research Center, March 2017),
http://www.pewinternet.org/2017/03/29/the-future-of-free-speech-trolls-anonymity-and-fake-news-online/; and Alessandro Bessi and Emilio Ferrara, “Social bots distort the 2016 U.S.
Presidential election online discussion,” First Monday, 21 no. 11 (November 2016),
http://firstmonday.org/ojs/index.php/fm/article/view/7090/5653.
29 Adrienne LaFrance, “The Internet is Mostly Bots,” The Atlantic, January 31, 2017,
https://www.theatlantic.com/technology/archive/2017/01/bots-bots-bots/515043/.
30 “PUBLIQ goes public: The blockchain and AI company that fights fake news announces the
start of its initial token offering,” PUBLIQ, November 14, 2017,
https://publiq.network/en/7379D8K2.
