Wittgenstein on the Puzzle of Privacy

“In what sense are my sensations private? – Well, only I know whether I am really in pain; another person can only surmise it. – In one way this is wrong, and in another nonsense. If we are using the word ‘to know’ as it is normally used (and how else are we to use it?), then other people often know when I am in pain. – Yes, but all the same not with the certainty with which I know it myself! – It can’t be said of me at all (except perhaps as a joke) that I know I am in pain. What is it supposed to mean – except perhaps that I am in pain.”

Readers of Wittgenstein’s Philosophical Investigations will be familiar with this intriguing passage (PI, 246). But there are reasons for being dissatisfied with it. Wittgenstein’s argument appears effective against those who postulate an absolute division between body and mind. But this hardly exhausts what we need to say on the topic of the privacy of sensations, feelings, experiences, memories, thoughts, etc. Yes, it is true that people often know when another person is in pain. But even more often they don’t. We might grant Wittgenstein that sensations, feelings, etc. are not in principle private, but practically they often are. And that they are, we might add, is practically inevitable. Much of what goes on in us never sees daylight. Do I tell people all my dreams, and if I do, will I succeed in communicating to them what made them disturbing or funny? Do others know what I feel and think when I shave in the morning in front of the mirror? Do I speak of the twinge in my ankle as I walk to work? Do I communicate all the associations and memories that some words in a book evoke in me? None of this happens, and others could not, even in principle, come to know all these things about me. Nor do I or could I know all that they think and feel and experience in their lives.

Hence, the disconnect that always exists between us. As a result, there is an element of uncertainty in all our social relations. There is the reality of misunderstanding, also of coldness and cruelty. Yes, Wittgenstein is right when he writes that it is possible to see that another person is in pain. In such cases, there is no question of a laborious inference from the person’s behavior to his feeling. The pain is manifest in the sufferer. But Wittgenstein passes over the fact that just as often we do not see that pain. Or we may see it in someone close to us and fail to notice it in a stranger. This disconnect produces frictions in our relations with each other. It leads to breakdowns of friendships and marriages. At the level of politics, it produces hostility and war. Over the course of human evolution, our inner life has, no doubt, become ever richer and therefore more difficult for others to discern. At the same time, we have learned to expand the range of our words and the vocabulary of our gestures so as to be able to communicate more effectively with each other. The inner life and the external expression bear, moreover, reciprocally on each other. My most private thoughts are, no doubt, shaped by the words I have learned from others, and the words I use are imbued with feeling. The boundary between the inner and the outer is thus blurred. That is something Wittgenstein establishes, no doubt, in his reflections on privacy. But there still exists a boundary, and that it exists gives shape to our social practices and our political institutions. Privacy is a political issue precisely because it is a practical fact, and that is something Wittgenstein has failed to notice.

Disinformation: An Epistemology for the Digital Age

Here is part of a report on “Artificial Intelligence and International Security” that addresses some of the issues that an epistemology for the digital age needs to consider.

Artificial Intelligence and
International Security
By Michael C. Horowitz, Gregory C. Allen,
Edoardo Saravalle, Anthony Cho,
Kara Frederick, and Paul Scharre

Center for a New American Security
July 2018

Information Security

The role of AI in the shifting threat landscape has serious implications for information security, reflecting the broader impact of AI, through bots and related systems in the information age. AI’s use can both exacerbate and mitigate the effects of disinformation within an evolving information ecosystem. Similar to the role of AI in cyber attacks, AI provides mechanisms to narrowly tailor propaganda to a targeted audience, as well as increase its dissemination at scale – heightening its efficacy and reach. Alternatively, natural language understanding and other forms of machine learning can train computer models to detect and filter propaganda content and its amplifiers. Yet too often the ability to create and spread disinformation outpaces AI-driven tools that detect it.

Targeted Propaganda and Deep Fakes

Computational propaganda inordinately affects the current information ecosystem and its distinct vulnerabilities. This ecosystem is characterized by social media’s low barriers to entry, which allow anonymous actors – sometimes automated – to spread false, misleading or hyper-partisan content with little accountability. Bots that amplify this content at scale, tailored messaging or ads that enforce existing biases, and algorithms that promote incendiary content to encourage clicks point to implicit vulnerabilities in this landscape.9 MIT researchers’ 2018 finding that “falsehood [diffuses] significantly farther, faster, deeper and more broadly” than truth on Twitter, especially regarding political news, further illustrates the risks of a crowded information environment.10 AI is playing an increasingly relevant role in the information ecosystem by enabling propaganda to be more efficient, scalable, and widespread.11 A sample of AI-driven techniques and principles to target and distribute propaganda and disinformation includes:

• Exploitation of behavioral data – The application of AI to target specific audiences builds on behavioral data collection, with machine learning parsing through an increasing amount of data. Metadata generated by users of online platforms – often to paint a picture of consumer behavior for targeted advertising – can be exploited for propaganda purposes as well.12 For instance, Cambridge Analytica’s “psychographic” micro-targeting based off of Facebook data used online footprints and personality assessments to tailor messages and content to individual users.13

• Pattern recognition and prediction – AI systems’ ability to recognize patterns and calculate the probability of future events, when applied to human behavior analysis, can reinforce echo chambers and confirmation bias.14 Machine learning algorithms on social media platforms prioritize content that users are already expected to favor and produce messages targeted at those already susceptible to them.15

• Amplification and agenda setting – Studies indicate that bots made up over 50 percent of all online traffic in 2016.16 Entities that artificially promote content can manipulate the “agenda setting” principle, which dictates that the more often people see certain content, the more they think it is important.17 Amplification can increase the perception of significance in the public mind. Further, if political bots are “written to learn from and mimic real people,” according to computational propaganda researchers Samuel Woolley and Philip Howard, then they stand to influence the debate. For example, Woolley and Howard point toward the deployment of political bots that interact with users and attack political candidates, weigh in on activists’ behavior, inflate candidates’ follower numbers, or retweet specific candidates’ messaging, as if they were humans.18 Amplifying damaging or distracting stories about a political candidate via “troll farms” can also change what information reaches the public. This can affect political discussions, especially when coupled with anonymity that reduces attribution (and therefore accountability) to imitate legitimate human discourse.19

• Natural language processing to target sentiment – Advances in natural language processing can leverage sentiment analysis to target specific ideological audiences.20 Google’s offer of political interest ad targeting for both “left-leaning” and “right-leaning” users for the first time in 2016 is a step in this direction.21 By using a systemic method to identify, examine, and interpret emotional content within text, natural language processing can be wielded as a propaganda tool. Clarifying semantic interpretations of language for machines to act upon can aid in the construct of more emotionally relevant propaganda.22 Further, quantifying user reactions by gathering impressions can refine this propaganda by assessing and recalibrating methodologies for maximum impact. Private sector companies are already attempting to quantify this behavior tracking data in order to vector future microtargeting efforts for advertisers on their platforms. These efforts are inherently dual-use – instead of utilizing metadata to supply users with targeted ads, malicious actors can supply them with tailored propaganda instead.

• Deep fakes – AI systems are capable of generating realistic-sounding synthetic voice recordings of any individual for whom there is a sufficiently large voice training dataset.23 The same is increasingly true for video.24 As of this writing, “deep fake” forged audio and video looks and sounds noticeably wrong even to untrained individuals. However, at the pace these technologies are making progress, they are likely less than five years away from being able to fool the untrained ear and eye.
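
To step outside the report for a moment: the mechanism behind the “natural language processing to target sentiment” bullet above can be made concrete with a minimal Python sketch of lexicon-based sentiment scoring and audience bucketing. The word lists, thresholds, and sample posts below are invented for illustration; real systems use trained models rather than hand-built lexicons. The basic step, scoring the emotional tenor of a text and then sorting people by it, is the same, and it serves detection just as readily as targeting.

```python
# Toy illustration of lexicon-based sentiment scoring, the basic step behind
# "targeting sentiment" with NLP. Word lists, thresholds, and sample posts
# are invented for illustration only.

POSITIVE = {"great", "win", "strong", "proud", "hope"}
NEGATIVE = {"corrupt", "fail", "weak", "fear", "rigged"}

def sentiment_score(text: str) -> float:
    """Return a crude score in [-1, 1]: positive minus negative word share."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

def segment_users(posts_by_user):
    """Bucket users by the average sentiment of their posts."""
    segments = {}
    for user, posts in posts_by_user.items():
        avg = sum(sentiment_score(p) for p in posts) / len(posts)
        if avg > 0.05:
            segments[user] = "upbeat"
        elif avg < -0.05:
            segments[user] = "aggrieved"
        else:
            segments[user] = "neutral"
    return segments

if __name__ == "__main__":
    sample = {
        "user_a": ["A great win for our strong movement!", "So proud today."],
        "user_b": ["The system is corrupt and rigged.", "Everything will fail."],
    }
    # The same bucketing can tailor messages to each segment (targeting) or
    # flag unusually charged content for review (detection).
    print(segment_users(sample))
```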

Countering Disinformation

While no technical solution will fully counter the impact of disinformation on international security, AI can help mitigate its efficiency. AI tools to detect, analyze, and disrupt disinformation weed out nefarious content and block bots. Some AI-focused mitigation tools and examples include:

• Automated Vetting and Fake News Detection – Companies are partnering with and creating discrete organizations with the specific goal of increasing the ability to filter out fake news and reinforce known facts using AI. In 2017, Google announced a new partnership with the International Fact-Checking Network at The Poynter Institute, and MIT’s Fake News Challenge resulted in an algorithm with an 80 percent success rate.25 Entities like AdVerif.ai scan and detect “problematic” content by augmenting manual review with natural language processing and deep learning.26 Natural language understanding to train machines to find nefarious content using semantic text analysis could also improve these initiatives, especially in the private sector.

• Trollbot Detection and Blocking – Estimates indicate the bot population ranges between 9 percent and 15 percent on Twitter and is increasing in sophistication. Machine learning models like the Botometer API, a feature-based classification system for Twitter, offer an AI-driven approach to identify them for potential removal.27 Reducing the number of bots would de-clutter the information ecosystem, as some political bots are created solely to amplify disinformation, propaganda, and “fake news.”28 Additionally, eliminating specific bots would reduce their malign uses, such as for distributed denial-of-service attacks, like those propagated by impersonator bots throughout 2016.29

• Verification of Authenticity – Digital distributed ledgers and machine speed sensor fusion to certify real-time information and authenticity of images and videos can also help weed out doctored data. Additionally, blockchain technologies are being utilized at non-profits like PUBLIQ, which encrypts each story and distributes it over a peer-to-peer network to attempt to increase information reliability.30 Content filtering often requires judgement calls due to varying perceptions of truth and the reliability of information. Thus, it is difficult to create a universal filter based on purely technical means, and it is essential to keep a human in the loop during AI-driven content identification. Technical tools can limit and slow disinformation, not eradicate it.
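
The “feature-based classification” that the Trollbot Detection bullet attributes to tools like the Botometer API can likewise be sketched in miniature. The features, weights, and cutoff in the following toy are invented for illustration and are not Botometer’s; a real classifier learns its weights from labeled accounts and draws on hundreds of signals. What the sketch shows is only the shape of the computation: turn account behavior into numbers, combine them into a score, and flag accounts above a threshold.

```python
# Minimal sketch of feature-based bot scoring. The features, weights, and
# bias below are invented for illustration; a real system (e.g. Botometer)
# learns them from labeled data and uses a far richer feature set.
from dataclasses import dataclass
import math

@dataclass
class Account:
    tweets_per_day: float
    followers: int
    following: int
    account_age_days: int
    default_profile_image: bool

def bot_score(acct: Account) -> float:
    """Combine simple behavioral features into a probability-like score."""
    follow_ratio = acct.following / max(acct.followers, 1)
    z = (
        0.08 * acct.tweets_per_day                   # hyperactive posting
        + 0.50 * follow_ratio                        # follows many, followed by few
        - 0.002 * acct.account_age_days              # brand-new accounts are riskier
        + 1.50 * float(acct.default_profile_image)   # no effort spent on the profile
        - 1.0                                        # bias term
    )
    return 1 / (1 + math.exp(-z))                    # squash to (0, 1)

if __name__ == "__main__":
    suspect = Account(tweets_per_day=180, followers=12, following=900,
                      account_age_days=20, default_profile_image=True)
    human = Account(tweets_per_day=4, followers=350, following=280,
                    account_age_days=2400, default_profile_image=False)
    for name, acct in (("suspect", suspect), ("human", human)):
        print(name, round(bot_score(acct), 3))
```

High-scoring accounts would then be queued for review or removal, keeping the human in the loop that the report itself insists on.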
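
Finally, the ledger idea in the Verification of Authenticity bullet rests on a simple primitive: an append-only chain of cryptographic hashes in which each entry commits to the one before it. The sketch below is a toy, in-memory version written for illustration, not PUBLIQ’s or any other system’s actual protocol. It shows only why content registered on such a chain can later be checked and why tampering with the record becomes detectable.

```python
# Toy append-only hash chain for registering content fingerprints. An
# illustration of the ledger idea only, not any real system's design.
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ContentLedger:
    def __init__(self):
        self.entries = []  # each entry links to the previous one via its hash

    def register(self, content: bytes, source: str) -> str:
        """Append a record committing to the content, its source, and the chain so far."""
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {
            "content_hash": sha256(content),
            "source": source,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["entry_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
        self.entries.append(record)
        return record["entry_hash"]

    def verify(self, content: bytes) -> bool:
        """Check that the content appears in an untampered chain."""
        prev = "0" * 64
        found = False
        for e in self.entries:
            body = {k: e[k] for k in ("content_hash", "source", "timestamp", "prev_hash")}
            if e["prev_hash"] != prev:
                return False  # link to the previous entry is broken
            if sha256(json.dumps(body, sort_keys=True).encode()) != e["entry_hash"]:
                return False  # this entry has been altered after the fact
            if e["content_hash"] == sha256(content):
                found = True
            prev = e["entry_hash"]
        return found

if __name__ == "__main__":
    ledger = ContentLedger()
    story = b"Original wire-service photo, 2018-07-12"
    ledger.register(story, source="agency.example")
    print(ledger.verify(story))                             # True
    print(ledger.verify(b"Doctored version of the photo"))  # False
```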

References

9 Zeynep Tufekci, “YouTube, The Great Radicalizer,” The New York Times, March 10, 2018,
https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html.
10 Soroush Vosoughi, Deb Roy and Sinan Aral, “The spread of true and false news online,”
Science Magazine, 359 no. 6380 (March 9, 2018), 1146-1151.
11 Miles Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention
and Mitigation,” (University of Oxford, February 2018), 16, https://maliciousaireport.com/.
12 Tim Hwang, “Digital Disinformation: A Primer,” The Atlantic Council, September 25, 2017, 7.
13 Toomas Hendrik Ilves, “Guest Post: Is Social Media Good or Bad for Democracy?”, Facebook
Newsroom, January 25, 2018, https://newsroom.fb.com/news/2018/01/ilves-democracy/; and
Sue Halpern, “Cambridge Analytica, Facebook and the Revelations of Open Secrets,” The New
Yorker, March 21, 2018, https://www.newyorker.com/news/news-desk/cambridge-analytica-facebook-and-the-revelations-of-open-secrets.
14 Michael W. Bader, “Reign of the Algorithms: How ‘Artificial Intelligence’ is Threatening Our Freedom,” May 12, 2016, https://www.gfe-media.de/blog/wp-content/uploads/2016/05/Herrschaft_der_Algorithmen_V08_22_06_16_EN-mb04.pdf.
15 Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention and
Mitigation,” 46.
16 Igal Zeifman, “Bot Traffic Report 2016,” Imperva Incapsula blog on Incapsula.com, January
24, 2017, https://www.incapsula.com/blog/bot-traffic-report-2016.html.
17 Samuel C. Woolley and Douglas R. Guilbeault, “Computational Propaganda in the United
States of America: Manufacturing Consensus Online,” Working paper (University of Oxford,
2017), 4, http://blogs.oii.ox.ac.uk/politicalbots/wp-content/uploads/sites/89/2017/06/Comprop-
USA.pdf; and Samuel C. Woolley and Philip N. Howard, “Political Communication,
Computational Propaganda, and Autonomous Agents,” International Journal of Communication,
10 (2016), 4885, http://ijoc.org/index.php/ijoc/article/download/6298/1809.
18 Woolley and Howard, “Political Communication, Computational Propaganda, and
Autonomous Agents,” 4885.
19 Alessandro Bessi and Emilio Ferrara, “Social bots distort the 2016 U.S. presidential election
online discussion,” First Monday, 21 no. 11 (November 2016), 1.
20 Travis Morris, “Extracting and Networking Emotions in Extremist Propaganda,” (paper
presented at the annual meeting for the European Intelligence and Security Informatics
Conference, Odense, Denmark, August 22-24, 2012), 53-59.
21 Kent Walker and Richard Salgado, “Security and disinformation in the U.S. 2016 election:
What we found,” Google blog, October 30, 2017, https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/google_US2016election_findings_1_zm64A1G.pdf.
22 Morris, “Extracting and Networking Emotions in Extremist Propaganda,” 53-59.
23 Craig Stewart, “Adobe prototypes ‘Photoshop for audio,’” Creative Bloq, November 03, 2016,
http://www.creativebloq.com/news/adobe-prototypes-photoshop-for-audio.
24 Justus Thies et al., “Face2Face: Real-time Face Capture and Reenactment of RGB Videos,”
Niessner Lab, 2016, http://niessnerlab.org/papers/2016/1facetoface/thies2016face.pdf.
25 Erica Anderson, “Building trust online by partnering with the International Fact Checking
Network,” Google’s The Keyword blog, October 26, 2017, https://www.blog.google/topics/journalism-news/building-trust-online-partnering-international-fact-checking-network/; and Jackie Snow, “Can AI Win the War Against Fake News?” MIT Technology Review, December 13, 2017, https://www.technologyreview.com/s/609717/can-ai-win-the-war-against-fake-news/.
26 “Technology,” AdVerif.ai, http://adverifai.com/technology/.
27 Onur Varol et al., “Online Human-Bot Interactions: Detection, Estimation and
Characterization,” Preprint, submitted March 27, 2017, 1, https://arxiv.org/abs/1703.03107.
28 Lee Rainie, Janna Anderson, and Jonathan Albright, “The Future of Free Speech, Trolls,
Anonymity, and Fake News Online,” (Pew Research Center, March 2017),
http://www.pewinternet.org/2017/03/29/the-future-of-free-speech-trolls-anonymity-and-fake-news-online/; and Alessandro Bessi and Emilio Ferrara, “Social bots distort the 2016 U.S.
Presidential election online discussion,” First Monday, 21 no. 11 (November 2016),
http://firstmonday.org/ojs/index.php/fm/article/view/7090/5653.
29 Adrienne Lafrance, “The Internet is Mostly Bots,” The Atlantic, January 31, 2017,
https://www.theatlantic.com/technology/archive/2017/01/bots-bots-bots/515043/.
30 “PUBLIQ goes public: The blockchain and AI company that fights fake news announces the
start of its initial token offering,” PUBLIQ, November 14, 2017,
https://publiq.network/en/7379D8K2.

Politics as a field of imperfect cognitive states

Our epistemologists have been thinking about knowledge for a long time and about how to define it. The standard view is that knowledge is justified true belief; but that hardly settles the matter since all three terms – justification, truth, and belief – are in need of further clarification. When it comes to the question where knowledge is to be found, we have tended to look at mathematics, or physics, or at cases where an object is clearly perceived under ideal conditions.

But in social and political life we are rarely dealing with knowledge in this sense. In these domains we encounter conjectures, surmises, guesswork, “convictions,” presumption, suspicion, interpretations, attempts to make sense, etc. I am particularly interested in states of uncertainty and disorientation because these seem to prevail now in our politics.

I have argued in Politics and the Search for the Common Good that politics is inherently a domain of uncertainty. Uncertainty affects all aspects of political life and brings about its characteristic volatility. Disorientation, on the other hand, is a malady that disrupts politics and can destroy political institutions. But the two are connected, and for this reason we will need to look at them and their interrelation. We are uncertain when we don’t know (don’t know for sure) what has been, what is, or what will be. The difficulty we have in separating news from “fake news,” information from misinformation, exemplifies this condition. We are disoriented, on the other hand, when we don’t understand what has been, what is, or what will be because we lack adequate words and concepts to do so. Our inability to analyze our current condition, to say what kind of political transformation we are experiencing and what might come after, may count as an illustration. Though similar in some respects and interrelated as they are, uncertainty and disorientation belong, nonetheless, to different cognitive registers: one concerns our knowledge, the other our understanding.

We need to distinguish, however, between uncertainty and the feeling of uncertainty and likewise between disorientation and the feeling of being disoriented. The two are easily confused. The feeling is something that may or may not attach itself to an actual state of uncertainty or an actual condition of disorientation. But it is a secondary (and second-level) psychological state that relates to a primary (first-level) cognitive condition. We may be objectively uncertain about what is to come but feel confident that we know. In other words, we think we know when we really don’t. We are then said to suffer from a sense of false certainty. False certainty is a common feature of political life, and it goes hand in hand with its indubitable uncertainties. In his book Fire and Fury Michael Wolff writes that Donald Trump’s White House staff and members of his cabinet had become aware after a few months of “the baldly obvious fact that the president did not know enough, did not know what he didn’t know, did not particularly care, and, to boot, was confident if not serene in his unquestioned certitudes.” What holds for uncertainty applies also to disorientation. We may feel that we understand what is going on when this is, in fact, not so. Disorientation is, in this respect, like dementia. Disoriented as we are, we may still believe, just like the demented, that we are doing fine, are of clear mind, grasp what is going on, and have things in hand.

We need to distinguish, moreover, between perceptual and conceptual forms of disorientation, for it is the latter that is characteristically at stake in politics. We may be disoriented when we wake up in an unfamiliar room or when we are caught in a dense fog. Then we don’t know whether to turn left or right and find ourselves frozen in place. Even in the case of perceptual disorientation we must, of course, distinguish between being and feeling disoriented. Waking up in an unfamiliar pitch-black room, we may still believe that we understand its layout but then bump unexpectedly into a wall. But both being perceptually disoriented and feeling perceptually disoriented are different from not knowing how to describe our situation adequately or not being able to act politically in an appropriate way because we lack the concepts for analyzing where we are and where we need to go.

To make these distinctions does not mean to downplay the importance of feelings of uncertainty or disorientation in politics. Such feelings may generate unease, anxiety, even nausea, and these can stop us from acting or can drive us into precipitous action. But such feelings are still secondary to actual states of uncertainty and disorientation, which have a far more direct impact on what happens. Actual uncertainty and disorientation, instead of creating anxiety, are often accompanied by opposite feelings of certainty and orientation; the resulting smugness may have an even more devastating effect than felt uncertainty and disorientation.

These insights have been captured well in Plato’s Republic. In its seventh book we read of humans living in an underground cave – an allegory for social and political life as we know it. Tied down, hand and foot, the inhabitants of the cave can see only shadows on the wall before them, not what produced them and also not the world beyond their cave. They are not only ignorant of the things beyond their range of vision; they are also unable to understand their own situation and therefore cannot conceive of any alternative to their pitiful state. If any one of the inhabitants of the cave manages to turn around and see what produced the shadow play, he will, however, be “pained and dazzled and unable to see the things” whose shadows he had seen before. (514c) And if he should actually reach daylight, he will be dazzled once more until his eyes have adjusted to the above-ground reality. But should he return into the darkness of the cave, he would once again be confused and “behave awkwardly and appear completely ridiculous.” (517d) There are thus for Plato two states of political disorientation: the first when one comes from the darkness of ignorance into the light of knowledge, and the second when one returns from this light into the darkness of human social life. The inhabitants of the cave are convinced that they know and understand reality, but they are, in fact, familiar only with shadows and lack the concepts to understand their actual situation. They are both ignorant and disoriented but feel all the while certain and oriented. By contrast, the one who escapes from the cave will at first be thrown into a state of confusion. His felt uncertainty will make him realize that he lacks the words to understand reality as it is. He will be moved therefore to acquire the concepts necessary to describe how things are and in what way the world of common human life is one of illusion. But when he returns to the human habitat and encounters the false certainties of its inhabitants, he may not fare well. They may deride and resent him and even seek to get rid of him in order to preserve their precious illusions.

The Atomization of Knowledge

We have learned that the ocean waves pulverize our plastic debris, which is then consumed as dust by the fish we eat. The circle is closed, and the poisons we have created come back to us in this altered form. The internet pulverizes human knowledge and feeds it back to us as unconnected bits of information. Our minds are bound to be ultimately overwhelmed by this new kind of poisonous debris.

Digital technology has had the peculiar effect of atomizing human knowledge, and this in two ways. It has favored the creation of small bits of information which are passed around in digital messages. And it has overwhelmed our ability to concentrate on extended lines of reasoning. There is too much information, tempting us to move quickly from one bit to another. We are distracted by all these bits of knowledge that are offered to us so enticingly on all the websites of the world. This is already showing disastrously in our students, who find it increasingly difficult to read whole books. We feed them instead with PowerPoint slides that contain carefully selected bits of information. Even this blog illustrates what is happening. Blogs are signals of the decreasing attention spans of those who write them and those who consume them.

One consequence of all this is that we find it increasingly difficult to weigh and assess the information that comes to us. We begin to believe things just because they have appeared somewhere on the internet. We lose our capacity to ask where this information comes from and who has authored it. The disunity of knowledge thus acquires a new and more extreme character. Human knowledge is a dispersed structure; there is disunity in it, but there are also clusters of density and integration (theories, fields, disciplines, world-views). It is this equilibrium of unity and dispersion that is now coming undone.

The result of all this is a wholly new condition for human knowledge. So, we need an epistemology that takes these developments into account. Call it a critical epistemology of the internet.

The disunity of knowledge

Our sharpest break with the tradition has come with the realization of the disunity of knowledge (of thought, the mind, the world, and pretty much everything else that concerns philosophy). We are no longer trying to construct “a system;” we are not looking for “the foundations” of a single structure; we have abandoned the belief in completeness and in our capacity to make everything cohere.

A vivid expression of this revolt against the entire philosophical tradition from Aristotle to Hegel is due to Nietzsche, who declared his “profound aversion to reposing once and for all in any one total view of the world” and proclaimed, instead, the “fascination of the opposing point of view: refusal to be deprived of the stimulus of the enigmatic.” (The Will to Power, 470) The remark provides a key to Nietzsche’s writing and thinking. It helps to make sense of his aphoristic style as well as of his belief in many perspectives. Not that readers of Nietzsche have always appreciated this point. Nietzsche himself wrote in a sketch for his last book: “I mistrust all systems and systematizers; perhaps one [of them] will even discover behind this book the system I have sought to avoid. The will to system is a dishonesty for a philosopher.”

Another expression of this same idea is found in Wittgenstein’s later writings. He asks himself there what reasons he has for trusting text-books of physics, and he answers: “I have no grounds for not trusting them. And I trust them. I know how such books are produced – or rather, I believe I know. I have some evidence, but it does not go very far and is of a very scattered kind. I have heard, seen, and read various things.” (On Certainty, 600) This is, of course, not a biographical note but meant to reveal the status of our usual claims to knowledge. What we call knowledge is, indeed, of a scattered kind. Linked to this thought is Wittgenstein’s realization that the mind (or soul or self) is not a unity – a rejection of the conviction that the tradition had made a supporting pillar of its belief in the immortality of the soul. (A simple substance, it says, cannot disappear through a process of disintegration.)

Michel Foucault speaks of different discourses with their own distinctive internal rules and he points out that not everything possible is actually ever said. “We must look, therefore, for the principle of rarification or at least of non-filling of the field of possible formulations… The discursive formation is not therefore a developing totality, … it is a distribution of gaps, voids, absences, limits, divisions.” (The Archaeology of Knowledge, p. 119) And again, in slightly different language: “The archive cannot be described in its totality… It emerges in fragments, regions, levels…” (p. 130)

While Nietzsche, Wittgenstein, and Foucault agree that there is nothing uniquely foundational for philosophy to think about, they do not mean to say that it doesn’t matter what we make the subject of our thinking. Some philosophical questions are clearly more urgent than others. For us the decisive issue is now our individual, social, and political existence as human beings. The pressing issue is what it means to be human and all three, Nietzsche, Wittgenstein, and Foucault, wrestled with that.