Common Sense Philosophy/Chapter 17

From China Studies Wiki

Chapter 17: Surveillance, Control, and the Emancipation from External Steering

The Eyes That Never Close

In 1787, the English philosopher Jeremy Bentham designed a prison he called the Panopticon -- a circular structure in which a single watchman, positioned in a central tower, could observe all inmates without the inmates being able to tell whether they were being watched at any given moment. The genius of the design, Bentham understood, was not that the watchman could see everything but that the inmates would behave as if he could. The mere possibility of surveillance would produce self-discipline. The prisoners would become their own jailers.

Bentham never built his prison. But his concept has achieved a realization so comprehensive that it would have astonished him. We live, in the early twenty-first century, in a global Panopticon -- a world in which the technological capacity to monitor, record, analyze, and predict human behavior has reached a level that makes the surveillance apparatus of every previous totalitarian regime look primitive by comparison. And the watchmen are not one but many: governments, corporations, algorithms, and the millions of networked cameras, microphones, and sensors that constitute the infrastructure of modern life.

This chapter examines the threat that pervasive surveillance poses to human freedom and dignity. It argues that the capacity for surveillance, in the hands of both states and corporations, has grown so rapidly and so far beyond the capacity of law and ethics to constrain it that it now represents one of the gravest obstacles to civilizational progress. And it proposes that the right to privacy -- the right to conduct one's life without being watched, tracked, profiled, scored, and manipulated by external powers -- is not a luxury but a civilizational necessity, as fundamental to human flourishing as the rights to free expression, free assembly, and freedom from arbitrary detention.

The Surveillance State

From Secret Police to Total Information Awareness

The desire of states to monitor their subjects is as old as the state itself. The Roman Empire deployed frumentarii (grain collectors who doubled as intelligence agents) to report on provincial unrest. The Venetian Republic maintained an elaborate network of informers. The French Revolution's Committee of Public Safety industrialized political surveillance. And the twentieth century's totalitarian regimes -- Nazi Germany's Gestapo, the Soviet Union's KGB, East Germany's Stasi -- developed surveillance to a degree that seemed, at the time, to represent the outer limit of what was humanly possible.

It was nothing of the kind. The Stasi, widely regarded as the most effective domestic surveillance agency in history, employed approximately 91,000 full-time agents and relied on an estimated 189,000 civilian informers to monitor a population of 16 million -- roughly one informer for every 85 citizens. This produced an archive of approximately 111 kilometers of files. Impressive as an exercise in bureaucratic thoroughness, it was also enormously labor-intensive, inherently limited by the number of human observers available, and ultimately unable to prevent the collapse of the regime it served.[1]

Contemporary digital surveillance operates on an entirely different scale. A single modern intelligence agency can collect, store, and analyze more data in a day than the Stasi accumulated in four decades. The revelations by Edward Snowden in 2013 exposed the scope of the U.S. National Security Agency's surveillance programs: the bulk collection of phone metadata for hundreds of millions of Americans; the PRISM program, which accessed data directly from the servers of major technology companies; the XKeyscore system, which could search the content of virtually all internet communications; and collaborative arrangements with the intelligence agencies of the United Kingdom, Canada, Australia, and New Zealand (the "Five Eyes" alliance) that effectively created a global surveillance network.[2]

The technological infrastructure that makes this surveillance possible -- the internet, mobile telephony, cloud computing, GPS, facial recognition, and the ever-expanding "Internet of Things" -- was not designed for surveillance. It was designed for communication, commerce, and convenience. But the same features that make these technologies useful also make them instruments of monitoring: every phone call generates metadata, every internet search creates a record, every credit card transaction leaves a trace, every smartphone broadcasts its location, and every social media post reveals preferences, associations, and states of mind.

The result is that the default condition of modern life is surveillance. You are tracked when you drive (automatic license plate readers), when you walk (CCTV cameras with facial recognition), when you communicate (email and phone metadata collection), when you shop (purchase histories linked to loyalty cards and credit cards), when you browse the internet (cookies, tracking pixels, browser fingerprinting), when you use a smartphone (location data, app usage data, microphone and camera access), and increasingly when you simply exist in a public space (gait recognition, emotion detection, biometric identification).

China's Social Credit System: The Panopticon Realized

If any contemporary system represents the full realization of Bentham's Panoptic vision, it is China's Social Credit System (SCS) -- a comprehensive framework for monitoring, evaluating, and scoring the behavior of individuals, businesses, and government officials, with real consequences for those who score well (preferential access to services, loans, and opportunities) and those who score poorly (restrictions on travel, education, employment, and social participation).

The SCS, as implemented through various municipal pilot programs and the national "blacklist" system, integrates data from an extraordinary range of sources: financial records, court judgments, traffic violations, online behavior, social media posts, purchasing patterns, and -- in some implementations -- reports from neighbors and community monitors. The resulting "social credit score" functions as a comprehensive behavioral profile that determines an individual's access to the privileges of modern life.

The system's defenders, including Chinese government officials, argue that it promotes trust in a society that has experienced rapid modernization, widespread fraud, and a deficit of institutional trust. They point to the system's role in combating business fraud, enforcing court judgments, and improving regulatory compliance. These are not trivial concerns -- trust is indeed essential to a functioning society, and China has legitimate reasons to worry about it.

But the costs are staggering. The SCS represents the most ambitious attempt in human history to subject human behavior to comprehensive external control. It transforms every aspect of daily life into a performance evaluated by an opaque algorithmic system. It creates a society in which the rational response to any situation is not "what do I think is right?" but "what will maximize my score?" It punishes dissent, nonconformity, and deviation from approved norms -- not through the crude mechanisms of arrest and imprisonment (though those remain available) but through the subtler and in some ways more effective mechanism of social and economic exclusion.

The philosopher Byung-Chul Han, in his analysis of what he calls "psychopolitics," has argued that this form of control is more insidious than traditional authoritarianism precisely because it does not feel like control. Where the Stasi relied on fear and coercion, the social credit system operates through incentives and nudges. Where the Stasi's subjects knew they were being oppressed, the subjects of the social credit system may experience their compliance as voluntary -- even virtuous. The internalization of surveillance, Han argues, is the perfection of domination.[3]

The Western Surveillance State: Different but Not Innocent

It would be a mistake -- and a politically convenient one -- to treat the surveillance state as a uniquely Chinese or authoritarian phenomenon. Western democracies have built surveillance capabilities that differ from China's in their institutional framework and stated purposes but rival it in technical capacity.

The post-9/11 security state in the United States has produced: the Patriot Act, which expanded government surveillance powers with minimal judicial oversight; the Foreign Intelligence Surveillance Court, which operates in secret and has approved virtually every government surveillance request; the mass collection of communications metadata by the NSA; the FBI's use of national security letters to compel the disclosure of financial and communications records without judicial approval; and the Department of Homeland Security's extensive border surveillance apparatus, including warrantless searches of electronic devices at ports of entry.

The United Kingdom's Investigatory Powers Act of 2016 -- colloquially known as the "Snooper's Charter" -- grants government agencies the power to access internet connection records, to conduct bulk interception of communications, and to require technology companies to remove encryption. France, Germany, and other European democracies have expanded surveillance powers in response to terrorism, often with inadequate safeguards and oversight.

The difference between democratic and authoritarian surveillance is real but narrower than democratic publics like to believe. The institutional safeguards that distinguish democratic surveillance -- judicial oversight, legislative accountability, a free press capable of exposing abuses -- are important. But they are also fragile. The Snowden revelations demonstrated that even in the United States, with its robust constitutional protections and independent judiciary, government surveillance programs operated for years with minimal oversight, dubious legality, and systematic deception of the public and its elected representatives.

The honest conclusion is that the surveillance state is a feature of modernity, not of any particular political system. The technology that enables it is available to all governments, and the temptation to use it is universal. The question is not whether governments will surveil their citizens -- they will, and they already do -- but whether effective constraints exist to prevent surveillance from destroying the privacy and autonomy that make a free society possible.

Corporate Surveillance: Zuboff's Surveillance Capitalism

The Business Model of Behavioral Prediction

The most original and alarming analysis of contemporary surveillance is not about governments at all. It is about corporations. Shoshana Zuboff, in The Age of Surveillance Capitalism (2019), describes a new form of capitalism in which human experience is treated as free raw material -- "behavioral surplus" -- to be harvested, processed, and sold as predictions of future behavior.[4]

The basic structure is this. Companies like Google, Facebook (Meta), and Amazon provide services that are ostensibly free -- search, social networking, shopping, email, cloud storage. In exchange, they collect vast amounts of data about their users: what they search for, what they click on, who they communicate with, where they go, what they buy, what they read, what they watch, how long they linger on a page, what makes them angry, what makes them happy, what makes them vulnerable.

This data is then analyzed using machine learning algorithms to produce detailed behavioral profiles -- models that predict what a user will do, think, want, and feel. These predictions are then sold to advertisers, political campaigns, insurance companies, employers, and anyone else willing to pay for the ability to influence human behavior.

The result is an economic system in which the most profitable activity is not producing goods or services but producing behavioral predictions -- and in which the accuracy of those predictions depends on the depth and comprehensiveness of surveillance. The more data companies collect about us, the better they can predict (and influence) our behavior, and the more valuable those predictions become. The economic logic drives inexorably toward total surveillance, because every unmonitored moment is a missed opportunity for data extraction.
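The pipeline sketched above -- surveilled signals in, behavioral probabilities out -- can be made concrete with a deliberately toy model. Everything here is invented for illustration: the signal names, the hand-set weights, and the advertising scenario bear no resemblance to any real system, which would learn millions of weights from billions of events. Only the structure is the point: a richer profile moves the prediction away from the uninformed baseline, which is why the economic logic rewards collecting more.

```python
import math

# Toy behavioral-prediction sketch: a hand-set logistic model scoring
# "likelihood of clicking a luxury-watch ad" from surveilled signals.
# All names and weights are illustrative assumptions, not real data.

def click_probability(signals: dict[str, float]) -> float:
    weights = {
        "visits_to_watch_sites": 1.2,
        "income_estimate": 0.8,
        "late_night_browsing": 0.3,
    }
    # Weighted sum of whatever signals were harvested, minus a bias term.
    score = sum(weights[k] * signals.get(k, 0.0) for k in weights) - 2.0
    return 1 / (1 + math.exp(-score))  # logistic squashing into (0, 1)

sparse = {"visits_to_watch_sites": 0.1}   # little is known about this person
rich = {                                   # a deep surveillance profile
    "visits_to_watch_sites": 0.9,
    "income_estimate": 0.8,
    "late_night_browsing": 0.5,
}

# More harvested signals shift the estimate away from the baseline,
# making the prediction -- and hence the profile -- more valuable.
print(click_probability(sparse), click_probability(rich))
```

The design choice worth noticing is that nothing in the model cares *why* a signal predicts behavior, only that it does -- which is exactly what makes indiscriminate collection profitable.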

The Manipulation Machine

Surveillance capitalism is troubling not only because it violates privacy but because it enables manipulation on a scale and with a precision that previous generations could not have imagined.

The algorithms that power social media platforms are not designed to inform, educate, or connect. They are designed to maximize "engagement" -- to keep users on the platform as long as possible, because more time on the platform means more data to collect and more advertising to sell. Research has consistently shown that the content that generates the highest engagement is content that provokes strong emotions -- anger, outrage, fear, disgust. The algorithms therefore systematically amplify the most extreme, divisive, and emotionally provocative content, regardless of its accuracy or social value.[5]

The consequences are visible in every democratic society: the erosion of shared reality, the fragmentation of public discourse into mutually hostile echo chambers, the rise of conspiracy theories and disinformation, the radicalization of vulnerable individuals, and the degradation of democratic deliberation. These are not unintended side effects of well-meaning technology. They are the predictable consequences of a business model that profits from emotional manipulation and has no economic incentive to stop.

The "nudging" paradigm -- the use of behavioral science to influence choices through the design of decision environments -- adds another layer to the manipulation apparatus. Cass Sunstein and Richard Thaler popularized nudging as a tool for benevolent paternalism: designing cafeteria layouts to encourage healthy eating, setting default options for retirement savings, simplifying enrollment forms for public programs. In the hands of states and corporations with less benevolent intentions, the same techniques become instruments of control: manipulating consumer choices, shaping political opinions, and directing behavior toward outcomes that serve the manipulator rather than the manipulated.[6]

The Philosophy of Privacy

Why Privacy Matters

Privacy is often treated as a secondary value -- nice to have but not essential, and certainly less important than security, efficiency, or convenience. This is a profound mistake. Privacy is not a luxury. It is a precondition for virtually every other freedom.

The argument has several dimensions. First, privacy is essential to autonomy. To be autonomous is to be self-governing -- to form one's own beliefs, make one's own decisions, and live according to one's own values. This requires a space -- psychological and physical -- in which one can think, experiment, make mistakes, and develop without being watched, judged, and corrected by external observers. A person who knows that every thought they express, every book they read, every website they visit, every person they meet with is being recorded and analyzed will inevitably self-censor -- will avoid the heterodox thought, the unpopular opinion, the risky experiment. The result is not security but conformity, and conformity is the death of autonomy.

Second, privacy is essential to dignity. The capacity to control what others know about us is a fundamental dimension of selfhood. We present different aspects of ourselves in different contexts -- to our employers, our friends, our lovers, our children -- and this selective self-presentation is not dishonesty but an expression of the complexity of human identity. A system that strips away this complexity -- that reduces a person to a comprehensive data profile accessible to anyone with the right clearance or the right payment -- denies the person's status as a self-determining agent and treats them as an object of analysis.

Third, privacy is essential to democracy. Democratic politics depends on the ability of citizens to form and express political opinions without fear of retaliation. This has been understood since the introduction of the secret ballot in the nineteenth century: if your vote is public, it can be coerced. The same principle applies to political expression more broadly. Citizens who know that their political views are monitored by the government -- or by their employer, their insurance company, or an algorithm that will adjust their social credit score -- will self-censor, and the result will be a degraded, superficial, and ultimately meaningless political discourse.

Fourth, privacy is essential to intimacy. The philosopher Julie Inness has argued that privacy is constitutive of intimate relationships: that what makes a relationship intimate is precisely the sharing of information, experiences, and vulnerabilities that one does not share with the world at large.[7] In a world without privacy, intimacy becomes impossible -- because there is nothing left to share selectively, nothing that distinguishes the knowledge that lovers have of each other from the knowledge that an algorithm has of them.

Foucault and the Internalization of Control

Michel Foucault's analysis of the Panopticon, in Discipline and Punish (1975), remains the most penetrating account of how surveillance transforms the relationship between power and the individual. Foucault argued that the genius of the Panoptic design was not its capacity for observation but its capacity for producing self-disciplined subjects. When you cannot tell whether you are being watched, you must behave at all times as if you are. The surveillance is internalized: the external eye becomes an internal monitor, and the individual becomes, in effect, their own prison guard.[8]

Foucault saw this process operating not only in prisons but in all the "disciplinary institutions" of modern society -- schools, hospitals, factories, barracks. Each of these institutions subjects individuals to regimes of observation, classification, and normalization that produce "docile bodies" -- subjects who have internalized the norms of the institution and who regulate their own behavior accordingly.

The digital Panopticon extends this process beyond anything Foucault imagined. The disciplinary institutions Foucault analyzed were spatially bounded -- you could leave the school, the factory, the hospital. The digital Panopticon has no boundaries. It follows you everywhere -- into your home, your bedroom, your most private moments. It knows what you read, what you search for at 3 a.m., what you whisper to your phone's voice assistant, what your face reveals about your emotional state as you walk down the street.

And the response of the surveilled subject is exactly what Foucault predicted: self-discipline, self-censorship, and the gradual internalization of the norms that the surveillance enforces. Studies have consistently shown that people who are aware of being surveilled change their behavior -- they search for less controversial topics, express fewer dissenting opinions, avoid associations that might be flagged as suspicious, and generally conform to a blander, safer, more predictable version of themselves. This is the "chilling effect" -- and it operates even when the surveillance is entirely legal and the person has "nothing to hide."

The Right to Be Left Alone

A Positive Program

If the analysis in this chapter is correct, then the defense of privacy in the digital age requires not just philosophical argument but institutional action -- laws, regulations, technical standards, and cultural norms that impose meaningful constraints on the surveillance capacity of both states and corporations.

First, encryption must be protected as a fundamental right. End-to-end encryption -- the technology that ensures only the sender and recipient of a communication can read its content -- is the single most effective tool for protecting privacy against both government surveillance and corporate data harvesting. Governments that demand "backdoors" in encryption -- the ability to access encrypted communications under certain circumstances -- are demanding the destruction of privacy itself, because a backdoor that exists for one purpose can and will be exploited for others. The technical reality is unambiguous: either communications are encrypted and private, or they are not. There is no middle ground.

Second, data minimization must become the legal default. Companies should be prohibited from collecting data they do not need for the specific service they provide. A search engine does not need to know your location, your browsing history, or your social connections to return search results. A messaging app does not need to analyze the content of your messages to deliver them. The current model, in which companies collect the maximum possible data on the theory that it might someday be useful, must be replaced by a model in which data collection is limited to what is necessary and proportionate.

Third, meaningful consent must be required. The "consent" that users currently give to data collection -- by clicking "I agree" on a multi-page terms of service document that no one reads -- is a legal fiction. Genuine consent requires that the person understand what they are agreeing to, have a meaningful alternative if they refuse, and be able to withdraw consent at any time. In practice, this means that consent to data collection must be opt-in rather than opt-out, that services must be available to users who refuse to share their data, and that data subjects must have the right to have their data deleted.

Fourth, algorithmic transparency must be mandated. When algorithms make decisions that significantly affect people's lives -- credit decisions, hiring decisions, insurance pricing, criminal sentencing, social media content curation -- the people affected must have the right to know how those decisions were made, to challenge them, and to have them reviewed by a human being. The current regime of algorithmic opacity, in which life-altering decisions are made by proprietary systems whose workings are trade secrets, is incompatible with basic principles of due process and accountability.

Fifth, the surveillance powers of the state must be constrained by law. Government surveillance should require judicial authorization based on individualized suspicion. Bulk data collection -- the indiscriminate harvesting of communications data without regard to whether the individuals monitored are suspected of any wrongdoing -- should be prohibited. Intelligence agencies should be subject to meaningful oversight by elected legislatures. And whistleblowers who expose illegal surveillance should be protected, not prosecuted.

Sixth, digital rights must be recognized as human rights. The right to privacy, the right to encryption, the right to anonymity online, the right to be free from algorithmic manipulation, and the right to "digital self-determination" -- the right to control what data is collected about you and how it is used -- must be recognized as fundamental rights, protected by law and international treaty, with the same status as freedom of expression and freedom from torture.

The Objection from Security

The standard objection to all of the above is security: that surveillance is necessary to prevent terrorism, crime, and other threats to public safety, and that constraining surveillance will cost lives.

This objection must be taken seriously. Terrorism is real. Crime is real. The desire to be safe is legitimate. But the security argument must be evaluated honestly rather than accepted as a trump card that overrides all other considerations.

The empirical evidence for the effectiveness of mass surveillance in preventing terrorism is remarkably thin. The U.S. government's own review boards, established after the Snowden revelations, found that the NSA's bulk metadata collection program had not demonstrably prevented a single terrorist attack. The most devastating terrorist attacks in the West -- 9/11, the London bombings, the Paris attacks, the Brussels bombings -- were carried out despite extensive surveillance capabilities and, in most cases, despite specific intelligence warnings about the attackers that were missed or not acted upon. The problem, in other words, was not a lack of data but a failure to analyze and act on the data already available -- a problem that mass surveillance exacerbates rather than solves by burying relevant intelligence in a sea of irrelevant information.[9]

More fundamentally, the security argument operates on a logical fallacy: the assumption that any measure that might conceivably improve security is justified, regardless of its costs. By this logic, the government could install cameras in every private home, require GPS tracking devices in every citizen's body, and prohibit all encrypted communication -- and each of these measures could be defended on the grounds that it might prevent some crimes. The question is not whether surveillance can improve security (it can, in some circumstances and to some degree) but whether the improvement justifies the costs -- in privacy, in autonomy, in democratic health, and in the trust between citizens and their government.

A society that sacrifices freedom for security ends up with neither. This is not a theoretical abstraction. It is the empirical lesson of every authoritarian regime that has justified the repression of its citizens on grounds of security -- from the Soviet Union to East Germany to the post-9/11 security state. The price of security through surveillance is a society of conformity, self-censorship, and institutional distrust -- a society that may be safe from external threats but has surrendered the internal freedoms that make safety worth having.

Conclusion: The Architecture of Freedom

The surveillance threat is, at bottom, a design problem. The technologies that enable surveillance -- the internet, mobile computing, artificial intelligence -- are not inherently hostile to freedom. They are tools, and tools can be designed to protect privacy as easily as they can be designed to destroy it. End-to-end encryption, zero-knowledge protocols, decentralized systems, privacy-by-design principles, and open-source software all demonstrate that it is technically possible to build a digital infrastructure that respects human privacy and autonomy.

The obstacle is not technological but political. Governments want the power to surveil because surveillance is power. Corporations want the ability to monitor because monitoring is profit. And the public, lulled by convenience and reassured by the hollow promise of security, has been slow to recognize the magnitude of the threat.

The task of a post-ideological philosophy is to name this threat clearly and to insist that the design of our technological infrastructure is not a technical question but a moral and political one. The choice between a society of surveillance and a society of freedom is being made now, in the design of the systems we use every day, in the laws we enact (or fail to enact), and in the norms we accept (or refuse to accept). It is a choice that will shape the conditions of human life for generations to come.

And it is a choice that, if we are serious about the values defended in this book -- human dignity, individual autonomy, democratic self-governance, the right to think freely and live without coercion -- admits of only one answer. The right to be left alone is not negotiable. The architecture of freedom must be built, defended, and maintained with the same seriousness and the same urgency that we bring to the defense of any other fundamental right. Because a society that watches everything controls everything. And a society that controls everything is not a civilization. It is a cage.

  1. The most comprehensive English-language account of the Stasi's surveillance apparatus is Anna Funder, Stasiland: Stories from Behind the Berlin Wall (London: Granta Books, 2003). For the statistics, see Helmut Müller-Enbergs, ed., Inoffizielle Mitarbeiter des Ministeriums für Staatssicherheit (Berlin: Ch. Links Verlag, 1998).
  2. The most comprehensive journalistic account of the Snowden revelations is Glenn Greenwald, No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State (New York: Metropolitan Books, 2014). The technical details are documented in the archive of NSA documents published by The Intercept.
  3. Byung-Chul Han, Psychopolitics: Neoliberalism and New Technologies of Power, trans. Erik Butler (London: Verso, 2017). Han's argument that neoliberal psychopolitics is more effective than disciplinary power because it exploits freedom rather than suppressing it is provocative and important.
  4. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: PublicAffairs, 2019). Zuboff's concept of "surveillance capitalism" has become the standard framework for understanding the political economy of digital platforms.
  5. The relationship between algorithmic amplification and political polarization has been documented by numerous studies. See, in particular, Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You (New York: Penguin, 2011); and the internal research by Facebook's own data scientists, disclosed by whistleblower Frances Haugen in 2021.
  6. Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness (New Haven: Yale University Press, 2008). For a critical perspective, see Luc Bovens, "The Ethics of Nudge," in Preference Change: Approaches from Philosophy, Economics and Psychology, ed. Till Grüne-Yanoff and S.O. Hansson (Berlin: Springer, 2009).
  7. Julie C. Inness, Privacy, Intimacy, and Isolation (Oxford: Oxford University Press, 1992). Inness's argument that privacy is constitutive of intimacy, rather than merely instrumental to it, remains one of the most important contributions to the philosophy of privacy.
  8. Michel Foucault, Discipline and Punish: The Birth of the Prison, trans. Alan Sheridan (New York: Vintage Books, 1977; French original, 1975). Foucault's analysis of the Panopticon appears in Part Three, Chapter 3.
  9. The Privacy and Civil Liberties Oversight Board (PCLOB), in its January 2014 report on the NSA's telephone records program, found that the program had not been essential to preventing any terrorist attack. See PCLOB, Report on the Telephone Records Program Conducted under Section 215 of the USA PATRIOT Act and on the Operations of the Foreign Intelligence Surveillance Court (Washington, DC, 2014).