Those who are most enamored of social order must, to remain strong, tolerate, even promote dissidence and opposition; because, to remain enthusiastic and based on common belief [the political order] needs a constant influx of new discoveries and initiatives, that prod and rouse it with the sharp end of their strangeness.
Gabriel Tarde, La logique sociale, 1895
Inventions often have a charisma that exceeds their utility. They invite positive values, welcome new possibilities for human betterment, feed the fertile imaginations of inventors and revolutionaries, and provide tools for dissent and different thinking. Like that of all charismatic figures, the power of new technologies sometimes has a fascinating element of unpredictability, of possibilities for spilling over whatever boundaries of control we might construct or imagine. True believers will bracket warnings out, refusing to see or accept the possibility of danger in a force with such potential for good, and hold steadfastly to hope as a complete and permanent answer to the perils that stand directly before them.
This is especially clear with the historic jolt brought about by the emergence of the interrelated technologies of artificial intelligence (AI), smartphones, and social media. Despite their potential to bring about dramatic change and imaginings of a new world, and despite their occasional personification, the charisma of new information technologies (ITs) is not the same as that of the prophets and cult leaders of earlier times. A technical innovation either takes off or it doesn’t—no need for the sword, the whip, and the auto-da-fé to keep a loyal following. The charisma of technology’s power lies almost exclusively in its ability to maintain recognition by “proving itself” with remarkable, seemingly supernatural feats, on which its devoted followers base their unquestioned loyalty.1
An example of this kind of blind attraction to innovation comes from Miguel Luengo Oroz, chief data scientist for Global Pulse, a joint United Nations–Google initiative that uses big data, artificial intelligence, and other ITs to advance the goals of humanitarian intervention and policy implementation. Oroz is inspired in his work above all by a vision of humanity unified by the surveillance capacities of emerging technologies: “Before 2030,” he proclaims, “technology should allow us to know everything from everyone to ensure no one is left behind. For example, there will be nanosatellites imaging every corner of the earth allowing us to generate almost immediate insights into humanitarian crises.”2
Another example comes from the UN’s Department of Economic and Social Affairs, which, in a newsletter sent out regularly via email to subscribers, offers an unabashedly Panglossian view of the emergence of what it calls “frontier technologies”:
Frontier technologies are innovative and often grow fast, with the potential to transform societies, economies and the environment. In recent years, we have seen examples of this in the form of artificial intelligence and machine learning, renewable energy technologies, energy storage technologies, electric and autonomous vehicles and drones, genetic engineering, as well as cryptocurrencies and blockchains. These frontier technologies can help eradicate hunger and epidemics, increase life expectancy, reduce carbon emissions, automate manual and repetitive tasks, create decent jobs, improve quality of life and facilitate complex decision-making processes. In other words, these technologies can make sustainable development a reality, improving people’s lives, promoting prosperity and protecting the planet.3
From this, it seems, one would need nothing more to bring about a perfect world than to harness the inevitable force of technical invention and apply it to projects for the social good.
For many who read these lines, the powers of frontier technologies do not offer unreserved comfort, never mind visions of utopia. The idea of nanosatellites probing every corner of the earth and transmitting information on our every movement evokes in our imaginations the more sinister uses of technology. Chillingly, we are not told what person or agency will be receiving this information about our every movement. All that seems missing is some mechanism to probe and transmit our thoughts—perhaps by 2050.
Technological optimism, however, cannot be assumed as a natural accompaniment of innovation. Writing on the history of technology in the immediate aftermath of World War II, a time when the power of invention had been on full display and invited critical reflection, Sigfried Giedion noted a climate of skepticism toward the tools of modernity: “Now,” he writes, “it may well be that there are no people left, however remote, who have not lost their faith in progress.”4 The trend that Giedion observed is still part of the way that many respond to innovation in the technology sector: an almost reflexive understanding that ITs serve as tools of abuse in the hands of those seeking or securing centralized power. Advances in the uses of big data mean that liberties can be trampled underfoot, minds corrupted, faiths destroyed, new methods of violence put into action. Ultimately, by small degrees, according to the most jaded view, revolutionary innovations in technology bring in their wake the mobilizations of armed men and smoke rising from villages.
Sometimes, however, individuals are torn between these possibilities and wage an inner struggle for the primacy of one attitude over the other. While recognizing the power of a technology to alter human lives, they just can’t be sure if the menace they see is real or if lilies are about to bloom in the desert. This is the struggle that, for want of an existing term, I call the dialectics of despair. It is a theme relating to new technologies that gets evoked constantly, and it came up so often in the writing of this book that I usually had to leave it on the page without comment. It is one of those ideas that are in the ether of the era.
Alternation or indecision between the ideas of an imminent media-induced utopia and a condition of dystopic human enslavement is now shared by contemporary analysts of the emerging social conditions being shaped by new ITs, except that in mainstream media at least, the dismal view seems to have gotten the upper hand. Many analysts went through a period of euphoria at some point in the early 2000s as innovations leaped ahead, creating new forms of engagement, connection, and organized dissent. Many technology experts and activists were thrilled at encountering the potential of new ITs to create networks and social movements with unprecedented reach and power. They seemed to have picked up where Marshall McLuhan left off in The Gutenberg Galaxy, with his almost breathless anticipation of the age of computers, which “promises by technology a Pentecostal condition of universal understanding and unity” in which languages are bypassed “in favor of a general cosmic consciousness.”5 We can safely assume that those writing in the full bloom of the internet era were optimistic not, as McLuhan was, in anticipating an imminent global hive mind “that extends our senses and nerves in a global embrace”6 but in sensing a coming era of libertarian freedom, with messaging, networking, and open-source platforms celebrated for their capacity to infuse capitalism with creative vitality on a utopian scale or to enable effective resistance against corporate abuses and state violence while ushering in new collective values.
Then came the fall. The post-9/11 architecture of state intelligence gathering and the descent of the Arab Spring into hyperrepressive governments and the civil wars of Syria and Libya were major shocks experienced by many tech-savvy activists. Encroachments on privacy, the virulence of anonymous messaging, and the development of new, more powerful practices of surveillance—these are continuing problem areas highlighted by a more cynical take on technologies, with important implications for those concerned with the future of civic freedoms and human rights. A main current of analysis and media reporting of technology takes the form of a morality tale of oppression/opposition in which unseen and unknown powers are aligned against digitalized activists, prepared to leap into collective action in campaigns of “hacktivism,” whistle-blowing, and “netizen action.”7 As a result of this framing, when it comes to analyzing the technologies that have implications for human rights, opinions tend to be committed in one direction or another in a with-us-or-against-us sort of way. Even if scholars try to be nuanced or at least to give opposing viewpoints their due, people tend to lean toward visions of either technologically reinforced structure or empowered agency, either fatalism or hope.
What appears to be inconsistency in these views, however, is actually an outcome of inherent and unresolved contradictions in the dynamics between innovation and power. We are living in a time in which the technologies bringing about sweeping change—with new ones in “alpha” and “beta” stages of being tested, waiting in the wings and soon to be introduced—have not fully coalesced into clear patterns. Some state and corporate actors are subjecting those engaged in protest and dissent to the uses of technology in surveillance, censorship, and the often-violent narratives of strategic falsehoods. At the same time, dissidents are putting a spotlight on these invasive structures of state and corporate power through new forms of protest, policy, and counterinvention—the “digital civil society” that has become necessary for the new technologically enabled powers to be checked and kept within bounds.8 Or, to express this tension in another way, technologies of legal advocacy are facilitating new forms of transnational organization and public outreach while, at the same time, these very technologies have begun to transcend human will and agency, creating vacuums of transparency and opportunities for domination, surveillance, and stifling of dissent. A digital arms race between states and dissidents, mediated by big tech corporations that are playing both sides, has yet to be decided. We have every reason to be hopeful and every reason to be afraid.
The new and emerging conditions I have just described provoke a key question: What happens if, instead of considering human rights only in juridical terms, we were to also look at them from the perspective of the technologies of human control and persuasion? This brings me to the central thread that connects the various chapters of this book: Within the past decade (or just a bit longer), a dramatic shift has taken place in the technological environment in which human rights are situated. Computers with unprecedented speed and capacity, together with innovations in artificial intelligence and machine learning, are at the heart of a technological revolution that is shifting the foundations of knowledge and public influence. The powers of repression and dissidence have each been amplified, ushering in a new era of surveillance capitalism,9 digital state power, and, struggling to keep pace, war crimes investigations and human rights advocacy.
In this book, I situate human rights within a technologically thick setting of contest and contradiction. The book represents an effort to bring together some of the main attitudes toward transformation in human life and possibility, each with its own discourses of utopia and despair. Human rights provide a lens that magnifies and brings focus to these attitudes toward the nexus of new technologies and old freedoms. They offer a clearer sense of what is at stake and why passivity is not an option for those committed to preventing a dystopian future, of either the Orwellian (the extended reach of state power) or Huxleyan (the cultivation of a comfortably numb, passive compliance) kind.
While the first inclination in response to the developments I have just outlined is to focus on the “shiny new things” of technological applications—including the concerns that they elicit—I propose to also take an approach that is grounded in personal encounter with justice claimants and their messages. Starting with the justice claims of the marginalized provides a more realistic understanding of how ITs are being used, in conjunction with a wide array of other technologies, in the interests of countering the narratives of states and bringing the claims of the marginalized into public recognition. Considering what justice activists actually do will often reveal a wide spectrum of media uses, far more diverse than the stereotype of cyberactivists equipped with mind-boggling computer systems and skills (though I discuss this kind of actor too). Ultimately, some of the oldest media used to sway opinion—including graffiti, placards, monuments, and printed leaflets—are being used alongside social media and mainstream journalistic media coverage (still the ultimate prize) in campaigns that promote such causes as peaceful solutions to the main human rights challenges, including armed conflict targeting civilians, forced displacement, genocide, and inhumane treatment of refugees.
1. On extraordinary feats in charismatic authority, see Weber, 1946, From Max Weber: Essays in Sociology, 246.
2. Miguel Luengo Oroz, 2018, “From Big Data to Humanitarian in the Loop Algorithms,” United Nations Global Pulse, 22 January 2018, https://www.unglobalpulse.org/news/big-data-humanitarian-loop-algorithms. For a report on the work of Global Pulse in the area of social media and forced displacement, see http://unglobalpulse.org/sites/default/files/White%20Paper%20Social%20Media%203_0.pdf. I am grateful to Maria Sapignoli for bringing this material to my attention.
3. UNDESA, 2018, “Frontier Technologies.”
4. Giedion, 2013, Mechanization Takes Command, 715.
5. McLuhan, 2011, The Gutenberg Galaxy, 114.
6. McLuhan, 2017, Understanding Media, 80.
7. MacKinnon, 2012, “The Netizen.”
8. MacKinnon, 2012, “The Netizen.”
9. Zuboff, 2019, The Age of Surveillance Capitalism.