Chapter One Excerpt for AI and Assembly
Introduction
Toussaint Nothias and Lucy Bernholz
What happens in the virtual world happens in the real world. That thing where people think they’re different, disabuse your mind of that. There is only one world because we live in both worlds.
Maria Ressa (2022)
Artificial intelligence, predictive software, and automated decision-making tools have moved from the lab into everyday life in ways similar to how Hemingway described bankruptcy: “gradually, and then suddenly.” Driven by massive stores of digital data, increasingly powerful computing systems, and competition between both firms and nation-states, artificial intelligence is seemingly everywhere. It is built into our physical systems for energy and transportation management; it powers social media platforms and search engines; it undergirds ever more administrative work and can be found deeply embedded in medical research; educational services; health care; insurance; criminology and judicial systems; social welfare administration; public and organizational policy enforcement; customer service; and home or office lighting, security, and heating controls.
The metastasization of AI has galvanized harm-prevention scholars and advocates; arguably, there are as many foci of concern about AI as there are implementations of it. Concerns about the existential risks of AI, for example, have led some technologists to insist that the research process be open so that there can be some form of public governance. Others emphasize the geopolitical battles to control the development and use of AI, framing it as an issue of national security and competition. Still others work to de-bias existing systems or advocate for the prohibition of some systems altogether. All of these concerns have a bearing on the focal issue of this book: our ability and freedom to assemble in a world taken over by AI.
Freedom of assembly is a core human right. We assemble when we attend a protest, join a march or rally, organize a community event, or attend a public meeting. Whether you are part of a religious organization, a union, a parent-teacher association, a volunteer community, or a neighborhood group, your involvement in these activities implies some form of assembly. Much like freedom of thought and expression, freedom of assembly is a foundation that ensures and encourages civil society and democratic participation. In this book, we use the term “assembly” as broadly as possible. We neither confine ourselves to the human rights definition and purpose nor abandon it. While some authors make references to the computer science use of the term, we are focused on the human act of gathering. We use “assembly”—and its legal cousin “association”—with a vernacular familiarity, not as professional jargon. How do AI-powered systems change how we gather and how do our gatherings change these systems? This is perhaps the most inclusive version of the question that this volume asks.
Questions of assembly, gathering, and community building used to be central to our understanding of digital systems and their impact on society. In the earliest days of the public internet, scholars, advocates, and users celebrated the community-building aspects of the technology. From studies of New York City’s ECHO to San Francisco’s WELL, online spaces were celebrated for the networks of people who found and supported each other. Early social media sites, such as MySpace and Friendster, aimed to help people find “their people,” whether they defined this by identity, geography, social interests, or political allegiances. However, the earliest steps toward American regulation, including the 1996 Communications Decency Act, concentrated attention on the internet’s link to expressive behavior, a focus that has moved in lockstep with the rise of corporate megaliths such as Google and Facebook. Today, most public discussion, legal analysis, and even safety concerns are examined through the lens of free expression, as evidenced in the emphasis on online speech and mis-/disinformation (e.g., Benkler, Faris, and Roberts 2018; Kaye 2019; Persily and Tucker 2020).
This is an incomplete and, we argue, insufficient focus. Scholarship and industry practice surrounding social media and search engines—two dominant AI-powered technologies of the early twenty-first century—over-index on expression as the area of concern. This is due, at least in part, to communications laws in the United States, which protect expressive rights and have played a defining role in shaping global social media sites. This emphasis made sense when most of our interactions with digital data and their attendant analytic systems occurred through a screen. But this is no longer the case. Software-driven sensors now collect, analyze, and make data-driven decisions about us when we are using screens and when we are not. When we are inside our homes and outside in public. When we are in transit or asleep. When we are alone and when we are together. Software, data, and analyses are found throughout our physical spaces, sometimes obscured but often hiding in plain sight. As we move through space, so do the software and AI we carry in our phones or wearable devices. Even when we do not carry these devices, we are tracked by sensors in cars, buses, buildings, parks, and other public spaces. These phenomena require not simply shifting our research focus from expression to assembly. They require an update to our understanding of where and how assembly takes place, what it is, and who participates by choice and who by coercion.
More than three decades after the excitement of modem-enabled community building, we return questions of assembly and association to the foreground and find that a great deal has changed. The shift from screen-based to physical sensors, for example, makes our movements and gatherings as valuable data-collecting opportunities as social media made our words, images, and social graphs. Not only is what we “say” captured and analyzed by online sites; where we go, what we do, and who we do it with are now also captured in digital representations. The data we generate flow across any preconceived boundaries between physical and digital space.
This plethora of data—and our ability to save and store it, mix it in endless recombinations, and analyze it through multiple lenses, sometimes simultaneously—powers AI and undermines analogue-era understandings of privacy, expression, assembly, and association. Its mere collection may present privacy violations. The substance or content represented by the data may raise expressive issues. And the ability to see data about multiple people in a single place, or interactions between data from different individuals, or spatial locations and relationships between data points threatens private association and public assembly.
Digital and physical spaces are not only expressive spaces; both are places for assembly and association as well. The prevalence of AI in physical space requires us to do more than consider expression beyond the screen. It requires us to consider the interactions between expression and assembly, the very meaning of assembly, and the ways in which digital sensors on screen and in physical spaces serve as a digital sense-taking infrastructure that dutifully tracks our every word, movement, and gathering.
Wherever we are, we generate data. As our gatherings become digitized, we urgently need to consider the implications of large-scale, semipermanent, inaccessible data trails for how and where and with whom we can take collective action. Assembly and association, two basic human rights, are different in the digital age from what they were in the analogue era. For one thing, time and distance—key differentiators of the two concepts in predigital times—are more complicated ideas in the digital age. Digital systems allow for remote and asynchronous participation in ways that leak through established legal and human rights jurisprudence. Digital mobilization interacts with physical assembly, and physical assembly can lead to new digital associations. AI systems, trained on massive amounts of data, may suggest connections and relationships in ways that remove or at least alter the degree to which we choose with whom we gather. In this way, the chances that we are being assembled not by our own choice but by machine learning or automated pattern detection only increase. Finally, concerns about AI—from its biases to how it is governed—are bringing scholars and activists together to create old-form associations addressing new technological challenges.
No single book can cover all of the ways our dependencies on AI-enabled digital systems are entangled with our sense of community, our ability to gather and our practices of gathering, and our rights to assembly and association. The chapters that follow examine many of these dynamics through diverse perspectives: human rights, organizational behavior, surveillance, discrimination, language and culture, algorithmic power, and others. Each contributor joined this project, however, because of a shared sense that efforts to regulate new digital tools or artificial intelligence will fail if they continue to be rooted primarily, and at times exclusively, in concerns about expressive rights. We must expand our understanding of how these systems influence, interact with, and shift our individual and collective capacities to gather in physical and digital spaces, and factor that understanding into efforts to protect people and communities, be it through regulation, technology design, or community oversight.
As we were finalizing this volume, generative artificial intelligence was dominating public discussion, professional concerns, and regulatory attention on digital technologies. As with previous technologies, the media was abuzz with both the promise and the peril of new tools such as ChatGPT or those being built into products from Google, Microsoft, Facebook, and almost all enterprise-facing software. Jurisdictions such as the European Union were moving quickly to regulate artificial intelligence, while US legislators were waffling as they so often have when it comes to regulating technology. Industry insiders released several letters of caution, AI company CEOs rushed to present themselves before Congress and the White House as potential partners in regulation, and security experts sounded dire warnings of AI “hallucinations” (Kessler and Hsu 2023). Moments of regulatory opportunity do not come along as frequently as do new technologies. The rules that do result often tend to rigidify even as the technologies they are meant to control continue to embed themselves deeper and deeper into daily life, expanding their reach and impact with every new application.
Freed from the demands of reelection and profit, independent scholars and community experts are uniquely positioned to assess the broadest range of impacts these technologies have. Our contribution with this volume is consideration of their impact beyond the expression or privacy concerns they raise. How will artificial intelligence influence who meets whom, who can gather with whom, and where and how those people might congregate? How might artificial intelligence shift “what” participates in a gathering—will chatbots count? Will questions of timing and proximity, which have distinguished assembly from association to date, still matter in an era when evidence of an individual’s online action may be stored forever on corporate servers and repurposed for algorithmic training? Will brief and passing online actions take on permanent importance? Will nations come to require registration of informal, digital-first associations? As patterns of private participation are rendered permanent by digital storage options, what lies ahead for the ideas of anonymity, consent, or even personal reconsideration of prior actions? These are some of the questions that emerge when we consider how our practices of gathering for worship, education, political involvement, or community action evolve alongside the technology that enables them.