Regulating Human Research
IRBs from Peer Review to Compliance Bureaucracy
Sarah Babb


Introduction

“ALL I KNEW was that they just kept saying I had the bad blood—they never mentioned syphilis to me. Not even once,” recalled Charles Pollard, one of the last survivors of the infamous Tuskegee syphilis study. Like the other men in the decades-long study, Charles had been cruelly deceived. Researchers told the men—all African American, and mostly poor and illiterate—that they were being treated for “bad blood.” In fact, they had unknowingly signed up for a study of the effects of untreated syphilis. When penicillin was found to be an effective cure, they were neither offered the drug nor told that they had the disease; some were even prevented from being treated. Instead, they were monitored for decades; when they died, their bodies were examined postmortem. The study had received millions of dollars in federal funding.1

It was public outrage over Tuskegee and other similarly horrifying abuses that led the U.S. Congress to pass the National Research Act in 1974. The act created an expert commission that would produce the Belmont Report, which laid out principles for the ethical treatment of human subjects. The report established that although biomedical studies could lead to lifesaving discoveries, they could not be allowed to violate the human rights of the people who participate in them. Studies should minimize the risk of harm to participants and strike a balance between risk and potential benefits. They should strive to ensure that subjects participate voluntarily, with a full understanding of the nature of the research, and only after being selected in a fair, nonexploitative manner. The Belmont principles remain the bedrock of human research ethics in the United States today.

Ethics are moral principles that guide behavior. Sometimes they provide clear answers about what we should and should not do—for example, there is no conceivable reading of the Belmont principles that could justify the Tuskegee study. In other cases, ethics provide parameters for thoughtful debates in which reasonable people can disagree. Should we allow a study in which there is a small risk of serious physical harm, but also a strong likelihood of lifesaving benefits? Should we be more worried about a small risk of serious harm or a large risk of minor harm? These are among the many complex questions that must be considered when weighing the ethics of studies on human beings.

In contrast, regulations are government rules that require certain actions while prohibiting others. The same National Research Act that chartered the Belmont Report also authorized federal regulations. Their purpose was to provide a legal framework to protect human research subjects from ethical abuses. The principal requirement of these regulations was that federally funded research with human subjects be reviewed by committees known as Institutional Review Boards (IRBs).

Today, IRBs are best known for making ethical decisions based on the Belmont principles—for weighing research proposals to determine whether risks to human subjects are reasonable, and whether subjects are being provided with adequate opportunity to give their informed consent. Yet, in addition to making ethical judgments, IRBs are also charged with the less glamorous role of managing compliance with federal regulations.

This regulatory dimension first came to my attention back in 2009, when I began a three-year term on the Boston College IRB. As a faculty board member, I was charged with applying not only the Belmont ethics, but also a more perplexing set of guidelines. For example, there was a list of eight standard elements of informed consent, required by the regulations except when the researcher obtained either a waiver of one or more elements of informed consent or a waiver of documentation of informed consent. Each kind of waiver had a different list of similarly bewildering eligibility criteria. I remember feeling anxious the first time I was exposed to these regulatory minutiae—and hoping that there were others better qualified than I to remember and apply them.

As it turned out, I was not expected to master these important but confusing technicalities. Instead, my board colleagues and I regularly relied on IRB staff for guidance on regulatory matters. Over time, I came to understand that the image of IRBs as committees charged with weighing ethical dilemmas captured only the tip of a much larger iceberg of activities. I could see that there was a more routine form of regulatory decision making that was important, but not widely understood or even acknowledged. My desire to understand it led me to the research that culminated in this book.

From Amateur Board to Compliance Bureaucracy

For historical reasons, IRBs resemble peer review committees. Most are located at research institutions, such as universities and academic medical centers, and are composed largely of faculty volunteers who make ethical judgments based on their scholarly expertise.

Much of what has been written about these boards has focused on panels of scholars making careful ethical decisions.2 What is frequently overlooked is that most IRB decisions today are not made by convened committees of academics at all. For example, the decision to approve the research for this book was made not by a faculty volunteer but by staff members in the IRB office, who reviewed my application for exemption. My informed consent form and verbal script were based on staff-designed templates. Had my research involved higher levels of risk, it would eventually have been discussed by faculty board members, but only after being revised in consultation with staff.

In fact, until about twenty years ago, IRBs could accurately be described as the faculty-run committees that remain in the popular imaginary today. They were typically managed by faculty chairpersons—usually uncompensated—with the assistance of a single clerical staff member. “The [faculty] chairman [sic] is probably the most important member on the IRB,” explained two Tufts biomedical researchers at a conference in 1980. “It is incumbent upon the chairman to be fully informed about the current status of the regulations and in turn educate the members of the IRB.”3

Sometime around the late 1990s, however, IRBs began their metamorphosis into something different. Precipitating the change was a new round of research scandals, which triggered a wave of federal enforcement actions. Regulators began to scrutinize IRB operations more closely, and disciplined a number of prominent research institutions. In response to this risky environment, these institutions began to invest in IRB administration. The trend was muted at liberal arts colleges, where there was little sponsored research to penalize. However, investment in IRB offices was quite rapid and pronounced at federally funded research universities and medical centers—and most especially at institutions with large amounts of federally sponsored biomedical research.

At these organizations, there was a startling increase in the number of staff: by 2007, more than half of respondents in a survey of the IRB world reported that their offices had three or more full-time staff members, with some reporting offices three times that size or more.4 Meanwhile, staff qualifications rose markedly, with an increasing proportion of staff members holding advanced postgraduate degrees. These were no longer secretaries working under the supervision of IRB faculty chairs, but rather research administrators, embedded in a chain of command reaching up to the highest level of administration, and with a growing sense of professional identity.

Where this transformation occurred, there was a rearrangement of decision making, as illustrated in figure I.1. In the old model, the main job of staff was to manage the paperwork; decisions were made by faculty volunteers. In the emerging new model, staff took charge of many important decisions, such as whether research qualified for exemption, or how investigators should modify their submission before bringing it to the board. More experienced and qualified staff became board members who could vote on the riskiest studies and also approve expedited protocols. Faculty volunteers continued to make up a majority on boards and to consider weighty ethical decisions. However, at most research institutions, decisions that required regulatory knowledge were turned over to staff.



FIGURE I.1. Two models of IRB decision making.


In this way, volunteer committees gave way to compliance bureaucracy. I do not use the term “bureaucracy” in the colloquial sense, with its inherently negative connotations of red tape and ineptitude. Rather, I wish to invoke the term as it was used by the German sociologist Max Weber, who thought that bureaucracy was a uniquely effective way of organizing work on a large scale. A bureaucratic system was based on written rules and records as well as a clear division of labor. The people who labored in bureaucracies were professionals—they were hired and promoted based on their expert qualifications and performance, and were paid a salary.

For Weber, bureaucracy was key to the flourishing of modern social life. It was particularly important to the rise of the modern nation-state. Aided by powerful bureaucratic machinery, states could develop modern militaries, taxation systems, social security administrations, and systems of regulation.

Yet IRBs are not government offices. With few exceptions, the people in charge of overseeing compliance with the regulations are not federal employees.5 In this book, I define a compliance bureaucracy as a nongovernmental office that uses skilled staff—compliance professionals—to interpret, apply, and oversee adherence to government rules. This book tells the story of how IRBs evolved from volunteer committees into compliance bureaucracies, and what some of the consequences have been.

Compliance Bureaucracy as Workaround

I was in the lobby of a sleek glass office building in Rockville, Maryland, trying to get my bearings. As I peered at the directory, I could see that there were many tenants. There were two wealth management firms, a company that specialized in human resources consulting, a health care technology company, and a medical office specializing in neurological diseases of the ear. There were also several satellite offices of the U.S. Department of Health and Human Services. One was the Office for Human Research Protections (OHRP), which occupied a single suite on the second floor—more than enough space for its twenty-two employees.

This small, unassuming office is responsible for overseeing more than ten thousand IRBs at research institutions across the United States, and in many other countries as well. Because the size of the office’s staff is minuscule in proportion to this jurisdiction, the office usually conducts an audit only when it learns of a problem. It lacks the authority to issue formal precedents, although it can issue “guidance,” as long as it does not stray too far from the original regulatory meaning.

In spite of its apparent weakness, OHRP is in some ways quite powerful. Hanging on its every word are many thousands of locally financed IRB offices, each with its own staff, policies, and procedures. OHRP occupies the apex of a regulatory pyramid, atop an enormous base of compliance bureaucracies. Sharing this top position are departments within the Food and Drug Administration charged with enforcing a separate set of IRB regulations. It is common for IRBs to follow both sets of rules.

This system exemplifies the quirkiness of American governance, which occurs through “an immensely complex tangle of indirect incentives, cross-cutting regulations, overlapping jurisdictions, delegated responsibility, and diffuse accountability.”6 Scholars have coined various terms that refer to different aspects of this phenomenon, including “delegated governance,” the “litigation state,” the “associational state,” and the “Rube Goldberg state.”7

For lack of a more comprehensive alternative, I have chosen the term “workaround state” to capture the dynamics I describe in this book. Its defining characteristic is the outsourcing of functions that in other industrial democracies are seen as the purview of central government. The American health care system delegates much of the job of insuring citizens to private firms and fifty state governments.8 Private companies are tasked with stabilizing the residential mortgage market; and, in the absence of a robust technocratic civil service, policy ideas are supplied by private think tanks.9 We have even embraced private prisons in our penal system.10

These and innumerable other examples of delegation can be seen as workarounds—alternative means to ends that the federal government cannot or will not pursue. They emerge because attempts to use federal power to pursue important policy goals are often thwarted by characteristically American policy obstacles, including underdeveloped administrative capacity, federalism, divided government, and antigovernment political ideology. These make it difficult for policy makers to overcome the opposition of organized interest groups, and lead to the emergence of workarounds.11 Sometimes workarounds represent a deliberate strategy for expanding state capacity while avoiding the political controversy and expense of big government.12 In other cases, they emerge organically to fill in the gaps left by the absence of government activity.13

One variety of workaround is compliance bureaucracy. Although the term “compliance bureaucracy” is my own, the phenomenon it represents is well known among organizational sociologists. Over the past several decades, American organizations have spun off a variety of subunits dedicated to managing compliance in diverse areas, such as health care privacy, financial services, and employment law.14 A key role of these offices is to make sense of government rules. In the United States, political and institutional limitations on state-building result in fragile and fragmented regulatory authority. Organizations receive weak, inconsistent, and confusing signals about what it means to comply. To adapt to this risky environment, they create specialized offices staffed by skilled workers.15

Significantly, compliance professionals do much more than merely advise organizations on how to follow government rules. They design, implement, and manage their own local systems of oversight and enforcement, known as “compliance programs”; they also clarify the meaning of statutes and regulations by developing professional norms or “best practices”—standards that can spread nationwide, and even be endorsed by the government.16 In the United States, compliance bureaucracy is a powerful regulatory force in its own right. In other countries, where more capable state agencies exercise stronger oversight and send clearer signals, compliance bureaucracy is underdeveloped or absent; the staff members managing compliance are both fewer and less skilled, since their job is simply to follow the government’s instructions.17

The protection of human research subjects is a critically important area of government oversight, and is recognized as such by industrialized democracies around the world. All require human research proposals to be evaluated by special committees.18 These national systems delegate judgment to committees because ethics review is unamenable to micromanagement and blanket prohibitions: it must allow for thoughtful debates about intractable dilemmas in particular cases, and these must include the voices of experts who understand the research being proposed.

These fundamental similarities aside, what sets the American system apart is its extraordinary degree of delegation.19 In many wealthy democracies, research ethics committees are more directly managed by national governments. In both France and the United Kingdom, for example, a researcher initiating a clinical trial typically goes through a national application system and is assigned to a government-coordinated committee. Government offices issue and regularly revise extensive standards for how these bodies carry out their duties.20 By contrast, the American IRB system is profoundly decentralized. Here, there are thousands of local boards, each with jurisdiction over its own local investigators, each developing its own policies and procedures, and all answerable to agencies that cannot engage in close monitoring, precedent-setting, or even regular policy updates. This provides fertile ground for the flourishing of compliance bureaucracy.

IRBs as a Special Case

Whether they deal with employment, privacy, or human research protections, compliance bureaucracies share a common vocation: the definition and application of federal rules. Yet IRBs are also unusual among compliance bureaucracies because for much of their history they were amateur-run, collegial bodies. In contrast, most other areas of compliance—such as privacy, financial services, and equal opportunity—were always the domain of paid staff.

Because IRBs had to become compliance bureaucracies, their history provides a unique perspective on what these bureaucracies do. Organizational sociologists have often emphasized the symbolic function of compliance bureaucracy, based on the example of civil rights law and equal employment opportunity (EEO) administration. The EEO case suggests that compliance offices exist not only to make sense of federal rules, but also to create “a visible commitment to law” to serve as “window dressing,” and to show that organizations are “doing [their] best to figure out how to comply.”21

Yet in the case of IRBs, this account leaves some important questions unanswered. For decades, faculty-run boards served as symbols of ethical and regulatory rectitude. The subsequent shift away from volunteer committees had a very high price tag: a 2007 survey found that the median annual cost of running an IRB at an academic medical center was $781,224, with staff salaries accounting for the biggest portion of the expense.22 Over time, a growing number of biomedical studies were outsourced to for-profit “independent IRBs,” which charged substantial fees. For example, in 2018 the largest of these for-profit boards charged $1,864 per study, with a surcharge of $1,076 per principal investigator, and additional fees levied for continuing review, extra consent documents, and protocol changes.23
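To make these figures concrete, consider a hypothetical single-site study with one principal investigator reviewed under this fee schedule: the base charge of $1,864 plus the $1,076 investigator surcharge comes to $2,940 before any additional fees for continuing review, extra consent documents, or protocol changes.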

What are organizations paying for when they invest in IRB offices or when they outsource compliance work to for-profit boards? I contend that to answer this question, we need to take seriously the classical Weberian view of bureaucracy’s “purely technical superiority” over other forms of organizing work.24 This technical role has not been emphasized in literature on compliance offices, most of which is based on equal opportunity and compliance with civil rights law.

The focus on EEO has left important variations across compliance fields unexplored. EEO offices occupy a field in which the boundaries of compliance are largely defined by courts in lawsuits, and where federal rules require relatively few tangible outputs.25 The mechanism underlying compliance is the private right of action embedded in the Civil Rights Act of 1964, which allows individuals and public interest groups to sue for damages and to recoup legal costs if they win. If organizations want clarification on the meaning of compliance, they can look to the outcomes of legal battles as a guide.26

However, the National Research Act—the statute authorizing the IRB regulations—does not contain a private right of action, and IRB-related lawsuits are rare.27 Instead, compliance is assessed in audits—formal investigations conducted by federal regulators.28 Because auditing agencies are not empowered to set ethical precedents, they mainly assess whether IRBs have been upholding the administrative procedures that the regulations require—whether decisions are made by duly constituted committees, having considered the mandated criteria, having followed local policies, and so on.29 In an audit, an IRB office will need to provide evidence that all the requisite procedures were followed each and every time a decision was made. To this end, it must supply auditors with voluminous, meticulously recorded documentation. Federal agencies may be understaffed, and the possibility of an audit remote; yet in the unlikely event of such an audit, failure to produce the proper documentation could get an organization in serious trouble.30

The central role of the audit in IRB compliance creates two sorts of technical problem—each better addressed by bureaucracy than by the labor of volunteers. The first is ensuring that complex procedural rules get followed to the letter and scrupulously documented. Faculty volunteers are not very good at attending to such minutiae. Even if we are interested in the regulations, we usually lack the time to master them, to oversee their routine implementation, and to create a scrupulous paper trail.

In contrast, bureaucracy is ideally suited to the painstaking work of mass-producing auditable compliance. As Weber observed long ago, bureaucracy demands “the full working capacity” of its officials, who “by constant practice [increase] their expertise.”31 Paid, full-time administrators can give their undivided attention to the technical features of the rules, and thereby ensure the satisfaction of auditing regulators. Such attention to detail is particularly important where different government agencies have overlapping but not entirely consistent requirements, creating additional layers of complexity that demand workers’ dedicated concentration.

The second technical problem addressed by bureaucracy is the high cost of compliance. Following complex procedural rules and mass-producing auditable documents can interfere significantly with valued activities and can generate large financial expenses. Bureaucracy, although not popularly known for its cost-saving qualities, can be quite effective at controlling, standardizing, and routinizing human behavior to make processes faster and less burdensome. These advantages make bureaucracy an invaluable tool for controlling the cost of compliance.

In this book, I argue that human research protection bureaucracies supplanted amateur IRBs both because they could make sense of the rules and because they were better equipped to manage the demands of auditable compliance. They could meet regulators’ demand for precisely recorded, auditable indicators, while addressing organizations’ need for efficiency, as compliance became increasingly intrusive and expensive. This evolution from amateur board to compliance bureaucracy had many consequences, both intended and unintended, which I describe in the chapters that follow.

Plan of the Book

The book is based on a wide variety of sources, including interviews with more than fifty individuals in and around the IRB world. Most were either current or former IRB administrators working on the front lines of regulatory compliance; others included former regulators, consultants, and faculty IRB members. My informants are described in greater detail in the appendix.

In chapter 1, I trace the origins and demise of a period I call the “era of approximate compliance,” which lasted until roughly the late 1990s. During this time, IRBs were typically run by faculty volunteers who, while taking their ethical duties seriously, often paid little attention to the letter of the regulations. The regulatory system left ethical decisions to local boards and relied on the labor of unpaid faculty volunteers. It was overseen by federal offices with limited authority and resources. As the world of biomedical research became larger, more commercialized, and more complex, this framework became increasingly inadequate and out of date, creating the conditions for an outbreak of research scandals—and for a disciplinary crackdown on research institutions.

In chapter 2, I describe the circumstances that gave rise to the IRB profession, a new category of expert worker. By sanctioning institutions for noncompliance—while failing to fully define what it meant to comply—federal authorities created high levels of uncertainty. Research institutions responded in two ways. First, they adopted the most conservative reading of the regulations, thereby launching an “era of hypercompliance.” Second, they hired skilled staff to interpret and apply these regulations, leading to the emergence of a nationwide human research protection profession.

In chapter 3, I show how IRB offices responded to powerful pressures to become more efficient. During the era of hypercompliance, the IRB review process came to pose an unacceptable obstruction to the biomedical research enterprise. In response, IRB offices deployed tools of bureaucratic administration to lower the cost of compliance. The rationalization of these offices definitively shifted the locus of decision making from faculty volunteers to full-time administrators and was the defining characteristic of the “era of compliance with efficiency,” which began in the mid-2000s. This reorientation had unintended consequences, including friction over the exercise of bureaucratic authority in matters of research design, as well as goal displacement.

In chapter 4, I show how the IRB world came to adopt the dynamics, practices, and rhetoric of a private industry. This trend was uneven, and most visible in independent IRBs: boards run as for-profit enterprises, and mostly reviewing privately sponsored biomedical research. Yet standards set in the most industrialized sector spread throughout the human research protections world, fueled by the forces of market competition, private accreditation, and professionals’ inclination to borrow widely accepted best practices.

In chapter 5, I analyze the expansion of IRB oversight to social and humanities research. With the federal crackdown, these researchers suddenly became enmeshed in a regulatory system designed around the routines of biomedical and other experimental studies. The shift was produced not by new rules, but by changed interpretations of these rules by local institutions during the era of hypercompliance. These interpretations pulled unfunded and exempt research projects into the orbit of the regulations, filtered through their most conservative reading. Later, however, a social movement among IRB and other research administrators promoted a more flexible approach, providing some researchers in disciplines like sociology and anthropology with much-needed relief.

Chapter 6 compares three varieties of compliance bureaucracy: EEO, IRB, and financial services. In all three fields, organizations hired compliance professionals to help them conform to complex and ambiguous rules. However, there was one revealing difference. Both IRB and financial services compliance offices came to embrace efficiency goals, as exemplified by the widespread use of compliance software and outsourcing to external vendors. In contrast, EEO offices did not adopt efficiency-enhancing innovations. I argue that this difference can be attributed to distinct varieties of compliance: those defined by a logic of confidence, assessed by courts that reward recognizable gestures of good faith; and those governed by a logic of auditability, assessed in regulatory inspections that place a premium on meticulously recorded procedural details. The relatively high cost of the latter creates efficiency pressures, which are reflected in the rhetoric and norms of compliance professionals.

In the conclusion, I revisit the major findings of the book, discuss the 2018 revisions to federal regulations governing IRBs, and contemplate the American model in the context of other national systems. Although the new regulations contained some significant changes, they did not represent a major departure from the system’s workaround logic. There are numerous benefits to having a more centralized government role in human research protections, as illustrated by the British case. Yet in the American context, there may also be some advantages to a privatized system that provides protections beyond the reach of antiregulatory politics.

Notes

1. DeNeen L. Brown, “‘You’ve Got Bad Blood’: The Horror of the Tuskegee Syphilis Experiment,” Washington Post, May 16, 2017, https://www.washingtonpost.com/news/retropolis/wp/2017/05/16/youve-got-bad-blood-the-horror-of-the-tuskegee-syphilis-experiment/.

2. Carl Schneider, The Censor’s Hand: The Misregulation of Human-Subject Research (New York: New York University Press, 2015); Robert Klitzman, The Ethics Police? The Struggle to Make Human Research Safe (Oxford: Oxford University Press, 2015); Laura Abbott and Christine Grady, “A Systematic Review of the Empirical Literature Evaluating IRBs: What We Know and What We Still Need to Learn,” Journal of Empirical Research on Human Research Ethics 6 (2011): 3–20; Malcolm M. Feeley, “Legality, Social Research, and the Challenge of Institutional Review Boards,” Law & Society Review 41, no. 4 (2007): 757–76; Maureen H. Fitzgerald, “Punctuated Equilibrium, Moral Panics and the Ethics Review Process,” Journal of Academic Ethics 2 (2004): 315–38; Tara Star Johnson, “Qualitative Research in Question: A Narrative of Disciplinary Power with/in the IRB,” Qualitative Inquiry 14, no. 2 (2008): 212–32; Caroline H. Bledsoe et al., “Regulating Creativity: Research and Survival in the IRB Iron Cage,” Northwestern University Law Review 101, no. 2 (2007): 593–641; Laura Jeanine Morris Stark, Behind Closed Doors: IRBs and the Making of Ethical Research (Chicago: University of Chicago Press, 2012), 229.

3. Jeanne L. Speckman et al., “Determining the Costs of Institutional Review Boards,” IRB: Ethics and Human Research 29, no. 2 (2007): 7–13.

4. Public Responsibility in Medicine and Research (PRIM&R), “Workload and Salary Survey, 2007,” https://www.primr.org/wlss/ (accessed June 29, 2017).

5. These exceptions include employees of IRBs located within federal agencies such as the U.S. Department of Veterans Affairs or the National Institutes of Health. However, these individuals act as agents of regulated organizations rather than as regulatory officials.

6. Elisabeth S. Clemens, “Lineages of the Rube Goldberg State: Building and Blurring Public Programs, 1900–1940,” in Rethinking Political Institutions: The Art of the State, ed. Stephen Skowronek and Daniel Galvin (New York: New York University Press, 2006), 188.

7. Brian Balogh, The Associational State: American Governance in the Twentieth Century (Philadelphia: University of Pennsylvania Press, 2015); Clemens, “Lineages of the Rube Goldberg State”; Sean Farhang, The Litigation State: Public Regulation and Private Lawsuits in the United States (Princeton, NJ: Princeton University Press, 2010).

8. Kimberly J. Morgan and Andrea Louise Campbell, “Delegated Governance in the Affordable Care Act,” Journal of Health Politics, Policy and Law 36, no. 3 (2011): 387–91; Paul Starr, Remedy and Reaction: The Peculiar American Struggle over Health Care Reform (New Haven, CT: Yale University Press, 2011).

9. Thomas Medvetz, Think Tanks in America (Chicago: University of Chicago Press, 2012); Sarah Lehman Quinn, “Government Policy, Housing, and the Origins of Securitization, 1780–1968” (PhD diss., University of California, Berkeley, 2010).

10. Byron E. Price and Norma M. Riccucci, “Exploring the Determinants of Decisions to Privatize State Prisons,” American Review of Public Administration 35, no. 3 (2005): 223–35.

11. Andrea Louise Campbell and Kimberly J. Morgan, The Delegated Welfare State: Medicare, Markets, and the Governance of Social Policy (New York: Oxford University Press, 2011); Clemens, “Lineages of the Rube Goldberg State”; Quinn, “Government Policy, Housing, and the Origins of Securitization.”

12. Clemens, “Lineages of the Rube Goldberg State,” 189; Campbell and Morgan, The Delegated Welfare State; Colin D. Moore, “State Building through Partnership: Delegation, Public-Private Partnerships, and the Political Development of American Imperialism, 1898–1916,” Studies in American Political Development 25, no. 1 (2011): 27–55.

13. Lauren B. Edelman, “Legal Ambiguity and Symbolic Structures: Organizational Mediation of Civil Rights Law,” American Journal of Sociology 97, no. 6 (1992): 1531–76; Frank Dobbin and John R. Sutton, “The Rights Revolution and the Rise of Human Resources Management Divisions,” American Journal of Sociology 104, no. 2 (1998): 441–76.

14. Elizabeth Brennan, “Constructing Risk and Legitimizing Place: Privacy Professionals’ Interpretation and Implementation of HIPAA in Hospitals” (paper presented at the annual meetings of the American Sociological Association, Seattle, WA, August 2016); Edelman, “Legal Ambiguity and Symbolic Structures”; Dobbin and Sutton, “The Rights Revolution”; Frank Dobbin, Inventing Equal Opportunity (Princeton, NJ: Princeton University Press, 2009). For a general discussion of bureaucracy as an organizational response to regulation, see Ruthanne Huising and Susan S. Silbey, “From Nudge to Culture and Back Again,” Annual Review of Law and Social Science 14 (2018): 91–114.

15. Edelman, “Legal Ambiguity and Symbolic Structures,” 1537; Dobbin, Inventing Equal Opportunity.

16. Dobbin, Inventing Equal Opportunity; Lauren B. Edelman et al., “When Organizations Rule: Judicial Deference to Institutionalized Employment Structures,” American Journal of Sociology 117, no. 3 (2011): 888–954; C. Elizabeth Hirsh, “The Strength of Weak Enforcement: The Impact of Discrimination Charges, Legal Environments, and Organizational Conditions on Workplace Segregation,” American Sociological Review 74, no. 2 (2009): 245–71.

17. For an analysis of privacy officers in the United States, Canada, and France, see Kartikeya Bajpai, “Cross-National Variation in Occupational Prestige,” Academy of Management Proceedings 2017, no. 1 (2017): 1.

18. Maureen H. Fitzgerald and Paul A. Phillips, “Centralized and Non-centralized Ethics Review: A Five Nation Study,” Accountability in Research 13, no. 1 (2006): 47–74; Rustam Al-Shahi Salman et al., “Increasing Value and Reducing Waste in Biomedical Research Regulation and Management,” The Lancet 383, no. 9912 (2014): 176–85; Stuart G. Nicholls et al., “Call for a Pan-Canadian Approach to Ethics Review in Canada,” Canadian Medical Association Journal 190, no. 18 (2018): E553–55.

19. The Canadian and Australian systems are the ones most similar to the American IRB framework. Fitzgerald and Phillips, “Centralized and Non-centralized Ethics Review,” 47–74.

20. A. Hedgecoe et al., “Research Ethics Committees in Europe: Implementing the Directive, Respecting Diversity,” Journal of Medical Ethics 32, no. 8 (August 2006): 483–86; Delphine Stoffel et al., Ethics Assessment in Different Countries: France (European Commission, 2015), http://satoriproject.eu/media/4.d-Country-report-France.pdf.

21. Edelman, “Legal Ambiguity and Symbolic Structures,” 1542; Alexandra Kalev, Frank Dobbin, and Erin Kelly, “Best Practices or Best Guesses? Assessing the Efficacy of Corporate Affirmative Action and Diversity Policies,” American Sociological Review 71, no. 4 (2006): 589–617; Dobbin, Inventing Equal Opportunity, 86.

22. Jeanne L. Speckman et al., “Determining the Costs of Institutional Review Boards,” IRB: Ethics and Human Research 29, no. 2 (2007): 7–13.

23. WIRB-Copernicus Group, “2018 Single Review Service Fee Schedule” (Princeton, NJ: WIRB-Copernicus, 2018).

24. Max Weber, Economy and Society: An Outline of Interpretive Sociology, trans. Guenther Roth and Claus Wittich (Berkeley: University of California Press, 1978), 973.

25. Edelman, “Legal Ambiguity and Symbolic Structures.”

26. Dobbin, Inventing Equal Opportunity.

27. Sharona Hoffman and Jessica Wilen Berg, “The Suitability of IRB Liability,” University of Pittsburgh Law Review 67 (2005): 365.

28. Following Michael Power, I use the term “audit” in the broad sense, to refer to any formal assessment by an independent body to hold an organization accountable. See Michael Power, The Audit Society: Rituals of Verification (New York: Oxford University Press, 1997).

29. Kristina Borror et al., “A Review of OHRP Compliance Oversight Letters,” IRB: Ethics and Human Research 25, no. 5 (2003): 1–4; Scott Burris and Jen Welsh, “Regulatory Paradox: A Review of Enforcement Letters Issued by the Office for Human Research Protection,” Northwestern University Law Review 101, no. 2 (2007): 643.

30. Bledsoe et al., “Regulating Creativity.”

31. Weber, Economy and Society, 975.