In 2004, the US-led coalition in Iraq initiated two campaigns to capture the city of Fallujah. In April, the forces tried to occupy the city. However, because of international and domestic pressures based mainly on criticism of civilian fatalities, the operation was halted after three weeks, before the goals were attained. In November and December, the second campaign was launched. This time, having learned the lessons of the first assault and in an effort to limit criticism of overaggressiveness, the command planned a slower operation with strict rules of engagement that forbade targeting or striking any noncombatants except in self-defense. Nevertheless, relative to the first round, the operation ended with more fatalities among Iraqi civilians and heavy destruction of the city.
In December 2008–January 2009, Israel launched Operation Cast Lead against the Hamas-ruled Gaza Strip, inflicting unprecedentedly heavy losses on Gazan civilians. In July–August 2014, Israel launched Operation Protective Edge, once again against Hamas-ruled Gaza. Israel, like its American counterpart, was influenced by international criticism that followed the previous operation, which culminated in a United Nations investigation accusing Israel of committing crimes against humanity. Initially, Israel was more cautious and restrained when it came to harming the civilian population collaterally; however, again like that of its American counterpart, this operation ended with a higher number of civilian fatalities in Gaza than the first one. In both cases, the rules of engagement gradually loosened.
Although this book is more theoretically than empirically motivated, these kinds of shifts illuminate one of the interesting phenomena of the new type of war: civilians are collaterally targeted. This happens even when the targeting contradicts the initial, and surely formal, intentions, as policies change dynamically during the operation. It is one illustration of the central topic with which this book engages: hierarchies of risk and death.
What determines the value of life? Max Weber (1958) explained that the modern state has a monopoly on the legitimate use of violence, meaning also that the state manages life and death, a capacity that has enabled it to pacify its internal realm; violence is “only indirectly the resource whereby those who rule sustain their ‘government’” (Giddens 1985, 4). The state has the authority to determine who will be sacrificed as soldiers protecting civilians, who will be sacrificed as civilians because the price of protecting them is too high, and, of course, who will be killed as an enemy while the state defends its own civilians.
Within this project of managing life and death, democratic states must deal with the dilemma of how to balance three conflicting imperatives: (1) protection of one’s own civilians, based on the principles of the state’s sovereignty according to which societal actors grant the state internal authority in return for citizenship rights, among which protection from external enemies is paramount (Giddens 1985); (2) international humanitarian law (IHL), which prescribes the protection of noncombatant immunity (Henckaerts and Doswald-Beck 2005); and (3) the increasing aversion to sacrificing one’s own soldiers’ lives that emerged in the post–Cold War era, which is partly captured by the casualty sensitivity syndrome and promoted by the liberal discourse of citizenship (H. Smith 2005).
Managing these dilemmas has resulted in hierarchies of risk and, ultimately, of death, especially as nonstate adversaries exacerbate the tension inherent in these norms (Kaempf 2018, 248). To clarify, the term hierarchy can be defined as a form of differentiation whereby groups can be categorized and ranked from higher to lower according to power (Lake 2009, 264)—in this case, the power that determines the level of the group’s exposure to risk.
Aversion to sacrifice may shift the risk of warfare away from one’s own soldiers to enemy noncombatants by increasing the use of excessive lethality, but doing so contradicts the imperative of noncombatant immunity. If aversion to sacrifice restrains the state from risking its soldiers to protect its civilians, the imperative of security is compromised. Risking one’s own soldiers may uphold the principles of sovereignty and immunity but may be at odds with the aversion to risk.
In short, the crux of my thesis is that states develop a death hierarchy—an ordered scale of value that they apply to the lives of their soldiers relative to the lives of their civilians and enemy noncombatants. Positions on the hierarchy are mutually exclusive—different groups are not exposed to the same risk levels simultaneously—but variations in the risk level of one group affect the others.
Studies have already recognized such hierarchies of risk and death by showing that states expose their own civilians, soldiers, and enemy noncombatants to varying levels of risk. Examples of these variations are evident. First, in a conscription military, the upper-middle class is affected more than it is in a volunteer force, where soldiers drawn from the lower classes, including minorities and immigrants, are put at greater risk (for example, Kriner and Shen 2010). Second, given that class, gender, race, and ethnic origin affect the way the state implements the model of human resources in the military (for example, Krebs 2006), different groups are differentially exposed to potential risk. Third, given that the shift in democracies from a labor-intensive to a capital-intensive military substitutes firepower for labor, among other reasons to avoid casualties by distancing soldiers from the theater of conflict (Caverley 2014), it follows that enemy noncombatants may be placed at greater risk in urban wars by being subjected to more precise but also more aggressive firepower. Risk is thereby transferred from one’s own soldiers to enemy noncombatants (Shaw 2005). Fourth, casualty sensitivity affects troop deployment (Horowitz and Levendusky 2011; Vasquez 2005), while military missions seek to minimize the risk posed to their own civilians (Edmunds 2012). Thus, when casualty aversion restricts deployment and international constraints limit the option of transferring the risk, the state may compromise its original security interests by avoiding deployment, even risking its own civilians. In effect, the state values its own soldiers over its civilians.
Although these studies acknowledge the extent to which different groups are exposed to different levels of threat, there are two major gaps: analytical and methodological.
Analytically, previous studies have not explained how and why hierarchies are modified in general, and sometimes even dynamically, during the same deployment. In particular, studies indicating the shift of risk from one’s own soldiers to enemy noncombatants have recognized the existence of a trade-off between aggressiveness toward enemy noncombatants and reducing casualties among one’s own soldiers. Scholars thus have identified a hierarchy resulting from policy decisions, but they have left gaps. Martin Shaw’s (2005) influential analysis about the transfer of risk explains the origins of the trade-offs by relating them to the aversion to sacrifice. Nonetheless, in his analysis, this aversion is more static than dynamic. Thus, he leaves little room for understanding variations in risk transfer. Such variations may also inhere in the extent to which different groups are exposed to different degrees of risk, thereby modifying the death hierarchies (see also Miller 2000).
In a similar vein, other scholars have presented the operational conditions for the development of this trade-off (Kaempf 2018; Smith 2017) and the role of technology in encouraging it (Cronin 2018; Ignatieff 2000; Rasmussen 2006). They have also identified the causal chain that produces the resulting collateral deaths (Crawford 2013) and assessed its legality by highlighting the limits of IHL (Crawford 2013; Smith 2017). Overall, as in the case of Shaw, what is missing are explanations about how the trade-off plays out in different situations with the resulting variations in the death hierarchy and its determinants.
Consider, for example, the most recent relevant study to which my own corresponds. Neta Crawford’s Accountability for Killing: Moral Responsibility for Collateral Damage in America’s Post-9/11 Wars (2013) focuses on the causes of foreseeable collateral damage produced by the US military in Iraq, Afghanistan, and Pakistan. She substantiates the argument about the military’s moral responsibility for systemic collateral harming of civilians and develops a new theory of organizational moral agency and responsibility. However, she describes the changes by analyzing the causal chain that repeatedly produced collateral damage rather than explaining the causes of these changes outside this loop. She assigns moral responsibility to the military and its civilian supervisors while giving less weight to the political-cultural environment within which policymaking takes place and policies are affected.
Similarly, expanding Crawford’s study, Bruce Cronin’s Bugsplat: The Politics of Collateral Damage in Western Armed Conflicts (2018) relates collateral killing to the Western method of warfare that powerfully and recklessly targets the adversary’s national power and thereby inevitably produces collateral damage. However, like Crawford, he ignores the political-cultural environment within which this method is developed and leaves less space for explaining variations in the implementation of this method.
In my previous book, Israel’s Death Hierarchy: Casualty Aversion in a Modern Militarized Democracy (2012), I took this analysis one step further by depicting death hierarchies and identifying their determinants. However, my study was limited to the Israeli arena and therefore refrained from developing the theory further and gaining the benefit of a more comparative perspective.
To highlight this scholarly gap from a different angle, the abundant literature concerning casualty sensitivity and the resulting policy outcome of casualty aversion explains modifications in the level of tolerance of casualties (for example, see Gelpi, Feaver, and Reifler 2009; see Levy 2015a for a mapping of the literature). Nevertheless, there is no analysis of how variations in this tolerance are translated into military policies, thereby differentially affecting the state’s own civilians and soldiers and the enemy noncombatants. Casualty shyness may generate risk aversion, which reduces the level of protection provided to a state’s own civilians, but it may also promote force protection, which may shift the risk from one’s own soldiers to enemy noncombatants.
This book aims to address these gaps, at least in part, by analyzing the hierarchies of risk and death of the state’s own soldiers versus its own civilians and enemy noncombatants and by explaining the determinants of those hierarchies. The book also shows how democracies balance conflicting imperatives by shaping and reshaping the hierarchies of risk and death. Once we analyze the hierarchy of risk, we can then ask why comparable democracies create different hierarchies under similar military circumstances.
This analysis, however, cannot be accomplished without addressing the second gap, the methodological one: How can we know that the hierarchies really have been modified? Governments do not transparently report their priorities regarding which groups should face greater risks. Nevertheless, there are several indications. For example, changes in recruitment policies indicate variations in risking different social groups, as research mapping the social origins of fallen soldiers shows (for example, see Kriner and Shen 2010). Overt practices of risk aversion, such as force protection, also reveal priorities of risk and protection.
More complicated is the measurement of risk transfer, a central theme in the analysis of how death hierarchies are dynamically reshaped. Nevertheless, variations in risk transfer have not been sufficiently measured. One problem is found in studies of risk transfer unsupported by fatality ratios. For example, while Thomas Smith (2008) identified variations in the level of aggressiveness used by American troops in Iraq and provided a clear analysis of how tactical practices changed, he did not support this by showing changes in the fatality ratios. After all, it is possible that policies, such as the rules of engagement, were formally modified but not reflected in practices on the ground (see also Kaempf’s 2018 analysis of the interactive dynamics between the dilemmas of protecting one’s own soldiers versus enemy civilians). Another example is Kahl’s (2007, 8) analysis of US conduct in Iraq. He concluded that the US military “has done a better job of respecting noncombatant immunity in Iraq than is commonly thought” relative to past wars. Nevertheless, this conclusion is also not supported by fatality ratios. Furthermore, the comparison is between different wars in different historical, political, and technological contexts. Unlike these writers, Martin Shaw (2005, 10) did provide the ratios, but he used them as a means of comparing different wars, and it is doubtful that these wars, with their different goals and terrains, are comparable (as did Cronin 2018, 270).
Without substantiating the impact of fire policies on risking one’s own soldiers versus enemy noncombatants, one may draw unsubstantiated conclusions, as did Kahl (2007). (On this problem, see Dill 2015, 253–255.) This conclusion, however, is part of the trend that legitimizes the Western way of war as more surgical due to its reliance on precision-guided munitions that reduce collateral killing, as the advocates of the Revolution in Military Affairs (RMA)—the innovative application of new technologies combined with changes in military doctrine—often claim (Waxman 2000; for criticism, see Shaw 2005, 87–88, and Zehfuss 2011).
Combining these writers’ insights, I have validated the existence of the force/casualty trade-off by presenting variations in the fatality ratios in the same arena, the Israel-Gaza wars of 1987–2009 (Levy 2012, 147–180). However, while the fatality ratio between a state’s own soldiers and enemy noncombatants is an effective tool to test the trade-off and monitor its variations, there is a second problem: even when arena-related variables such as geopolitics, the identity of the protagonists, and the nature of the battlefield can be controlled (as evident in the case study of Gaza), changes in the fatality ratio between one’s own soldiers and enemy noncombatants do not necessarily signify variations in the scale of risk transfer. This fatality ratio may change because of factors unrelated to risk transfer, such as the nature of the mission, its goals, or changes in the enemy’s capabilities, a gap that appeared in comparative studies as well (for example, Freeman and Levy 2016). Therefore, more ratios should be considered to assess whether risk really was transferred.
But here is the third problem: legitimization of using force also entails governments’ efforts to reduce the number of reported noncombatant fatalities. Such efforts provoke disputes over body counts between governments and nongovernmental organizations (NGOs). Discrepancies in fatality reports are a typical issue when it comes to battles between Western and non-Western rivals, with methodological (and, of course, political) implications (see Crawford 2013, 137–142). Such differences may mirror different perspectives about the applicability of international law and the discrimination it prescribes between combatants and noncombatants, namely, who can be considered a noncombatant. Hence, such differences may also affect the narrative of the conflict. Ultimately, “those who are able to control the production of numbers control the public discourse and policy debates” (Aronson 2013, 30). To address these challenges, I offer tools to identify variations in death hierarchies.
Conclusions about the extent to which Western states respect noncombatant immunity thus have policy implications. If we can measure variations in risk transfer, we can also identify the conditions for determining that risk has been transferred. In other words, we can determine whether a smaller number of enemy noncombatants could have been harmed had the military taken, rather than transferred, a greater amount of risk. Risk is always partly shifted. Soldiers are not expected to take unconditional risks to save noncombatants’ lives. However, it is a matter of degree, so the challenge is to measure the amount of risk transferred over time and the policies that generated the decisions to do so.
The argument I put forward in this book is that states develop a death hierarchy: an ordered scale of value that they apply to the lives of their soldiers relative to the lives of their civilians and enemy noncombatants. Positions on the hierarchy are mutually exclusive—different groups are not exposed to the same risk levels simultaneously—but variations in the risk level of one group affect the others. The death hierarchy exposes different groups to varying degrees of risk by management of the use of force as determined by military variables such as operational priorities, the specifics of the terrain, and the nature of the adversary. The operation of these military variables, however, is mediated by the interaction between two sets of legitimacies: the legitimacy of using force and the legitimacy of sacrificing. In both legitimacies, the focus is on how state leaders internalize the level of legitimacy and translate it into freedom of action to risk their own soldiers versus their own civilians and enemy noncombatants. Put differently, the interplay results in state leaders balancing the three underlying conflicting imperatives—providing security, noncombatant immunity, and casualty sensitivity—in a manner that produces the hierarchies.
The interaction between the legitimacies affects military policies such as deployment, armaments and technology, tactics, fire policies, and operational style. By implication, military policies give rise to four principal variations of placement on the death hierarchy. These variations reflect the categories of risk distribution by identifying who is risked the most: one’s own upper-middle-class soldiers (those drawn mainly from educated, white-collar families), one’s own lower-class soldiers (mainly the lower-middle class, immigrants, and ethnic and racial minorities), one’s own civilians, and enemy noncombatants (those residing among the enemy but not participating directly in the fighting and therefore shielded by IHL). As positions on the hierarchy are mutually exclusive and variations in the level of risk of one group affect that of the others, when a specific group is risked the most, it assumes the risk to which other groups would otherwise have been exposed. Therefore, a group’s risk is assessed relative to other groups.
It is worth clarifying from the outset that this book takes an explanatory rather than a normative perspective. It explains the hierarchical outcomes of policies by focusing on how states actually balance the risks of death rather than how they should do so, a question that lies at the heart of IHL and just war theory (see Walzer 2004). Whether risk should be transferred is indeed a moral question (Crawford 2015, 61–62), relating to the extent to which all lives are equally worthy of protection (Luban 2011, 12). So too is the question of whether armed forces should expose all soldiers, regardless of class, to the same risk. However, these questions are beyond the scope of my study. Equal risking is not a theoretical impossibility, but it is also not the reality, although it is evident that in the post–Cold War era, industrialized democracies are less aggressive than in past modern wars, especially in terms of noncombatant fatalities. But power is distributed unequally and thus affects different groups’ ability to increase or decrease their exposure to risk; this book aims at explaining the determinants of this unequal exposure.
Chapter 2 elaborates on this theoretical argument and introduces the determinants of the death hierarchy by analyzing the interaction between the legitimacies. Chapter 3 develops the tools used to assess the situations in which the state’s own soldiers are risked in order to protect enemy noncombatants or, conversely, risk is transferred from soldiers to noncombatants. I maintain that to identify such variations by measuring their impact, a combination of three to four categories of fatality ratios should be considered, factoring in the state’s own combatants and noncombatants versus enemy combatants and noncombatants. Then the association between ratios and practices is established to test the extent to which the ratios are mirrored in practices on the ground and variations in practices are reflected in variations in the ratios.
Empirically, I argue that when the perceived need for sacrifice bore a high legitimacy, states risked the lives of soldiers from the upper-middle class; this has been the core of the contract between a soldier and the state since the French Revolution. When that legitimacy assumed a more moderate level (in the post–World War II era), states modified the makeup of the ranks and increasingly risked the lives of lower-class soldiers who had hitherto been partly excluded from combat roles. In both variations, those groups were risked regardless of the legitimacy of using force, but provided that it was high enough to initiate the use of force. Chapter 4 focuses on these two traditional variations of risking one’s own soldiers: risking upper-middle-class soldiers is demonstrated mainly by Israel’s conscript military in its warfare in the West Bank in 2002, and risking lower-class soldiers is exemplified by the British Operation Sinbad in the Iraqi city of Basra in 2006–2007.
In the post–Cold War era, the legitimacy of sacrificing declined in industrialized democracies following the rise of individualism, liberalization, and the market society and the decline in the perception of external threats. Then the casualty sensitivity syndrome emerged. States pursued two options. If the legitimacy of using force was low, they opted for nonmilitary, risk-averse options, even if they compromised the security of the state’s civilians. Chapter 5 exemplifies this interaction between the legitimacies by analyzing policies of passive force protection, taking two forms: risk aversion of the US troops in Iraq, mainly between 2004 and 2006, and Israel’s mission aversion in Gaza in 2008. Alternatively, if the legitimacy of using force could be increased enough to justify aggressiveness, the states shifted the risk from their own soldiers to enemy noncombatants. Risk can be shifted strategically and tactically. It is strategically transferred when decision makers refrain from deploying ground forces and therefore use standoff weapons alone, and at a level of intensity that will spare the need for ground troops but nevertheless increase the risks posed to enemy civilians.
Chapter 6 opens with a discussion of the tensions and dilemmas inherent in the inclination to shift the risk from one’s own soldiers to enemy noncombatants. It proceeds with an analysis of the strategic form of risk transfer as exemplified by the classic case of the Kosovo War of 1999. The chapter demonstrates how the interplay between the two legitimacies shifted during the war, allowing the North Atlantic Treaty Organization (NATO) to escalate its airstrikes on Serbia, hence escalating the collateral killing of noncombatants. To help validate the insights from this analysis, I briefly analyze the US drone war in Pakistan, ongoing since 2004.
Risk is tactically shifted when decision makers deploy ground forces but also adopt tactics that shift at least part of the risk from their own soldiers to enemy noncombatants. Excessive lethality is thus used through artillery, aircraft, drones, and other means, sometimes with relatively limited discrimination between combatants and noncombatants. Chapter 7 focuses on this more complicated variation. Two sets of cases illustrate tactical risk transfer: the two US-led campaigns in the Iraqi city of Fallujah in 2004 and Israel’s campaigns in Gaza—Summer Rains (2006), Cast Lead (2008–2009), and Protective Edge (2014). Importantly, the legitimacies change dynamically and may alter the hierarchies during the same campaign, often following initial achievements, failures, and obstacles, especially those that are unforeseen.
Nevertheless, recognizing that harming noncombatants may provide less, rather than more, protection to their own soldiers and may even thwart the mission, policymakers modified the death hierarchy and increased the risk to their own soldiers again. Chapter 8 uses the cases of the US-led surge in Iraq (2007–2008) and the NATO campaign to capture the Afghan city of Marjah (2010) to illustrate the situation of risking one’s own soldiers again. However, despite the initial expectations, the surge in Iraq was ultimately a case of risk transfer.
Chapter 9 offers a few concluding remarks about the interaction of the legitimacies and a detailed analysis of the inferences yielded by the combination of the categories of fatality ratios to summarize the methodological aspects of the study.
Methodologically, to demonstrate variations of the death hierarchy and explain their determinants, I focus on the cases of the United States, Great Britain, and Israel. All are democracies enmeshed in prolonged warfare, and by implication, they also deal with domestic constraints related to the legitimacy of using force and the legitimacy of sacrificing, and the military policies created by those constraints. I use specific campaigns that are similar in nature but demonstrate different death hierarchies (see section 2.4 and Chapter 3).