
20 October 2015

The psychology of nuclear restraint

Jacques E. C. Hymans

Hymans is Associate Professor of International Relations at the University of Southern California and author of...

In August 1945, the United States dropped two bombs that changed the image of war. The attacks on Hiroshima and Nagasaki were not more destructive than the March 1945 air raids on Tokyo. In Tokyo as well, the loss of life and property was prodigious. But still, Hiroshima and Nagasaki were different. The most important difference was the fact that so few bombs had done so much damage. Hiroshima and Nagasaki thus opened up a new era in humankind’s destructive potential. It was now feasible for just a few individuals, in a few minutes, to commit genocide.

The “orthodox interpretation” among historians is that the United States chose to drop the bombs because it wanted to avoid having to invade the main Japanese islands, which inevitably would have killed thousands of American GIs and an even greater number of Japanese. The contrasting “revisionist interpretation” is that the United States knew that Japan was ready to give up but dropped the bombs anyway, as a stern message to the Soviet Union. Then there is the “compromise interpretation” that the Truman administration was primarily motivated by the goal of winning the war with Japan, but also considered such a display of force useful for advancing American interests against the Soviets.

In a brilliant book, Five Days in August, Princeton historian Michael Gordin shows that all of these interpretations are guilty of presentism: the tendency to interpret past actions on the basis of current opinions. Gordin asks, did the United States really believe before the event that the bombs were the silver bullets, the war-winning weapons that it came to believe they were after the event? His answer is no.

Gordin demonstrates that practically no one in the US political and military elite believed that dropping one or two of these new bombs would quickly end the war. The bombs were viewed as an incremental improvement over existing weapons, not as a trump card that could overturn the existing strategic situation. There is no evidence that the US military believed it was facing a choice of irradiating Hiroshima and Nagasaki or invading the Japanese home islands. Instead, the atomic bombings were seen as just one more way to soften up Japan for the inevitable invasion. The military therefore continued to launch massive conventional airstrikes even as it launched nuclear ones, and it continued to plan furiously for the ground war to come. The destruction of Hiroshima and Nagasaki also produced no uptick in the military’s planning for postwar demobilization. These historical facts greatly undermine the standard portrayal of the Truman administration’s decision to drop the bomb as a means of winning the war quickly. I should note that unlike the military and political elites, atomic scientists such as Niels Bohr demonstrated considerably more awareness of the bomb as a qualitative transformation in war. But the scientists were not in command. The people at the apex of the American war effort did not treat the question of “to use or not to use” the bomb as the momentous decision that most historians have assumed they did. 

The United States sleepwalked its way into the nuclear age. Could it have been otherwise? If some other country had built the first bomb, would its leaders have thought longer and harder before dropping it on heavily populated urban areas without prior warning? It is always tricky to engage in counterfactual reasoning, but in this case there is some comparative historical data. Great Britain was a joint partner with the United States in the Manhattan Project, and at a meeting in Quebec in 1943, the two countries agreed that any use of bombs produced by the project would require their joint prior approval. Therefore Prime Minister Winston Churchill had a formal say over what was to be done. But as I have reported in my 2009 article “Britain and Hiroshima” in the Journal of Strategic Studies, when the time came for a decision, the British simply handed the Americans a blank check to use the new weapons against Japan, whenever, wherever, and howsoever. This blank check was offered because Churchill, like the Americans, saw the bomb as a quantitative rather than qualitative change. Indeed, back in 1941, when Churchill had approved the original solo British effort to build the bomb, he had done so with the following words: “Although personally I am quite content with the existing explosives, I feel we must not stand in the path of improvement.” Churchill’s blasé attitude about the bomb’s potential value as a strategic weapon remained unchanged up until the Potsdam conference of mid-July 1945.

In sum, in both the United States and the United Kingdom, the people directly responsible for making the choice to drop the bomb on Japan drastically underestimated the strategic impact of that choice. Therefore, the historians’ orthodox interpretation is wrong to suggest that the bombs were dropped to save tens of thousands of American and Japanese lives. And the historians’ revisionist interpretation is wrong to suggest that the bombs were dropped to show the Soviets that Washington was boss. The bombs were dropped simply because they were bombs, and the United States and United Kingdom at that time were very much in the business of bomb-dropping. 

Since 1945, the United States, United Kingdom, and other powers have dropped a lot of bombs on a wide variety of adversaries. But not this type of bomb. Why not?

What explains the long nuclear peace? The game theorist Thomas Schelling opened his Nobel Prize lecture in 2005 with the words, “The most spectacular event of the past half century is one that did not occur. We have enjoyed 60 years without nuclear weapons exploded in anger.” By now, the streak has stretched to 70 years. This history of worldwide nuclear restraint needs to be better known and better understood.

Most observers have attributed the long nuclear peace to the workings of nuclear deterrence. They think that it is cost-free for amoral states in the condition of international anarchy to drop the bomb on other states that cannot retaliate in kind, and they point to Hiroshima and Nagasaki as proof. If only Japan had had nuclear weapons, they argue, Hiroshima and Nagasaki would have never happened. Perhaps so. But even if one assumes that deterrence has been effective in preventing nuclear conflict between nuclear-armed states, why have there been no nuclear attacks since 1945 against non-nuclear-armed states either? We now know that top officials in the White House and the Pentagon did seriously consider using nuclear weapons in Vietnam, for instance. But in the end, they decided that using nuclear weapons in that war was simply a bridge too far. Their self-restraint was remarkable. The most powerful state in the world chose to accept a huge strategic defeat without first unleashing its most powerful weapons against an opponent that could not respond in kind. How could this be?

The key variable that changed in August 1945 and led to 70 years of nuclear peace was the perceived enormity of the choice to drop the bomb. Truman and Churchill made the decisions that led to Hiroshima and Nagasaki without believing that the atomic bomb was qualitatively different from other weapons that had been introduced over the course of a long and bloody war. Given their thinking about the bomb as a merely quantitative improvement in firepower, it is not surprising that they spent hardly any time considering whether or not to use it. But after Hiroshima and Nagasaki and the Japanese surrender, it was impossible for any leader of a nuclear-armed state to make such a choice so casually. The clear post-1945 perception of the enormity of the decision to use nuclear weapons—enormity in every sense: military, political, ethical—has been a major obstacle to any such decision. That perception of enormity is the essence of the so-called “nuclear taboo.” Before the perception of enormity existed, we dropped the bombs; after that perception of enormity came into being, we refrained from dropping them.

I am not saying that after Hiroshima and Nagasaki it became impossible to imagine that any leader could ever convince himself or herself to order a nuclear attack. But still, when a leader knows that he or she is facing a big decision, usually that translates to greater caution. For instance, Richard Nixon prided himself on being a tough guy, and he was trying to work himself up into making a decision to drop the bomb on Vietnam and/or China in October 1969. He called his flirtation with the idea the “madman theory.” That label, “madman theory,” is very apropos. Nixon may have been itching to break the nuclear taboo, but he also knew there was a taboo that would have to be broken. Therefore, in the end he let Henry Kissinger talk him down from the idea. Nixon and Kissinger notably did not apply the same “madman theory” label to the idea of an illegal but conventional bombing of Cambodia, and therefore they easily approved that policy, even though in truth it was also mad.

The perception of the revolutionary character of nuclear weapons has also promoted the long nuclear peace in other, indirect ways. For example, it stands to reason that the more states with the bomb, the higher the chances that one or another of them will use it. Therefore, a major part of the reason for the long nuclear peace has been the success of the struggle for nuclear non-proliferation. Today, despite the passage of seven decades since the invention of nuclear weapons, there are still only nine states in the world that have the bomb. A few other states have tried to get it at various points in time; but still, the degree of self-restraint around the world has been remarkable. We need to explain why the idea of possessing the bomb has not exerted a strong pull on most states around the world.
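The compounding logic behind the claim that more nuclear-armed states means a greater chance of use can be made concrete with a back-of-the-envelope calculation. The short Python sketch below is purely illustrative and is not drawn from the essay: it assumes, for the sake of argument, that every nuclear-armed state independently runs the same small annual probability of using the bomb (the figure of 1 in 1,000 is hypothetical), and it shows how the chance of at least one use in a given year rises with the number of armed states.

```python
# Purely illustrative sketch (not from the essay): assumes each
# nuclear-armed state independently has the same small probability
# of using the bomb in a given year, a deliberately crude
# simplification meant only to show how risk compounds with the
# number of armed states.

def prob_at_least_one_use(n_states: int, p_per_state: float) -> float:
    """Chance that at least one of n_states uses the bomb in a year,
    given an independent per-state probability p_per_state."""
    return 1 - (1 - p_per_state) ** n_states

# With a hypothetical 1-in-1,000 annual chance per armed state:
for n in (2, 9, 30):
    print(n, "states:", round(prob_at_least_one_use(n, 0.001), 4))
# 2 states:  ~0.002
# 9 states:  ~0.009
# 30 states: ~0.0296
```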

A key part of the answer to this puzzle lies, once again, in the global perception of the bomb’s enormity. As I argue in my 2006 book The Psychology of Nuclear Proliferation, the fundamental reason why so many leaders have avoided making clear decisions to go for the bomb is the difficulty of knowing what such a revolutionary act would bring for them and their countries.

The acquisition of the bomb is likely to have very large consequences that are also very unpredictable. There are at least five interrelated dimensions on which leaders must think through the decision to build a nuclear arsenal: military, diplomatic, economic, domestic institutional, and ethical. And knowledgeable elites’ opinions tend to differ widely about the likely effects of getting the bomb on all of these dimensions. In short, to go nuclear is to take a leap into the dark.

Ever since 1945, experienced politicians who found themselves faced with this decision have realized its incalculability. And if you can’t calculate the risks involved, then you can’t determine if you are willing to accept them. Therefore, the vast majority of state leaders who could have decided to go for the bomb have simply punted on the question. They haven’t definitively ruled out getting nuclear weapons, but they also haven’t definitively ruled it in. They have chosen to wait to make a final decision until they can see the pluses and minuses more clearly—but that day almost never comes.

Of course, some leaders are more willing to reap the whirlwind than others are. But luckily, it is quite rare to find such people at the apex of states—and it is especially rare to find them at the apex of those states that have enough internal institutional infrastructure to be able to mount a credible nuclear weapons program while withstanding international pressures to cease and desist. As for terrorist groups, the same craziness that leads many of them to desire nuclear weapons also leads them to fail to create organizations that are capable of working systematically to achieve their nuclear ambitions. Because reckless political decision-making tends not to go together with sober project management, nuclear proliferation has been a mere trickle, not the widely feared flood. 

In sum, my basic storyline is that if we wish to explain the long nuclear peace, we need to focus above all on the perceived enormity of nuclear weapons, which takes us back to the memory of Hiroshima and Nagasaki. It is easy to lament how often humanity forgets the lessons of its past mistakes, but the evidence of the last 70 years suggests that humanity actually did durably learn an important lesson from the atomic bombings of Japan. This lesson may not have slain the nuclear dragon, but it did keep it in its cage.

How much longer can nuclear peace persist? The perception of the enormity of nuclear weapons has been highly effective in causing state leaders to hesitate before they press the button—or even before they create the button. But still, this is not a foolproof system. We are dependent on a very small number of people to continue making correct choices for themselves and for our planet. The only way to achieve complete security from the threat of nuclear weapons is to get rid of them all: nuclear abolition. It is easy to be skeptical about the prospects for nuclear abolition. But if it was possible to outlaw slavery globally, and if it was possible to outlaw chemical and biological weapons, then it may also be possible to outlaw nuclear weapons.

Policymakers and activists are in the best position to evaluate the feasibility of different abolitionist strategies. But I think it’s useful to focus more attention on how merely making efforts to promote abolition can bolster the strongest currently existing obstacle to nuclear attack, which is the perceived enormity of nuclear weapons. In other words, yes, it would be wonderful to achieve nuclear abolition; but even if abolition never comes, still the pursuit of abolition can be a great success by reinforcing the nuclear taboo. Indeed, the insistent demand for abolition is probably the most effective way of reinforcing the taboo.

If I am right that nuclear peace is to a large extent dependent on the maintenance of the global nuclear taboo, then we urgently need to know whether the taboo is getting stronger or weaker today.

According to the logic that Schelling spells out in his Nobel Prize address, there is reason to believe that the taboo is getting stronger. Schelling’s argument is that the taboo on nuclear use is self-reinforcing. The longer it has lasted, the longer it will last. Breaking a tradition that has lasted seven years is one thing; breaking a tradition that has lasted 70 is something very different. Moreover, the self-reinforcing character of the nuclear taboo is not just about numbers of years; it is also about precedents. For instance, because Lyndon Johnson and Richard Nixon decided in the past that it was inappropriate to use the bomb against the Viet Cong, it is more difficult for Barack Obama to decide now that it would be appropriate to use the bomb against ISIS.

On the other hand, the more a tradition ages, the more the memory of the original events that brought that tradition into being may fade, and as a result society may gradually lose its determination to maintain the tradition. From this point of view, it is very worrisome that Americans’ general level of knowledge about nuclear history is falling fast. For instance, in a public opinion poll in the mid-1990s, Americans were asked, true or false, if the first atomic bomb attack in history was against Japan. That question was flunked by only 16 percent of people over the age of 64, but it was flunked by 36 percent of people under 30. Fading memories of the atomic bombings are not just an American phenomenon. In Japan, the overall level of knowledge about Hiroshima and Nagasaki is naturally much higher, and yet not as high as one might think. For instance, a survey by the Japanese national broadcaster NHK in 2005 quizzed Japanese and Americans about the precise date of the atomic attack on Hiroshima. Hiroshima Day, August 6, always garners major media attention in Japan, and NHK broadcasts the entire commemoration ceremony live. Yet the survey found that overall only 37.5 percent of Japanese knew the precise date, and among Japanese people in their 20s and 30s, only 27 percent knew it. Among Americans, by the way, only 3.5 percent knew it.

The polls show that the memory of Hiroshima and Nagasaki is fading with each passing generation. Yet the same polls also show that the young are actually more antagonistic to the possession and use of nuclear weapons than their elders are. The memory is fading, but the taboo is growing. These trends indicate that Schelling is right to place emphasis on the self-reinforcing nature of the tradition of nuclear non-use. Maybe it is not necessary for people to know the details of history in order for their thinking to be deeply marked by it. Young people might not have heard much about Hiroshima, but they do know that they are opposed to nuclear weapons. On the other hand, it could also be that we are living in a time of transition, and that if the memories of Hiroshima and Nagasaki fade any more, anti-nuclear weapons sentiment will start to fade as well. The next generation of young people, having forgotten Hiroshima while experiencing repeated hyper-realistic, 3-D, CGI fantasies of mass destruction with a soft drink in hand, may not be as bothered by the idea of another nuclear strike happening in the real world.

Because maintaining the taboo is a matter of shaping global perceptions, it is clear that alongside governmental policy and non-governmental activism, nuclear scholarship also has a vitally important role to play.

Nuclear scholars’ responsibility, in a nutshell, is to lay bare the complexity and uncertainty that are inherent to the acquisition and use of nuclear weapons. This is their responsibility both as seekers of truth and as promoters of the long nuclear peace. Unfortunately, there is a strong tendency in the American field of international security studies to try to do the opposite and characterize nuclear decision-making as if it were the product of relatively simple strategic calculations. The field’s tendency toward oversimplification of the nuclear issue is the result of its overall bias for broad generalizations based on reductionist models of man, the state, and war. For instance, researchers often describe a nuclear weapons arsenal with the term “nuclear deterrent,” and following on that, they analyze states’ nuclear choices exclusively through the optic of rational deterrence theory—even though the complicated organizations that handle nuclear weapons bear little if any resemblance to the unitary rational actors posited in the theory.

But there is also excellent nuclear scholarship. For example, the Stanford sociologist Lynn Eden’s book Whole World on Fire is a model of analytically rigorous and empirically rich social science that not only advances our understanding of the world, but also simultaneously reinforces the normative case for disarmament. In her book, Eden demonstrates that from the outset of the nuclear age down to the present day, the US military has systematically underestimated the physical impact of its nuclear weapons by focusing exclusively on their blast effects, while ignoring their fire effects. In the case of Hiroshima, the fire was far more devastating than the blast. But the United States has turned its eyes away from this reality. Thus its technical understanding of its prize possession has been deeply mistaken.

This mistaken technical understanding has had pernicious behavioral consequences. Ignoring nuclear weapons’ fire effects during the Cold War encouraged a dramatic overestimate in the number of weapons that were supposedly necessary to fulfill NATO’s nuclear war plans against the Warsaw Pact. The overestimate in turn justified the fantastic expenditures and dangerous arms race that characterized the Cold War. Ignoring fire effects also made nuclear weapons seem usable for tactical purposes, even as battlefield weapons, thus encouraging presidents to think seriously about employing them in places like Korea and Vietnam.

Why did the United States hardly even try to estimate the fire effects of a nuclear explosion? Eden reports that it didn’t include estimates of fire effects because it told itself that they were too unpredictable. That belief in the unpredictability of fire effects was actually incorrect, but no matter. The nuclear weapons establishment chose not to recognize the full panoply of effects of its own weapons, because it did not want to be overwhelmed by the complexity of those effects. It was apparently afraid that if it did recognize the complexity, the nuclear arsenal might be paralyzed into inaction. But is such paralysis really something to be worried about? Isn’t it something to promote?

Eden’s study, like all the best nuclear scholarship, emphasizes how big these weapons are, in both physical and symbolic terms—too big, really, to be handled properly by mere mortals. The bigness of nuclear weapons is the simple but essential insight that comes from serious contemplation of the bomb’s mysteries. The world needs more studies that elaborate this fundamental point. It also needs to hear more about such studies. If the results of the best nuclear scholarship can find their way into the common sense of global society, this will greatly improve the chances that the long nuclear peace will persist far into the future.

Editor's note: This essay is adapted from a lecture the author gave on September 30 to a conference at the University of California Berkeley Center for Japanese Studies, “Perspectives on 70 Years of the Nuclear Age: From Berkeley, a Birthplace of the Atomic Bomb.”
