
Question


Chapter 3

  1. Describe and explain the terms climate sensitivity, fat tail, and low beta, and how they relate to climate change.
  2. What are the estimated economic costs of global warming? Can we trust these estimates at higher average temperature increases? Why/why not? Are the current estimates of the optimal carbon price too low or too high? Explain and relate to the "cost of the tails".

Chapter 4

  1. Describe and explain the term willful blindness, and how it relates to climate change.
  2. Why does standard cost-benefit analysis fall apart when we try to put a sufficiently high price on carbon to avoid a catastrophe?

Data:

In 1995, the Intergovernmental Panel on Climate Change (IPCC) declared it was "more likely than not" the case that global warming was caused by human activity. By 2001, it was "likely." By 2007, it was "very likely." By 2013, it was "extremely likely." There's only one step left in official IPCC lingo: "virtually certain." The big question is how certain the world needs to be to act in a way that is commensurate with the magnitude of the challenge. An equally important question is whether all this talk of certainty is conveying what it ought to convey.

The increasing likelihood of anthropogenic climate change has three sides to it. Only one of them is good. The first piece of bad news is that we humans are, in fact, increasing global temperatures and sea levels alike. It would have been cause for celebration if, say, the 2013 report had decided that science had gotten it wrong all along. Imagine the New York Times headline: "IPCC says decade 'without warming' here to stay." Alas, no such luck. Modern atmospheric science once again confirmed the basic ideas of high school chemistry and physics, going back to the 1800s: more carbon dioxide in the atmosphere traps more heat.

The good news then, in some twisted philosophical sense, is the confirmation of bad news. Climate science has progressed over the past couple of decades to the point where it is now able to make the definitive statement that global warming is extremely likely caused by human activity. We know enough to act. Ignoring that reality, by now, would amount to willful blindness.

But there's an additional piece of bad news: the false sense of security conveyed by all this talk about certainty. At least by one important measure, we don't appear to be closer to understanding how much our actions will warm the planet than we were in the 1970s, at the dawn of modern climate science and long before the first IPCC report. Worse still is that what we have learned since then points to the fact that what happens at the very extremes, the tails of the distribution, may dwarf all else.

SENSITIVE CLIMATES

In 1896, eight decades before Wally Broecker coined the term "global warming" and long before anyone knew what a climate model was, Swedish scientist Svante Arrhenius calculated the effect of doubling carbon dioxide levels in the atmosphere on temperatures. Arrhenius came up with a range of 5 to 6°C (9 to 11°F). That effect, what happens to global average surface temperatures as carbon dioxide in the atmosphere doubles, has since become known as "climate sensitivity" and has turned into an iconic yardstick.

Climate sensitivity itself is already a compromise, a way of making an incredibly complex topic slightly more tractable. The parameter does have a few things going for it. For one, the starting level of carbon in the atmosphere doesn't matter, at least not by much. One of the few rather well-established facts is that eventual global average temperatures scale linearly with percentage changes in underlying carbon dioxide concentrations. The first 1 percent increase in carbon in the atmosphere has a similar impact as the 100th. Any doubling of concentrations, from anywhere within a reasonable range, leads to roughly the same eventual increase in global temperatures. The definition of climate sensitivity plays off that fact. A doubling of preindustrial carbon dioxide levels of 280 parts per million (ppm) seems all but inevitable.
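The scaling just described can be made concrete with a few lines of code. The sketch below is illustrative only: it assumes a single fixed climate sensitivity of 3°C (the 1979 Charney best estimate discussed just below) and uses the standard approximation that eventual warming grows with the base-2 logarithm of the concentration ratio, so every doubling adds the same amount of warming regardless of the starting level.

```python
import math

def eventual_warming(co2_ppm, sensitivity_c=3.0, preindustrial_ppm=280.0):
    """Approximate eventual warming (deg C) for a given CO2 concentration.

    Logarithmic scaling: every doubling of concentrations adds roughly one
    'climate sensitivity' worth of warming, regardless of the starting level.
    The sensitivity value of 3 deg C is an assumption for illustration.
    """
    return sensitivity_c * math.log2(co2_ppm / preindustrial_ppm)

for ppm in (400, 560, 700):
    print(f"{ppm} ppm -> about {eventual_warming(ppm):.1f} deg C of eventual warming")
# 560 ppm, a doubling of the 280 ppm preindustrial level, returns exactly the
# assumed sensitivity of 3.0 deg C; 700 ppm lands close to 4 deg C.
```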
The world has just passed carbon dioxide concentrations of 400 ppm, and levels are still rising at 2 ppm per year. Counting other greenhouse gases, the International Energy Agency (IEA) estimates that the world will end up somewhere around 700 ppm by 2100, two-and-a-half times preindustrial levels, unless major emitters take drastic additional steps.

Luckily, Arrhenius's climate sensitivity range of 5 to 6°C (9 to 11°F) has proven to be too pessimistic. In 1979, a National Academy of Sciences Ad Hoc Study Group on Carbon Dioxide and Climate concluded that the best estimate of climate sensitivity was 3°C (5.4°F), give or take 1.5°C (2.7°F). "Conclude" may be a bit strong a term to use here. The process is commonly retold thus, not without admiration for academic genius at work: Jule Charney, the study's lead author, looked at two prominent estimates at the time, 2°C (3.6°F) on one end and 4°C (7.2°F) on the other, averaged them to get 3°C (5.4°F), and added half a degree centigrade on either end to round out the range because, well, uncertainty. Thirty-five years of ever more sophisticated global climate modeling later, our confidence in the range has increased, but what's now called the "likely" range of 1.5 to 4.5°C (2.7 to 8°F) still stands. That should be a tipoff right there that something rather strange is happening. There's something stranger still.

A PLANETARY CRAPSHOOT

The IPCC defines "likely" events to have at least a 66 percent chance of occurring. That still tells us nothing about whether things may turn out all right (with climate sensitivity closer to 1.5°C, or 2.7°F) or not all right at all (closer to 4.5°C, or 8°F). Taking the IPCC probability descriptions literally, the chance of being outside that range would be up to 34 percent. There's no precise verdict as to where these 34 percent go, though there's clearly more room above 4.5°C (8°F) than below 1.5°C (2.7°F). See figure 3.1.

Figure 3.1: Eventual global average surface warming due to a doubling of carbon dioxide (climate sensitivity). [Probability density over 0 to 10°C, with the "likely" range of 1.5 to 4.5°C marked.]

For any numbers below 1.5°C (2.7°F), we could rightfully celebrate, ideally with a bottle of Champagne, flown in from France for the occasion and letting out an extra puff of carbon dioxide when opened. Though that's unlikely. And not even these low climate sensitivity realizations of 1.5°C (2.7°F) provide a guarantee that climate change won't be bad. In fact, quite the opposite: at 700 ppm, final temperatures would still rise to higher than where they were over three million years ago. Think back to camels in Canada, who happily marched in what is now frozen tundra at temperatures of 2 to 3.5°C (3.6 to 6.3°F) above preindustrial levels. And we'd be at 2°C (3.6°F) with a climate sensitivity of 1.5°C (2.7°F), which is at the lower edge of the likely range.

All that makes our inability to exclude climate sensitivities above 4.5°C (8°F) all the more significant. Any probability of climate sensitivity that high should make for (heat-induced) shudders. The most important question then is: how fast does the chance of hitting any of these higher climate sensitivity figures go to zero as the upper bound of climate sensitivity increases?
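One way to get a feel for the question just posed is to calibrate a right-skewed distribution to the IPCC's "likely" range and read off the chance of extreme outcomes. The sketch below is a rough illustration, not the authors' actual calibration: it assumes climate sensitivity follows a lognormal distribution, fits it so that roughly 66 percent of the mass falls between 1.5 and 4.5°C, and combines it with the logarithmic concentration scaling from the previous sketch.

```python
import math
from statistics import NormalDist

# Assumed, illustrative calibration: lognormal climate sensitivity with its
# median at the geometric midpoint of the "likely" range (1.5-4.5 deg C) and
# a spread chosen so that about 66% of the mass falls inside that range.
median = math.sqrt(1.5 * 4.5)
mu = math.log(median)
sigma = (math.log(4.5) - mu) / NormalDist().inv_cdf(0.5 + 0.66 / 2)

def p_warming_above(threshold_c, co2e_ppm, preindustrial_ppm=280.0):
    """Probability that eventual warming exceeds threshold_c at a given CO2e
    concentration, under the assumed lognormal sensitivity and logarithmic
    concentration-to-temperature scaling."""
    doublings = math.log2(co2e_ppm / preindustrial_ppm)
    required_sensitivity = threshold_c / doublings
    z = (math.log(required_sensitivity) - mu) / sigma
    return 1.0 - NormalDist().cdf(z)

print(f"P(eventual warming > 6 deg C at 700 ppm CO2e) ~ "
      f"{p_warming_above(6.0, 700):.0%}")
# This crude fit lands in the same broad range as the text's roughly
# 10 percent figure; the exact number depends heavily on how the tail is fit,
# which is exactly the point about fat tails.
```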
One could imagine an extreme scenario in which the chance that climate sensitivity is above 4.5°C is greater than 10 percent, but if the chance of being above 4.6°C were zero, we could exclude any even higher numbers. If only the planet were that lucky. It's extremely unlikely (in the English rather than the strict IPCC sense of that term) that the probabilities of higher climate sensitivities would drop off that quickly. It's much more likely that the chance of hitting higher temperatures tapers off at an uncomfortably slow pace, before hitting something close enough to zero to provide a reasonable level of comfort that even more extreme numbers won't materialize. That scenario is closer to what statisticians describe as a "fat tail." The probability of 4.6°C is smaller than for 4.5°C, though not by much.

The all-important question, then: how likely is a potentially catastrophic realization of climate sensitivity? The IPCC says it's "very unlikely" that climate sensitivity is above 6°C (11°F). That's comforting but for its definition of just what "very unlikely" means: a chance of anywhere between 0 and 10 percent. And that range is still only the likelihood that climate sensitivity is above 6°C (11°F), not actual temperature rise.

Let's jump right to the conclusion. Take the latest consensus verdict at face value and assume a "likely" range for climate sensitivity of between 1.5 and 4.5°C (2.7 and 8°F). Equally important, stick to the IPCC definition of "likely" and assume it means a chance of greater than 66 percent, but less than 90 percent. (The latter would be "very likely.") And take the IEA's interpretation of current government policy commitments at face value. Here's what you get: about a 10 percent chance of eventual temperatures exceeding 6°C (11°F), unless the world acts much more decisively than it has.

Figure 3.2: Eventual global average surface warming based on passing 700 ppm CO2e. [Probability density over 0 to 10°C; the tail above 6°C, marked ">10%," is shaded.]

Figure 3.2 and table 3.1 are the culmination of parsing umpteen scientific papers and countless hours spent fretting over how to get it just so.

Table 3.1: Chance of eventual warming of >6°C (11°F) rises rapidly with increasing CO2e concentrations

CO2e concentration (ppm):     400     450     500     550     600     650     700     750     800
Median temperature increase:  1.3°C   1.8°C   2.2°C   2.5°C   2.7°C   3.2°C   3.4°C   3.7°C   3.9°C
                              (2.3°F) (3.2°F) (4.0°F) (4.5°F) (4.9°F) (5.8°F) (6.1°F) (6.7°F) (7.0°F)
Chance of >6°C (11°F):        0.04%   0.3%    1.2%    3%      5%      8%      11%     14%     17%

Rows 1 and 2 in the table represent the move from carbon dioxide-equivalent (CO2e) concentrations in the atmosphere to ultimate temperature increases. Row 3 shows the corresponding chance of exceeding final average temperature increases of 6°C (11°F). Whenever we had to make a judgment call of where to go next, we tried to take the more conservative turn, which may well underplay some of the true uncertainties involved.

The scariest bit is just how fast the chance of eventual temperatures exceeding 6°C (11°F) goes up. Compare changes in the median temperature increase with the chance of passing 6°C (11°F). Going from 400 to 450 ppm, the difference between 1.3°C (2.3°F) and 1.8°C (3.2°F) for the most likely temperature increase, may not be all that much. There may be some, potentially irreversible, tipping points along the way, but ultimately it's only half a degree centigrade (less than a degree Fahrenheit), or an increase of a bit more than a third. At the same time, the chance of exceeding 6°C (11°F), the last row, just jumped from 0.04 percent to 0.3 percent, almost tenfold. All that's just for moving from 400 to 450 ppm, while the world has already passed 400 ppm for carbon dioxide alone and 440 to 480 ppm for carbon dioxide-equivalent concentrations!
A further jump to 500 ppm increases that chance of catastrophe to 1.2 percent. By the time concentrations reach 700 ppm, where the IEA projects the world will end up by 2100 even if all governments keep all their current promises, the chance of eventually exceeding 6°C (11°F) rises to about 10 percent. That looks like the manifestation of a fat tail, if there ever was one (even though strictly speaking we don't even assume that property in our calculations; our tail is "heavy," not quite "fat" in statistical terms).

At 700 ppm, the median temperature increase would be 3.4°C (6.1°F). This alone would be a profound, earth-as-we-know-it-altering change. Polar regions would likely warm by at least twice that global average, with everything that entails. The costs would be staggering and should have prompted the world's leaders to head off such a possibility long ago. Yet those costs are still nothing compared to what would happen if final temperatures were to exceed 6°C (11°F). It's the roughly 10 percent chance of near-certain disaster that makes climate change costlier still. Now we are truly in the realm of what Nassim Nicholas Taleb describes as a "Black Swan" and Donald Rumsfeld as "unknown unknowns." We don't know the full implications of an eventual 6°C (11°F) temperature change. We can't know. It's a blind planetary gamble.

Devastating home fires, car crashes, and other personal catastrophes are almost always much less likely than 10 percent. And still, people take out insurance to cover against these remote possibilities, or are even required to do so by laws that hope to avoid pushing these costs onto society. Risks like this on a planetary scale should not, must not, be pushed onto society.

"Must not" is a strong phrase. It conjures images of bans or, in dollars and cents, infinite costs. That goes head-to-head against any economist's idea of trade-offs. The costs of global warming may be high, perhaps higher than anyone thought possible. But surely, they can't be infinite.

MONEY IS EVERYTHING

Trying to estimate the eventual temperature increase is one thing. But even if we knew that number with any precision for that one hot day in August 2100 in Phoenix, what actually concerns us isn't necessarily how high temperatures climb. We care more about climate impacts, and how much they will cost society. Sea-level rise is one. Another is extreme events like droughts or hurricanes that might hit your home long before rising sea levels would drive you from it altogether.

The business of pinning down specific impacts is messy and fraught with its own uncertainties. There are known unknowns aplenty. Unknown unknowns may yet dominate. And tipping points and other nasty surprises seem to lurk around every corner. Some of them may put warming itself on overdrive. Releasing vast carbon deposits in Siberian or Canadian permafrost could prove to be a tipping point resulting in bad global warming feedbacks. Others may have relatively less influence on actual temperatures but have plenty of other impacts.
Melting of Greenland and the West Antarctic ice sheets alone already raises sea levels by up to one centimeter (0.4 inches) each decade. If the Greenland ice sheet fully melted, sea levels would rise 7 meters (23 feet). Full melting of the West Antarctic ice sheet would add another 3.3 meters (11 feet). That's not happening tomorrow or even this century. The IPCC's estimates of global average sea-level rise for this century top out at 1 meter (3 feet). But the tipping point at which the full eventual melting becomes inevitable will be passed much sooner. We may have already passed the tipping point for the West Antarctic ice sheet.

These compounding uncertainties, first from emissions to concentrations to temperatures, and then from temperatures to ultimate impacts measured in dollars and cents, make things extremely hard to get right. That hasn't stopped economists from trying. One of the best is Bill Nordhaus. His DICE model, short for Dynamic Integrated Climate-Economy model, has been publicly available since the early 1990s. Generations of graduate students have played around with it, tried to poke holes in it, and derived estimates of "optimal" global climate policy.

Nordhaus's own estimates of the social cost of carbon have been going up ever since the model was first released in 1992. Back then, his economically optimal response to climate change was a global carbon tax of about $2 per ton of carbon dioxide (in 2014 dollars). That went hand in hand with global average warming climbing to 4°C (7.2°F) and beyond. In the tug-of-war between economic growth and a stable climate, growth won. Climate impacts have been catching up ever since, pushing unfettered, fossil-fueled growth further and further from being optimal. Today, Nordhaus's preferred "optimal" estimate is around $20 per ton of carbon dioxide. The resulting final temperature increases now top out at around 3°C (5.4°F).

The search for the optimal carbon price is a hot-button issue. Nordhaus's formally derived $20 is lower still than the average estimate of $25 per ton, presented in his own book as an "illustrative" example. That, in turn, is lower than the current "central" U.S. government estimate of around $40, derived from a combination of outputs from DICE and two other assessment models.

None of that yet factors in the proper cost of the tails, fat or otherwise. Nordhaus's maximum average temperatures may stay below 3°C (5.4°F), but that's the average. It still leaves unspecified the probability of topping 6°C (11°F) or more. Some other estimates attempt to take uncertainty more seriously. The U.S. government itself presents what it calls the "95th percentile estimate" as a proxy of sorts for capturing extreme outcomes. The optimal number there: over $100 per ton of carbon dioxide emitted today.

What then does the central $40 estimate include, and how is it derived? Two key issues loom large: dollar estimates of damages caused, and discounting. We'll address them in turn.

HOW MUCH FOR A DEGREE OF WARMING?

Compare the average climates in Stockholm, Singapore, and San Francisco. Winters in Sweden are long, cold, and dark. You'll have to wait for the summer months to get average highs above 20°C (68°F). Singaporeans don't have this problem. Their average low is higher than Stockholm's average high year-round. All that makes San Franciscans feel smug, fog and all. They enjoy stable Mediterranean climates year-round, with a week of rain in "winter." Still, all three cities are thriving metropolises.
Historians may even argue that all of them got their start because of winning geographies. What, then, should lead us to believe that one climate is better or worse than another? Or that warmer average global temperatures come with costs?

The costs of climate change aren't the result of moving away from some mythical optimal climate. Stockholm may be a more pleasant place with a degree or two extra. Incidentally, that's precisely what Swedish scientist Svante Arrhenius, of greenhouse effect fame, suggested we may want to do deliberately: burn more coal "to enjoy ages with more equable and better climates, especially with regards to the colder regions of the earth." In Arrhenius's defense, he said so in 1908, after he had identified the greenhouse effect, but long before it became clear that there are significant costs to pumping carbon dioxide into the atmosphere.

In the end, the costs of small temperature changes are, for the most part, the sum of the costs of changing what we've gotten used to. And it's not just that Swedes already own winter jackets and Singaporeans air conditioners. It's massive investments and industrial infrastructures, built around current climates, and current sea levels, that make temperature increases costly. And once again, it isn't temperatures themselves that matter as much as what these rising temperatures entail. One such effect is rising sea levels. Then there are storm surges on top, by then stronger and more frequent precisely because of climate change. And all that's the perfectly "normal," average effect of sea-level rise baked into where we are already heading. None of this is yet taking into account fat tails or other catastrophic scenarios.

When models incorporate the latest science and quantify ever more of the damages likely to occur because of climate change, the estimated costs of carbon pollution go up. DICE & Co are perennially playing catch-up with the latest science. In 2010, the central U.S. government estimate of the social costs of one ton of carbon dioxide emitted in 2015 was around $25. The 2013 iteration increased it to around $40.

None of this is meant to decry the modeling efforts. Quite the opposite. Getting things right is incredibly difficult. If anything, it is a call to invest in economic modeling, in a big way. Nordhaus's DICE model, as well as its main competitors, FUND and PAGE, were all started by one person, and have been painstakingly maintained, patched, and modified over years and decades by a small group of dedicated economists. Meanwhile, when big business tries to analyze what toothpaste flavor to sell where, it uses massive quantities of geo-spatial, customer-level data, analyzed by dozens of dedicated statisticians and programmers.

We certainly shouldn't scrap economic climate models for their inadequacy. If anything, we should be supercharging them: IBM-ifying their operation. There's much more at stake here than with selling toothpaste. Yet Colgate and Procter & Gamble are competing with the help of massive data operations, while DICE can run on your home computer. More manpower and data would at least help the models incorporate the latest available information in real time.

Even if we did all of that, though, there would still be one major problem: how should we quantify the damages caused by potentially catastrophic climate change? More data won't necessarily help us make inroads on that question. DICE & Co mostly look to the past for guidance.
Hundreds of scientific studies try to quantify the impacts of global warming on anything from sea-level rise to crop yields to tropical storms to war. The task then is to translate those impacts into dollars and cents. We quickly run into two problems.

For one, only a small part of known damages can be quantified. Lots are missing. The list of currently unquantified and, at least in part, unquantifiable damages spans everything from known respiratory illnesses from increased ozone pollution due to warmer surface temperatures to the effects of ocean acidification. Moreover, the only parts we can truly quantify are in a relatively narrow, low-temperature range: changes of fractions of a degree, maybe 1°C (1.8°F), or maybe 2°C (3.6°F) of global average warming.

How can we estimate what happens at 3, 4, or even 5°C (5.4, 7.2, or 9°F)? Extrapolate, extrapolate, extrapolate. That's at least what current models do. Take what happens at 1 or 2°C and scale it up. We know that, because of tipping points and other possibly nasty surprises, we can't just look at things linearly. No one seriously proposes that. Instead, DICE mostly relies on something close to quadratic extrapolations: if 1°C causes $10 worth of damages, then 2°C doesn't cause $20 (that's linear) but $40. More specifically, Nordhaus estimates that warming of 1°C costs less than 0.5 percent of global GDP, 2°C costs around 1 percent, and 4°C costs around 4 percent. Things take off after that, but even 6°C stays below 10 percent. Mind you, that's a big absolute number: 10 percent of total global economic output today would be around $7 trillion. And if these 6°C (11°F) changes were to materialize a century or more from now, that fraction would apply to an economy many times larger than today's.

But how can we be certain that it's the right number? We can't. Once we extrapolate damage estimates as far out as 6°C (11°F), it all becomes guesswork. Using a quadratic function is a convenient shortcut, but it's not much more than that. Lots of other extrapolations would fit the observed damages on the lower end of the scale but would yield wildly different results on the upper end. For instance, figure 3.3 shows how an exponential rather than quadratic extrapolation of damages yields starkly different results: for 1°C and 4°C, the two lines are identical. For 2°C and 3°C, they are close enough to be indistinguishable given the uncertainties. At 5°C, things begin to diverge. By 6°C, they might as well be describing different planets. The quadratic extrapolation ends up at a bit under 10 percent of global economic output. The exponential comes in closer to 30 percent.

Figure 3.3: Quadratic and exponential extrapolations of global economic damages. [Damages as a share of global economic output, 0 to 30 percent, against eventual global average warming of 1 to 6°C; the two curves diverge sharply above 4°C.]

We aren't saying a 30 percent decline in output is any more correct than a 10 percent decline should global average temperature increases hit 6°C (11°F). We just don't know. And no one else does either. One could tell stories about how 10 percent may be too high because people will be able to cope. Even with 6°C (11°F) of warming, Stockholm then will still be cooler than Singapore now. Or one could tell stories around how 30 percent may still be too low because neither Stockholm nor Singapore would be around to see the day. Their current coastlines would be on track to be submerged under several meters of water. Would, not may.
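The divergence between extrapolations that agree at low temperatures is easy to reproduce. The sketch below is purely illustrative and is not the calibration behind figure 3.3: it assumes a quadratic damage function anchored to roughly Nordhaus-style numbers (about 1 percent of GDP at 2°C and 4 percent at 4°C) and an exponential alternative forced to agree with it at 1°C and 4°C, then compares the two at 6°C.

```python
import math

def quadratic_damages(t):
    """Damages as a fraction of GDP: 0.25% x T^2 (about 1% at 2 deg C, 4% at 4 deg C)."""
    return 0.0025 * t ** 2

def fit_exponential(anchor_low=1.0, anchor_high=4.0):
    """Fit D(T) = a*(exp(b*T) - 1) so it matches the quadratic at the two anchors."""
    d_low, d_high = quadratic_damages(anchor_low), quadratic_damages(anchor_high)
    lo, hi = 1e-6, 5.0
    for _ in range(100):  # bisection on the exponent b
        b = (lo + hi) / 2
        ratio = (math.exp(b * anchor_high) - 1) / (math.exp(b * anchor_low) - 1)
        lo, hi = (b, hi) if ratio < d_high / d_low else (lo, b)
    a = d_low / (math.exp(b * anchor_low) - 1)
    return lambda t: a * (math.exp(b * t) - 1)

exponential_damages = fit_exponential()
for t in (1, 2, 4, 6):
    print(f"{t} deg C: quadratic {quadratic_damages(t):.1%}, "
          f"exponential {exponential_damages(t):.1%}")
# The two curves agree at the anchors and stay close at low temperatures, but
# this particular exponential already roughly doubles the quadratic's estimate
# by 6 deg C; steeper, equally defensible fits diverge far more.
```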
But it's once again the deep-seated uncertainties around the eventual extent and timing of the consequences that add to the true costs.

It's also not at all clear that we should be thinking about damages as a percentage of output in any given year. Standard practice for DICE and other models is to assume that the economy hums along just fine until damages from climate change get subtracted at some point in the future. Catastrophic or not, conventional estimates of climate damages will feel small compared to the amazing increases in wealth that economic growth is assumed to bring. At a 3 percent annual growth rate, global economic output will increase almost twenty-fold in a hundred years. Subtracting 10 percent, 30 percent, or even 50 percent for climate damages after a hundred years will still leave the world many times richer than it is today. Climate change, in short, may be bad, but even the worst seems to leave the world much better off, so long as economic growth remains robust.

Instead assume that damages affect output growth rates rather than output levels. Climate change clearly affects labor productivity, especially in already hot (and poor) countries. Then the cumulative effects of damages could be much worse over time. That's the beauty, or here, the ugliness, of compound growth rates. All it took was a small but all-important change in a fundamental assumption.

Lastly, the way that climate damages are assumed to interact more generally with economic output matters a lot. DICE & Co assume that climate damages are a simple fraction of GDP: the higher the temperature, the greater that fraction. That seems like an innocuous enough assumption, but there are some stark implications. GDP and temperature just became interchangeable. Or rather: climate damages amounting to 1 percent of output can always be offset by a 1 percent increase in the output itself. More GDP is good. If more GDP implies higher damages, increase GDP further and the world will still be better off.

It's in the DNA of many economists to make that assumption. Growth, after all, is generally good. Alas, not all environmental damages can be offset so easily just by increasing GDP. Loss of human lives, ecosystems, or food isn't compensated so readily by increased consumer electronics. To put it in more stark terms, if the global food supply suffers from climate change, boosting GDP by building more iPhones won't do much for those who are starving. Coming up with better ways to produce food would.

That's typically the rejoinder of those in favor of using the standard multiplicative model of damages. Human ingenuity has seemingly outpaced environmental degradation in the past. Things always seem to be getting cheaper, smaller, faster, better. Technology will win the day once again. Maybe. But what if there are limits? What if we can't, at some point, substitute away from bad environmental outcomes in one area by increasing output further? Then more GDP will no longer compensate so easily for worse climate damages. The usual logic around economic growth being able to make up for climate damages just got turned on its head: richer societies tend to prefer a better environment more so than poorer ones. In this world, the higher we can expect future GDP to be, the more valuable it is to have done something about global warming pollution today.
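The contrast drawn a few paragraphs above, between damages that shave a slice off the level of future output and damages that shave a fraction off the growth rate itself, can be made concrete with a toy calculation. The numbers below (3 percent baseline growth, a 25 percent level hit after a century, a half-point growth-rate hit) are illustrative assumptions, not estimates from DICE or any other model.

```python
YEARS = 100
BASE_GROWTH = 0.03    # assumed baseline annual growth rate
LEVEL_HIT = 0.25      # assumed one-off damage: 25% of output in year 100
GROWTH_HIT = 0.005    # assumed damage to the growth rate: 0.5 points per year

no_climate = (1 + BASE_GROWTH) ** YEARS                    # ~19x today's output
level_damages = no_climate * (1 - LEVEL_HIT)               # ~14x today's output
growth_damages = (1 + BASE_GROWTH - GROWTH_HIT) ** YEARS   # ~12x today's output

print(f"No climate damages:    {no_climate:5.1f}x today's output")
print(f"25% off the level:     {level_damages:5.1f}x")
print(f"0.5 points off growth: {growth_damages:5.1f}x "
      f"({1 - growth_damages / no_climate:.0%} below the no-damage path)")
# Even a seemingly small, permanent drag on growth compounds into a loss
# comparable to, or larger than, a one-off 25 percent cut a century out.
```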
According to one study, if we assume that damages are additive rather than multiplicative, that food and iPhones aren't interchangeable, the "optimal" global average temperature increase is cut in half. If the standard, multiplicative version leads to around 4°C (7°F) of optimal eventual warming, making the simple change to additive damages will result in a final optimal temperature increase of below 2°C (3.6°F). That's an enormous difference, and it goes to show the importance of the assumptions that are feeding into models like DICE & Co. "Garbage in, garbage out," as the saying goes. Here it takes the form: "optimism in, optimism out." Feed a slightly different functional form through the most standard of climate-economy models, and the optimal climate policy can look very different.

Once again, the inherent uncertainty is the biggest story. That goes for functional forms of the damage function as well as for lots of other factors. Even if we knew with certainty how emissions develop, how concentrations follow, how temperatures react, and how sea levels rise, and we don't, we would still need to translate it all into dollars and cents.

It's not useful to pick different types of extrapolations that deterministically project either 10 or 30 percent or more of economic damages by the time temperatures hit 6°C (11°F) above preindustrial levels. Rather, the correct approach is to do here what we just did for final temperature outcomes: look at the entire distribution of possible damages for each temperature outcome, not the expected damages conditional on any one temperature level. In other words, if temperatures were to go up 6°C (11°F), what's the probability of damages hitting 10 percent of GDP, or 30 percent, or any quantity in between or beyond?

The problem is that we have no idea. There's always a small chance that any particular final temperature wouldn't cause any damage. There's also always a small chance that it would cost the world. The most likely outcome may well be somewhere in the middle, maybe indeed somewhere in the 10 to 30 percent range for warming of 6°C (11°F), but that's not the point. Or at least it's not sufficient. It's a "guesstimate" at best, a guess at worst.

Therefore, we simply can't give you another table the way we did for median temperature outcomes and probabilities of hitting 6°C (11°F). We don't know enough to fill in even the one row indicating average global damages at each temperature outcome. Bill Nordhaus's estimates around average expected damages, that warming of 1°C costs less than 0.5 percent of global GDP, 2°C costs around 1 percent, and 4°C costs around 4 percent, could be a start. But even there, anything above around 2°C (3.6°F) is already largely guesswork. And we know much too little about the actual distribution of damages at each temperature level to estimate the third row, for 50 percent or any other number for catastrophic impacts, such as in table 3.2.

When it comes to high-temperature damages, the state-of-the-art economic models simply aren't much better than fitting a curve around what we know at low temperatures, and extending it into what we don't, well beyond the range of historically observed temperature increases, into ones that mark uncharted territory for human civilization. Yet again, that's not to decry these modeling efforts. It's just to reiterate that inherent uncertainties will probably determine the final outcome. All that begs the philosophical question: is some number better than no number at all?
Table 3.2: Knowledge of economic damages decreases quickly with increased global average warming

Final temperature change:    2°C     2.5°C   3°C     3.5°C   4°C     4.5°C   5°C     5.5°C   6°C
                             (3.6°F) (4.5°F) (5.4°F) (6.3°F) (7.2°F) (8.1°F) (9°F)   (10°F)  (11°F)
Average global damages:      1%      1.5%    2%      3%      4%      ?       ?       ?       ?*
Chance of damages >50%
of economic output:          ?%      ?%      ?%      ?%      ?%      ?%      ?%      ?%      ?%

* Our range for average global damages from 6°C (11°F) of warming is 10 to 30 percent throughout the text, though that's hardly scientific enough to merit mention in this table. It's simply an extrapolation, using quadratic and exponential curves, from what we know, or think we know, happens at 1 or 2°C (1.8 or 3.6°F).

If Nordhaus's estimate for the average global damages caused by final warming of 6°C (11°F) is 10 percent, and a simple exponential extrapolation gives us 30 percent, should we be using this 10 to 30 percent range at all? And what happens if we have fundamentally mis-specified damages because, for example, they affect growth rates rather than output levels, or because damages are inherently additive rather than multiplicative?

But what's the alternative? If we didn't use these numbers in government benefit-cost analyses, we would essentially be accepting a climate damage estimate of zero. That's most definitely the wrong number. So better to go with the standard output of DICE and models like it. The U.S. government's $40 per ton figure is as good as any in that regard, even if still a likely underestimate. Let's at least run with it for now to illustrate another important point.

HOW MUCH FOR A DEGREE OF WARMING ONE HUNDRED YEARS FROM NOW?

Whether damages at 6°C (11°F) of eventual warming are 10 percent or 30 percent of global economic output, or not anywhere near that range, may be anyone's guess. The one thing we know for sure is that we ought to discount whichever number we get. The basic logic of discounting is sound, and ever present: it's a combination of delayed gratification and risk. Having $1 today is worth more than having it ten years from now. Answering the question of how much more seems to be as much an art as a science. But it doesn't need to be.

In fact, there's a website for it. Go to treasury.gov to find the interest rate for what are commonly viewed as the least risky investments imaginable: U.S. government bonds. Lend the United States of America $100 today for up to thirty years, and see your investment grow each year at the rate displayed there. More specifically, you'd want to look for Treasury Inflation-Protected Securities, or TIPS. That way what you see is what you get in purchasing power. Inflation won't eat into your earnings. That rate has been hovering at around 2 percent a year for the past ten years. At the moment it's closer to 1 percent.

Contrast that range with the central estimate of 3 percent that the U.S. government uses in its social cost of carbon calculations. Nordhaus, in his DICE model, arrives at a default value of around 4 percent. Lord Nicholas Stern, in his Stern Review on the Economics of Climate Change, used 1.4 percent. He was also roundly criticized for that low choice at the time. So what is the right discount rate? The short answer is that we don't know, but we are pretty sure the correct long-run discount rate should decline over time "toward its lowest possible value." That seems rather self-serving for anyone trying to argue for strong climate action today.
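Why the choice among these rates matters so much becomes obvious once you discount a distant damage back to the present. The sketch below simply computes the present value of $100 of climate damages a hundred years out at the rates just mentioned (Stern's 1.4 percent, the U.S. government's central 3 percent, DICE's roughly 4 percent); the $100 figure and the hundred-year horizon are arbitrary illustrations.

```python
DAMAGE = 100.0   # illustrative: $100 of climate damages
HORIZON = 100    # illustrative: incurred 100 years from now

for label, rate in [("Stern Review", 0.014),
                    ("U.S. government central", 0.03),
                    ("DICE default", 0.04)]:
    present_value = DAMAGE / (1 + rate) ** HORIZON
    print(f"{label:25s} {rate:.1%} -> worth ${present_value:6.2f} today")
# Roughly $25, $5, and $2 respectively: the implied carbon price can easily
# move by an order of magnitude on the choice of discount rate alone.
```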
A low discount rate implies that future climate damages will be more significant in today's dollars and, thus, favors strong climate action today. But there is, in fact, quite a bit of underlying science pointing in that direction. Once again, it's what we don't know that points mostly in one direction. The primary driver for low discount rates is uncertainty around the correct rate itself. Who knows what the discount rate should be a century or two from now? The less we know about the correct discount rate, the lower it ought to be. Since we know less about the right discount rate the further out we go, the rate ought to decline over time. So what exactly is that number? It's probably not 4 percent or 3 percent as currently used, but likely significantly lower, perhaps 2 percent or even less. It's so far in the future that we can't know for sure, but precautionary prudence dictates we should at least consider using low rates for long-term discounting.

Any of these numbers, though, focuses on risk-free rates. It's what you get for sure, no matter what happens to the world around you. The entire point of worrying about climate change was the unpredictability of it all. Should each potential future scenario then be discounted at the same rate?

CLIMATE FINANCE

We could do a lot worse than look to finance for cues on how to discount the uncertain future. When in doubt, ask those who actually stand to lose money from their decisions. Bob Litterman has spent most of his career at Goldman Sachs, serving in the late 1990s as head of firm-wide risk management before moving to asset management. He has lived and breathed the Capital Asset Pricing Model (CAPM) his entire life. In fact, he developed a variant, the Black-Litterman Global Asset Allocation Model. It allows for asset pricing decisions without making assumptions about expected returns for each type of asset. The less we know, the better his model performs against the standard version.

Litterman doesn't mince words when talking about the way some climate economists look at discounting: "They argue for high discount rates because of high opportunity costs for money, some estimate of the market return on capital. Waa!? If that was the sole criterion, why would anyone invest in bonds, ever? We learned in finance why this is wrong sometime in the 1960s."

In fact, CAPM was developed in the 1960s, and it has one simple premise: if an investment's fortunes rise in tough economic times, it will be more valuable than an identical investment that rises and falls with the market. That link between its returns and the returns of the market is called "beta." Low beta implies a weak link. A weak link increases the value of the investment. That, in a sense, is the only reason why anyone would invest in government bonds that pay 1 or 2 percent rather than earn an expected 7 percent in the stock market. A high overall return is good, but it's much less valuable if it pays off only in already good economic times, a high beta. U.S. government bonds have a low expected return, but they also have a low beta. Many balanced investment portfolios include at least some bonds as a rainy-day fund for tough economic times.

This comes with stark implications for climate policy: if we somehow think that climate damages are small and will be worse when the economy is strong, discount rates should be higher.
It will be fine to live through episodes of extreme weather events, say, because the storms won't be all that bad, and they'll hit only when GDP is high. This is one view supporting high discount rates. On the other hand, if we believe that climate damages will be large and go hand in hand with times when the economy is doing poorly, discount rates should be low. That might be a world in which climate change implies more extreme heat days, which in turn decrease labor productivity and, thus, GDP. Or more directly, if there is around a 10 percent chance of a climate catastrophe that crashes economies and alters life as we know it, unless we change course, Finance 101 tells us that the discount rate on those damages far into the future should be low, maybe even lower than the 1 to 2 percent rate applied to assessing risk-free bonds. How low? Nobody knows for sure, but this is where we need to take a quick detour into Finance 102.

WALL STREET PUZZLES

Despite all its sophistication, modern finance leaves us with a number of fundamental puzzles. The "equity premium puzzle" tops that list. Investing in U.S. stocks returns, on average, 5 percent more than investing in U.S. government short-term bonds. This simple fact has haunted economists for decades. Standard economic models simply can't replicate these basic facts. People ought not to be so risk-averse as to warrant that large a premium for investing in risky stocks. Yet they are. What gives?

Daily stock prices are among the most well-known facts. Newspapers print them. Comprehensive databases are freely available online. So questioning the underlying data won't get us far. It's also tough to see how we could blame it on laziness, biases, or some other human quirks that may or may not contribute to the puzzle. There's a lot of money at stake, and most of it is managed by professionals, who should know better. The most natural place to look for the culprit then is economic theory itself. We know that each model simplifies reality. Do the standard models simplify too much for their own good?

It turns out that introducing potentially catastrophic risks to the standard models explains and even reverses the equity premium puzzle: fat tails in action. Market outcomes aren't defined by the average fluctuations on a typical day. They are much more defined by what happens during extreme events, the kinds of things that should never happen but have since given us at least a week's worth of "black" days in the past hundred and fifty years: from Black Monday in October 1987 to Black Friday in September 1869 to an entire Black Week in October 2008. Taking these sorts of catastrophic risks more seriously justifies large equity premiums, the amounts of money investors need to be paid to take the risk.

The same holds for climate risks. Potentially catastrophic climate events demand a "risk premium." The higher the chance of these catastrophes, the more we ought to seek out the climate equivalent of risk-free government bonds: avoiding carbon emissions in the first place.

There's one more complicating factor to this story, and it comes back to discount rates and the all-important beta. The reason anyone invests in government bonds is because of their low beta, which makes them pay off in all states of the world, including the bad. Standard asset pricing models value these investments by assigning a low, sometimes even negative discount rate.
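The beta logic running through this section can be written down in one line of CAPM arithmetic. The sketch below is illustrative: the risk-free rate, the equity premium, and especially the "climate beta" values are assumptions chosen to show how a negative beta (an investment, or an avoided damage, that pays off precisely when the economy does badly) pushes the appropriate discount rate below the risk-free rate.

```python
RISK_FREE = 0.02       # assumed risk-free rate (roughly the TIPS range in the text)
EQUITY_PREMIUM = 0.05  # assumed equity premium over the risk-free rate

def capm_discount_rate(beta):
    """CAPM: required return = risk-free rate + beta x equity premium."""
    return RISK_FREE + beta * EQUITY_PREMIUM

for label, beta in [("stock-market-like project (beta ~ 1)", 1.0),
                    ("uncorrelated project (beta ~ 0)", 0.0),
                    ("insurance-like project (beta ~ -0.5)", -0.5)]:
    print(f"{label}: discount at {capm_discount_rate(beta):.1%}")
# 7.0%, 2.0%, and -0.5% respectively. If avoiding climate damages pays off
# mostly in bad economic states (a negative climate beta), the right discount
# rate can sit at or below the risk-free rate, possibly even below zero.
```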
Negative rates come into play, for example, with contrarian short sellers who earn more when the stock market is lower. That same insurance thinking ought to apply to climate damages, or rather to avoiding them. Bob Litterman, drawing the link to climate: "If the risk premium is large enough, then the insurance benefits could even require a negative discount rate and such a high current price of emissions that the price would actually be expected to drop over time as the problem diminishes and uncertainty is resolved."

The point seems blindingly obvious from an asset pricing perspective. It will come as a surprise to most focused on the proper way of discounting in the climate arena, where tying discount rates to "opportunity costs" or expected "market returns" is common practice, and where Lord Nicholas Stern's 1.4 percent has long been considered the lower bound of what is an acceptable rate. But there is nothing magical about 1.4 percent, or about 1 percent. Even 0 percent doesn't need to be a lower bound in theory. It isn't to those shorting Wall Street. If investing in a project pays more in the hardest of economic times, the proper discount rate may need to be lower than the lowest risk-free rate. We are not at all sure this is the relevant case for climate change, but it is a definite possibility, representing yet another big uncertainty.

With a large chance of catastrophe (a 10 percent chance of eventually hitting 6°C, or 11°F, say) and with that catastrophe associated with large economic costs (10 or 30 percent, or even much more, of global economic output), the proper treatment of these climate damages is to discount them at a rate maybe even lower than the risk-free rate of government bonds. As always, it's tough to pick a single number, but it makes it harder to argue for discount rates much above 1 or 2 percent.

TIMING IS EVERYTHING

We keep saying "eventual" in connection with warming of 6°C (11°F) and other such extreme scenarios, because any of these catastrophic temperature increases would play out over many decades and centuries. Large global average temperature increases won't happen tomorrow; nor would catastrophe strike overnight, at least not based on this calculation. In fact, the higher the final temperature increase, and the higher the chance of ultimate catastrophe, the longer it will take for both to materialize. That points to one of the more profound characteristics of climate change: its long-term nature. But it clearly doesn't mean that we can relax for the time being.

If a civilization-as-we-know-it-altering asteroid were hurtling toward Earth, scheduled to hit a decade hence, and it had, say, a 5 percent chance of striking the planet, we would surely pull out all the stops to try to deflect its path. If we knew that same asteroid were hurtling toward Earth a century hence, we may spend a few more years arguing about the precise course of action, but here's what we wouldn't do: we wouldn't say that we should be able to solve the problem in at most a decade, so we can just sit back and relax for another 90 years. Nor would we try to bank on the fact that technologies will be that much better in 90 years, so we can probably do nothing for 91 or 92 years and we'd still be fine. We'd act, and soon.
Never mind that technologies will be getting better in the next 90 years, and never mind that we may find out more about the asteroid's precise path over the next 90 years that may be able to tell us that the chance of it hitting Earth is "only" 4 percent rather than the 5 percent we had assumed all along. That last point, increased certainty around the final impacts, is precisely where climate change has proven so vexing. Our estimate of the range of climate sensitivity isn't any more precise today than it was over three decades ago. And the chance of eventual climate catastrophe isn't 5 percent; our rough calculation based on IEA projections shows that it's likely closer to 10 percent or even more.

WHAT'S YOUR NUMBER?

Climate change is beset with deep-seated uncertainties on top of deep-seated uncertainties on top of still more deep-seated uncertainties. And that's just for going from emissions to concentrations to final temperatures. Further uncertainties prevent us from simply translating temperatures into economic damages, and none of that yet clarifies the uncertainties around the correct discount rates to calculate the optimal carbon price today. In each of these steps, though, one thing is clear: because the extreme downside is so threatening, the burden of proof ought to be on those who argue that fat tails don't matter, that possible damages are low, and that discount rates ought to be high.

As little as we know about many of these uncertainties, we do know that the chance of eventual catastrophic warming of 6°C (11°F) or more isn't zero. It's slightly greater than around 10 percent under our conservative calibration. Associated damages are anyone's guess, but we can only consider the implied "guess" of 10 percent of global economic output ventured by Bill Nordhaus's DICE model a lower bound. Following the same, admittedly imperfect logic could yield estimates anywhere from 10 percent to 30 percent or even well beyond. We don't know where in that range the true number is. We are pretty sure it's not less than 10 percent, and we do know that no one else knows the true number either. The most relevant question isn't whether expected damages at 6°C (11°F) are 10 or 30 percent of global economic output. The question ought to be: what is the full distribution of damages, and what is the chance of significant economic collapse?

That leaves discounting, where at least we know that looking for an expected market return on capital to arrive at a discount rate of, say, 4 percent may be turning a blind eye on decades of asset pricing theory and practice. If we omit the rosy scenario where climate damages are small and will be worse when the economy is strong, we are looking at much lower rates than are currently bandied about. We don't know whether the right rate should be 2 percent, 1 percent, or even below. There may not be a single climate beta, the link between climate damages and the general health of the economy, to justify using any one particular rate. But we can be pretty sure that the presence of big uncertainties around high final temperatures and catastrophic damages should drive discount rates down, not up. A rate of 2 percent might be our estimate for damages fifty years hence, and whatever rate it is, it ought to decline over time.

Where does all of that leave us? First, with the realization that it's easy to criticize. It's tougher to come up with a constructive alternative.
Table 3.2, showing actual climate damages, is mostly blank for a reason, and it's not for lack of trying. If the question is what single number to use as the optimal price of each ton of carbon dioxide pollution today, the answer should be: at least $40 per ton of carbon dioxide, the U.S. government's current value. We know it's imperfect. We are pretty sure it's an underestimate; we are confident it's not an overestimate. It's also all we have. (And it's a lot higher than the prevailing price in most places that do have a carbon price right now, from California to the European Union. The sole exception is Sweden, where the price is upward of $150. And even there, key industrial sectors are exempt.)

If the next question is how to decide on the proper climate policy, the answer is more complex than our rough benefit-cost analysis suggests. Pricing carbon at $40 a ton is a start, but it's only that. Any benefit-cost analysis relies on a number of assumptions, perhaps too many, to truly come up with one single dollar estimate based on one representative model of something as large and uncertain as climate change. Since we know that fat tails can dominate the final outcome, the decision criterion ought to focus on avoiding the possibility of these kinds of catastrophic damages in the first place. Some call it a "precautionary principle": better safe than sorry. Others call it a variant of "Pascal's Wager": why risk it, if the punishment is eternal damnation? We call it a "Dismal Dilemma": while fat tails can dominate the analysis, how can we know the relevant probabilities of rare extreme scenarios that we have not previously observed and whose dynamics we understand only crudely at best? The true numbers are largely unknown and may simply be unknowable.

In the end, it's risk management, existential risk management. And it comes with an ethical component. Precaution is a prudent stance when uncertainties about catastrophic risks are as dominant as they are here. Benefit-cost analysis is important, but it alone may be inadequate, simply because of the fuzziness involved with analyzing high-temperature impacts. Climate change belongs to a rare category of situations where it's extraordinarily difficult to put meaningful bounds on the extent of possible planetary damages. Focusing on getting precise estimates of the damages associated with eventual global average warming of 4°C (7°F), 5°C (9°F), or 6°C (11°F) misses the point. The appropriate price on carbon is one that will make us comfortable enough to know that we will never get to anything close to 6°C (11°F) and certain eventual catastrophe.

Never, of course, is a strong word, since we know the chance of any of these temperatures happening even based on today's atmospheric concentrations can't be brought to zero. One thing we know for sure is that a greater than 10 percent chance of eventual warming of 6°C (11°F) or more, the end of the human adventure on this planet as we now know it, is too high. And that's the path the planet is on at the moment. With the immense longevity of atmospheric carbon dioxide, "wait and see" would amount to nothing other than willful blindness.

Chapter 4: Willful Blindness

In the Opinion of Two Economists, Musing on: Willful Blindness. Seven Billion People and Untold Future Generations v. Those Standing in the Way of Sensible Climate Action. On Writ of Certiorari to the Court of Public Opinion.

The willful blindness doctrine is old news in criminal defense cases.
Just because you turned your back when your partner pulled the gun on the bank teller doesn't mean you won't be held accountable for aiding in the robbery. That doctrine has since reached beyond the sphere of criminal law. The U.S. Supreme Court, for example, has applied the principle to patents. It took up a case about an "innovative deep fryer": Global-Tech Appliances, Inc., et al. v. SEB S.A., 563 U.S. ___, No. 10-6 (May 31, 2011). In the Supreme Court's words: "[P]ersons who know enough to blind themselves to direct proof of critical facts in effect have actual knowledge of those facts." More specifically, the Court further adds two basic requirements for willful blindness that "all agree on": "First, the defendant must subjectively believe that there is a high probability that a fact exists. Second, the defendant must take deliberate actions to avoid learning of that fact." Ibid.

The key words are "high probability" and "deliberate action." Little is ever certain. Demonstrating willful blindness requires showing that the defendant must have been knowledgeable enough to realize, with high probability, that something's happening. And then, said defendant must have taken deliberate actions to avoid acting on that knowledge.

Shift from criminal defense to climate change, and from Supreme Court legalese to the colloquial interpretation of what it means to be "willfully blind." Climate change is bad. Not acting on it makes it worse. After decades of science and years of public discussion saying as much, there's no other way than to call those opposing, or outright denying, this reality "willfully blind." Some may simply be "blind." But for many, motivated reasoning leads to conclusions in direct conflict with science.

It's tempting to stop there. The debate around whether to act should be over. To some extent, even the debate around how to act is over. Yes, there are some academic and also practical disagreements around what to do: whether to tax or cap carbon, how to put either into practice, or how to approximate a price on carbon using other policies like fuel-economy standards for cars or carbon pollution standards for power plants. Some of these policies, like performance standards, may well have merit on their own, but they do not add up to enough. The ultimate goal is clear: price carbon. None of this is a secret. Dare we say that anyone who pretends otherwise is, once again, willfully blind to reality?

The question we are left with is: how high should that price on carbon be? This is where there is room for actual discussion, academic and otherwise. To be clear, that discussion cannot, must not, excuse inaction now. We do know for a fact that the prevailing current price of close to zero in most countries, with some notable exceptions, is much too low. The central estimate of the U.S. government's comprehensive review of the "social cost of carbon" is a price of around $40 per ton of carbon dioxide. That $40 per ton, however, can be only the starting point. Most of what we know about the science seems to point to the fact that the number should be higher. Most everything we don't know pushes it higher still. A serious look at the host of uncertainties involved in calculations that arrive at the $40 figure makes that clear. The past chapter's "fat tails" may well dominate all else. So how high should it go?
LESS THAN INFINITY, INFINITELY LESS

If the risk of catastrophe is sufficiently high, and the catastrophe itself is sufficiently bad, it's tempting to conclude that we ought to avoid it at all cost. Don't stop at $40, or $400, or even at $4,000 per ton of carbon dioxide. If the catastrophe is infinitely bad, the optimal cost of each ton of carbon dioxide pollution beyond any threshold that triggers the catastrophe would be infinity as well. This is true even if there's, say, "only" a 10 percent chance of catastrophe. Infinity times any number is still infinity. The precise mathematical verdict then would be to spend all the money in the world to avoid that outcome. Or in practical terms: ban the intentional release of carbon dioxide (the burning of coal, oil, and gas, as well as deforestation) altogether. Stop driving cars with an internal combustion engine. Ground all commercial planes. Turn off fossil-fueled power plants. Stop modern life as we know it.

That can't be the right policy prescription. For one, even if we managed to decrease the chance of climate catastrophe from 10 percent to perhaps 1 percent, there's still an infinite cost to be avoided: 1 percent times infinity is still infinity. Standard benefit-cost analysis falls apart as soon as we introduce the term "catastrophe" and describe it as infinitely costly.

Little, of course, is ever truly infinitely costly. Not even death is. The "value of a statistical life" may appear as heartless a calculation as the name implies, but the logic is unassailable. When making small, everyday decisions like whether to wear a seatbelt, or bigger ones like which job to get, people don't value their lives infinitely much. If your chance of dying on the job in one profession versus another is twice as high, wages may be appropriately higher, but not infinitely so. It may be a bit of a stretch to apply the value of an individual statistical life to a planetary catastrophe, but the analogy holds. It's not hard to see that eventual global warming of 6°C (11°F) would be nothing short of catastrophic, destroying nature and civilization as we now know them. That doesn't yet mean we should throw infinite amounts of money at solving the problem. We ought to find a sensible balance between overreaction and inexcusable inaction.

One possible guide is to look at the risk of catastrophe itself. Shortly after 9/11, Vice President Dick Cheney postulated that "if there's a one percent chance that Pakistani scientists are helping [Al Qaeda] build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response." That, in fact, is a false equivalency. One percent isn't certainty. Instead, one ought to consider as a crucial metric the probability of the event itself. An existential risk with a truly tiny chance of occurring deserves less of our attention than one with a 10 percent or even 1 percent probability. We would be lucky if the probability of existential climate risk were only 1 percent, as in Mr. Cheney's Al Qaeda scenario. We have seen with our own rather conservative calculation that the risk of having eventual average global temperatures go up by more than 6°C (11°F) is about 10 percent, ten times higher.

By now we are running up against another possible constraint: climate change isn't the only potential catastrophe hitting the planet. What if an asteroid struck the planet and wiped out civilization long before the worst effects of climate change began to show? Or what about a pandemic?
Or the chance of nuclear terrorism? Or biotechnology, nanotechnology, or robots gone amok? How should we allocate limited amounts of money to each of these existential risks?

WORST-CASE SCENARIOS

Opinions differ on what should rightly be called an "existential risk" or planetary-scale "catastrophe." Some include nuclear accidents or terrorism. Others insist only nuclear war, or at least a large-scale nuclear attack, reaches dimensions worthy of the "global" label. There are half a dozen other candidates that seem to make it onto various lists of the worst of the worst. As we shall soon see, it's tough to come up with a clear order. In addition to climate change, let's consider, in alphabetical order, asteroids, biotechnology, nanotechnology, nukes, pandemics, robots, and "strangelets."

That might strike some as a rather short list. Aren't there hundreds or thousands of potential risks? One could imagine seemingly countless ways to die in a traffic accident alone. That's surely the case. But there's an important difference. While traffic deaths are tragic on an individual level, they are hardly catastrophic as a class. Every entry on our list has the potential to wipe out civilization as we know it. All of the worst-case scenarios are global. All are highly impactful and mostly irreversible on human timescales. Most are highly uncertain.

Only two, asteroids and climate change, allow us to point to history as evidence of the enormity of the problem. For asteroids, go back 65 million years to the one that wiped out the dinosaurs. For climate, go back a bit over three million years to find today's concentrations of carbon dioxide in the atmosphere and sea levels up to 20 meters (66 feet) higher than today.

In the final analysis, climate change is far from the only potential catastrophe humanity ought to be worrying about. Others, too, deserve more attention, and funding. Though that doesn't apply to all. Strangelets are something straight out of science fiction: stable strange matter with the potential of swallowing the Earth in a fraction of a second. They have never been observed. They may be theoretically impossible. If they are possible, though, there may be a chance that large heavy-ion colliders like those at CERN could create them. That has prompted research teams to calculate the likelihood of a strangelet actually happening. Their verdict: negligible. Concrete numbers hover between 0.0000002 percent and 0.002 percent. That's not zero, but it might as well be. It's certainly nowhere near the 10 percent chance of ultimate climate catastrophe on our current path.

So yes, swallowing the entire planet would be the ultimate bad: clearly worse, say, than melting the poles and raising sea levels by several meters. Stranger things have happened. But strangelets very, very, very likely won't. Probabilities matter. Those problems that are truly unlikely should not command a lot of society's attention. Quantum physics tells us there's an infinitesimally small chance of our planet jumping off its current path around the sun and shooting out into space. It's most definitely not a possibility humanity should spend any time worrying about. Strangelets are slightly different in that they could, in fact, be a human creation. But that still doesn't mean they deserve society's attention. The chance is simply too small. If we could rank worst-case scenarios by how likely they are to occur, we'd have taken a huge step forward.
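A minimal Python sketch of the arithmetic behind that ranking, and of why it breaks down for an "infinitely bad" catastrophe. The probabilities (about 10 percent for a greater-than-6°C outcome, 0.0000002 to 0.002 percent for strangelets) come from the text; the damage figures are purely hypothetical placeholders, not estimates from the chapter.

import math

# Hypothetical, illustrative damages (in trillions of dollars); NOT estimates from the text.
risks = {
    "climate catastrophe (>6 deg C)": {"probability": 0.10,        "damage": 500.0},
    "strangelet event (upper bound)": {"probability": 0.002 / 100, "damage": 1000.0},
    "strangelet event (lower bound)": {"probability": 2e-9,        "damage": 1000.0},
}

# With finite damages, expected cost = probability x damage, and risks can be ranked.
for name, r in sorted(risks.items(), key=lambda kv: -kv[1]["probability"] * kv[1]["damage"]):
    expected = r["probability"] * r["damage"]
    print(f"{name:32s} expected cost ~ ${expected:,.6f} trillion")

# With an "infinitely bad" catastrophe, the ranking collapses: any nonzero probability
# times infinity is infinity, so a 10 percent risk and a 0.0000002 percent risk look identical.
print(0.10 * math.inf == 2e-9 * math.inf)   # True: both are infinite; benefit-cost analysis falls apart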
If the chance of a strangelet is so small as to be ignorable, probabilities alone might point to where to focus. But that's not all. The size of the impact matters, too. So does the potential to respond.

Asteroids come in various shapes and sizes. We began this book by looking at the one that exploded above Chelyabinsk Oblast in February 2013. The impact injured some 1,500 people and caused limited damage to buildings. We shouldn't wish for more of these impacts to happen just for the spectacular footage, but we'd be hard-pressed to call an asteroid of that size a "worst-case scenario." It's not. NASA's attempts at cataloguing and defending against objects from space aim at much larger asteroids, the ones that come in civilization-destroying sizes. Astronomers may have been underestimating the likelihood of Chelyabinsk-sized asteroids all along. That's a problem that needs to be rectified, but it's not a problem that will wipe out civilization. If we estimated the likelihood of a much larger impact incorrectly, the consequences could be significantly more painful. Luckily, when it comes to asteroids, there's another feature working for us. Science should be able to observe, catalogue, and divert every last one of these large asteroids, if sufficient resources are provided. That's a big if, but not an insurmountable one: a National Academy study puts the cost at $2 to 3 billion and ten years' research to launch an actual test of an asteroid deflection technology. That's much more than we are spending at the moment, but the decision seems rather easy: spend the money; solve the problem; move on.

Now we are down to biotechnology, nanotechnology, nukes, pandemics, and robots as contenders against which climate change needs to be measured. Or does it? One response to any list like that would be to say that each such problem deserves our (appropriate) attention, independently of what we do with any of the others. If there's more than one existential risk facing the planet, we ought to consider and address each in turn. A typical homeowner's insurance package will protect against fire damage. If your home sits near a fault line, you may also want to buy earthquake insurance. Those with additional flood risk buy flood insurance as well. And so on. The same should go for catastrophe policy.

That logic has its limits. If catastrophe policies were to eat up all the resources we have, we'd clearly have to pick and choose. But we don't seem to be anywhere close. A first step, then, should always be to turn to benefit-cost analysis.
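As an illustration of the kind of benefit-cost arithmetic that last step points to, here is a minimal sketch. The program cost of roughly $2 to 3 billion is the National Academy figure quoted above; the annual impact probability, damage figure, and planning horizon are hypothetical placeholders, chosen only to show how the comparison works, and the sketch ignores discounting entirely.

# Minimal benefit-cost sketch for an asteroid-deflection program.
# Program cost (~$2-3 billion over ten years) is from the National Academy figure above;
# the impact probability, damage, and horizon below are ASSUMED placeholders.

program_cost = 3e9                 # upper end of the quoted cost, in dollars
annual_impact_probability = 1e-7   # assumed chance per year of a civilization-scale impact
damage_if_impact = 1e15            # assumed damage in dollars (order of a decade of global GDP)
horizon_years = 100                # assumed planning horizon, ignoring discounting for simplicity

expected_avoided_damage = annual_impact_probability * damage_if_impact * horizon_years
print(f"Expected avoided damage over {horizon_years} years: ${expected_avoided_damage:,.0f}")
print(f"Program cost: ${program_cost:,.0f}")
print(f"Worth it? {expected_avoided_damage > program_cost}")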
