Climate Change: Do We Know Enough for Policy Action?

Stephen H. Schneider
Stanford University, Stanford, California, USA

This paper appeared in the journal Science and Engineering Ethics, Volume 12, No. 4, 2006, pp. 607-636, publisher: Opragen Publications; www.opragen.co.uk. An earlier version of this paper was presented at a conference on “Issues of Risk and Responsibility in Contemporary Engineering and Science: French and U.S. Perspectives,” organized by the France-Stanford Center for Interdisciplinary Studies and held at Stanford University, April 7-8, 2003.

Keywords: climate change; “dangerous” climate change; risk management; climate policy

ABSTRACT: The climate change problem must be thought of in terms of risk, not certainty. There are many well-established elements of the problem that carry considerable confidence, whereas some aspects are speculative. Therefore, the climate problem emerges not simply as a normal science research issue, but as a risk management policy debate as well. Descriptive science entails using empirical and theoretical methods to quantify the two factors that go into risk assessment: “What can happen?” and “What are the odds?” (Probability x Consequences). Policymakers should, in turn, take that information and use it to make value judgments about what is safe, what is dangerous, and what is fair. To make these judgments, policymakers need to know the probabilities that experts assign to various possible outcomes in order to make risk management decisions to hedge against unsafe, dangerous and unfair outcomes. The climate debate needs to be reframed away from absolute costs—or benefits—into relative delay times to achieve specific caps or to avoid crossing specific agreed “dangerous” climate change thresholds. Even in the most optimistic scenarios, CO2 will stabilize at a concentration much higher than today’s, and temperature will rise accordingly. Sea level rise from thermal expansion and the melting of polar ice will take even longer to play out, but what is most problematic is that how we handle our emissions now and in the next five decades preconditions the sustainability of the next millennium.

We must think of climate change in terms of risk, not certainty.

Let me begin by saying that we are not talking primarily about certainties, but rather about risks. The ozone problem, the climate problem, and, in fact, almost all interesting socio-technical problems are filled with deep uncertainties: uncertainties that are not resolved today and may not be resolved to a high degree of confidence before we have to make decisions regarding how to deal with their implications. They often involve very strong and opposite stakeholder interests, and they involve high stakes. In fact, sociologists Funtowicz and Ravetz1 have called such problems examples of “post-normal science.” When involved in what Thomas Kuhn calls “normal science,”2 we scientists go to our labs, do our usual measurements, calculate our usual statistics, build our usual models, and proceed within a particular well-established paradigm. Post-normal science, on the other hand, acknowledges that while we’re doing our normal science, some groups want or need to know the answers well before normal science has resolved the deep inherent uncertainties surrounding the problem at hand.
They have a stake in the outcome and want some way of dealing with the vast array of uncertainties, which, by the way, are not all equal in the degree of confidence they carry. There are many components of the climate problem that are well-established—many aspects that we have considerable confidence in—and then there are those that are speculative, and they get mixed together in the media and the political debate. This mixing together of aspects which carry varying degrees of confidence is too often done on purpose; proponents of either side of the climate change debate (i.e., ignore climate change versus stop it cold) deliberately select information out of context to support ideological positions and their or their clients’ interests.

The climate change debate—particularly its policy components—falls clearly into the post-normal science characterization and will likely remain there for decades, which is the minimum amount of time it will take to resolve some of the larger remaining uncertainties like climate sensitivity levels or the likelihood of abrupt non-linear events like a shutoff of the Gulf Stream in the high North Atlantic. In these situations, normal scientific endeavors are distorted by the salience of the problem and the political use—and abuse—of each incremental new result. So, the climate problem emerges not simply as a normal science research issue, but as a risk management policy debate as well.

Risk is classically defined as ‘probability x consequences’. We need both factors.

Descriptive science, what we like to call our “objective” purview, entails using empirical and theoretical methods to come up with the two factors that go into risk assessment: a) What can happen? and b) What are the odds? Both are essential. But then, it’s not as simple as it sounds. It’s very easy to assess what can happen and what the odds of it happening are when we’re talking about rolling dice, playing cards, or flipping coins. Those activities all involve objective probabilities from which frequencies can be derived to determine the likelihood of any specific outcome. However, in climate change, we’re generally talking about future events, and there are no empirical methods that we can use to objectively determine what will happen in the future. Our empirical data cover only the present and the past, and therefore, the best way we can simulate the future is by constructing a systems theory—built, of course, by aggregating empirically derived sub-models. However, the full integrated systems model is not directly verifiable before the fact (i.e., until the future happens and proves it right or wrong), and thus only subjective methods are available. The systems model can be evaluated based on its accuracy in predicting certain (already-known) outcomes—like its ability to simulate the climatic cooling following an explosive volcanic eruption (for a climate model) or the effect of a price shock on the consumption of oil or the rate at which the price rise might have induced technological efficiency (for an economic model). But, these surrogate whole-system model “verification” exercises are still not fully objective since they rely on structural assumptions about future conditions and processes—and those are necessarily subjective, even if they’re expert-based and built initially on empirical work. The degree of confidence we may assign to any assessed risk is always subjective, since probabilities about future events necessarily carry some subjectivity.
That doesn’t mean it is not an expert-driven assessment, but it is still subjective. So, the big question we’re left with is: What probabilities and from whom?

Then, there are the normative judgments, or the value judgments: What is safe? What is dangerous? The 1992 UN Framework Convention on Climate Change (UNFCCC), which was signed by the senior Bush and the leaders of 166 other countries and entered into force in 1994 (currently the UNFCCC has been ratified by 189 nations), essentially stated that it is the job of the Framework Convention to achieve “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system”3—although nobody knows precisely what that means! “Dangerous” is a value judgment that depends upon the assessment of the probabilities and consequences we just discussed. We scientists can provide a range of scenarios and even assign subjective likelihoods and confidence levels to them, but it’s up to policymakers to decide what risks are acceptable—what is dangerous and should be avoided and what course of action should be taken or not taken.

The other major question in the climate change debate is: What is fair? If you’re going to do a cost-benefit analysis to determine the least expensive way to get the maximum amount of climate abatement, it may be that in the “one dollar, one vote” world that cost-benefit methods typically imply, some action—passive adaptation, for example—might be cheapest. But here’s the dilemma: A rich country that has historically produced large emissions of greenhouse gases (GHGs) may likely find it cheaper (at least for a few decades before the impacts become too severe for cost-effective adaptation) to adapt than to mitigate by retiring, before their useful economic lifetime is over, a few coal-burning power plants, for example. On the other hand, a poorer country in the hotter equatorial area with fewer resources (and thus less adaptive capacity) might be both more harmed by the climate change and also unable to pay for or otherwise deal with the damages because it lacks the same degree of adaptive capacity as the richer country. Thus, adaptation might seem cheaper and more effective in a cost-benefit analysis that uses high discount rates and aggregates all costs and benefits into equivalent dollars (since the rich country, with a much larger share of world GDP, will be able to adapt more easily—the 2003 European heat wave and Hurricane Katrina notwithstanding), but that policy may not be fair in its distribution across rich and poor countries, which leads to alternative political views of what should be done and who should pay to abate risks. These equity/efficiency trade-offs are inherent in the ozone problem as well; they’re just multiplied by a larger factor in dealing with climate change.

Do the media accounts accurately represent the scientific debate?

Contrast the media debate with the science debate. On the one extreme, in the media, we can have the kind of “high-quality” work, featured from time to time in supermarket tabloids, which proclaims, “Nostradamus Predicts Hottest Summer in History,” with a caricature of the French Renaissance “seer.” For those who may laugh, remember how many people look at these stories compared with the number who have seen the entire body of work that climatologists have written.
What I do like about Nostradamus, as he appears in these stories, is that he is often shown with what all seers must have: a crystal ball. I’m very jealous because his cartoon crystal ball is clear, unlike most of ours, depicted metaphorically below.

Photograph: S.H. Schneider

Climatologists also have to make forecasts about the future state of the world, and we do it by piecing together data utilizing all the tools that are available to us to construct climate models. We use normal science, that is, empirical methods, to try to build our understanding of many sub-disciplines. We try to determine what types of technologies will exist in the future and how much of each of several greenhouse gases will be emitted, and then we have to figure out what that does to the climate by looking at biogeochemical cycles. Each one of these sub-systems is worked on empirically, and we have hard data and evidence we can use to construct them, but when we put them all together to forecast the future, of course, there is no empiricism about what will happen in 2100, nor is there very clear empiricism about how to test our hazy crystal ball. Therefore, this is an issue that is ripe for people to select their preferred happy or unhappy outcome totally out of context, claim that particular outcome to be the one and only truth, and then find the media and friendly politicians willing to lend an ear—or a voice—to trumpet such “truth.”

So what are the elements we need to look for in our crystal ball exercise of making quantitative future projections? At the outset, it is the amount of GHGs people will throw into the air. A significant part of those emissions comes from vehicles. So, we have to estimate what vehicles people will drive, how far they will drive them, etc. The U.S. emits about 15% more CO2 than it did in 1990, and Sport Utility Vehicles (SUVs) are a good part of the problem. Even so, a Wall Street Journal editorial (March 12, 1998) entitled, “Large Vehicles Are the Solution, Not the Problem,” declares that denying people the opportunity to have free access to SUVs and enjoy the personal safety of driving a heavier vehicle than most other motorists is to fly in the face of individual freedoms. The editorial board uses technical arguments when, in fact, it is taking an ideological position, the position being that defending individual rights is more important than worrying about the collective side-effects of the tailpipes of these “dinosaurs.” The board also makes simple misstatements about safety: The SUV rollover accident rate is actually high enough that the extra cushioning SUVs provide in a two-car collision doesn’t make up for the added rollover risk, to say nothing of the fact that large vehicle drivers endanger the majority of other motorists driving more sensibly sized, less polluting cars. In fact, going gargantuan creates a sort of “tragedy of the commons” by providing people with a perverse incentive to get bigger and heavier as a defensive move to counter the early adopters of big and heavy vehicles—a questionable practice with regard to socially responsible action which is not reflected in the price of SUVs.
Also, SUVs create bigger imported oil balance of payments deficits and even may play a role in having to defend access to oil with massive amounts of blood and expense—none of which is included in the highly subsidized price of these leviathans of the pavement that belong in the Australian outback or on snow-packed high mountain roads, not clogging city streets or commuter freeways in regions with temperate climates.

Emissions and climate scenarios.

Now in the US we have even bigger gas guzzlers, road hogs, and vision blockers being thrust on the market: Hummers. I strongly suspect that the Iraq war, in which Hummers were seen in cavalry convoys on the news every night, has been free advertising for these “Hum-Vees.” In fact, there has been a move in the U.S. Congress and parts of the Bush administration that has some of us shaking our heads because it is so transparently absurd: to allow people to take tax deductions for buying Hummers. You pay $55,000 for this oversized non-car, you endanger other people in the streets if you crash into them, you emit obscene amounts of tailpipe emissions per mile, and instead of there being a very high tax to discourage this anti-social behavior, our politicians provide an incentive—a $25,000 tax deduction! In any case, the picture below, which shows graffiti on a Washington, D.C. subway ad, makes the point about as well as the old Simon and Garfunkel lyrics from the Sounds of Silence: “The words of the prophets are written on the subway walls…” The ad shows a Hummer on a snowfield and says, “Does well at the poles.” The editorial graffiti inserts “MELTING” so that it reads, “Does well at melting the poles.” (And this is indeed true.)

Photograph: S.H. Schneider

One way to reduce GHG emissions from vehicles is to buy more efficient cars like my hybrid Honda Civic (pictured here in front of my house in California), which gets about twice the mileage of the regular Civic—and, of all things, it gets a positive incentive from the political world: we received a $2,000 rebate for this efficient vehicle (contrast that to the $25,000 tax deductions for vehicles that are over 6,000 pounds—a textbook example of a “perverse subsidy”). But, even if everyone bought these hybrids, there still would be an increase in vehicular greenhouse gas emissions in the future if people in the world who currently do not have personal transportation vehicles start to join the market in large enough numbers. However, the rise in emissions would be substantially less than if the world continued to follow the Victorian industrial technology route, reproducing gas-guzzlers rather than hybrids, plug-in hybrids or fuel cell powered electric vehicles.

Photograph: Terry Root

What else that might happen in the future must we estimate to project climate change? In order to know what will happen, we have to forecast what kind of energy systems will be in use: Will it be wind energy? People like to say yes. Windmills are a very attractive alternative to fossil fuels. Their cost of construction is now comparable to, if not less than, that of almost every other form of energy. The problem is that they’re an intermittent producer of power, so at the margin, a small number of them is very efficient—you just plug them into the power grid and boost the fraction of electricity produced by renewables. But, if you start to replace more than 5% or 10% of the existing energy system with windmills, then you have to deal with storage and transmission issues.
You begin to find that when you do marginal cost economics, a source like wind energy is fine in the beginning, but when it goes from being a marginal to a non-marginal source of energy, you can’t use the same rules. Non-marginal change is very difficult for many people inside of the cost-benefit world to deal with, yet that’s exactly what we have to do when we play Nostradamus and look into our hazy crystal ball. We have to build systems models that include those cost changes over time. And we have to consider what the policy world that wants to create a reduction in the use of conventional technology does to stimulate private investments to invent better systems or to reduce unit costs via “learning by doing” experience—which some call induced technical change. That’s a major feedback included in virtually no one’s model. Larry Goulder and I4 did some work on this in an energy economy model, but at the moment, it’s in very few economic models for climate change policy work, and even when it is, most, including ours, are fairly simple treatments.

Photograph: S.H. Schneider

At this point, it’s necessary to cite an equation from Yoichi Kaya5,6 to remind us of what we have to do to forecast future levels of greenhouse gases, to say nothing of other gases. So, let’s look at CO2 emissions and break them into four terms. [This is a modification of the Ehrlich and Holdren7 I=PAT population multiplier from 35 years ago.]

[CO2 Emissions](t,x) = Population(t,x) × [GDP/capita](t,x) × [Energy/GDP](t,x) × [Carbon/Energy](t,x)

Emissions at a time in the future, t, and in a region, x (x could be the whole globe, or it could be one place like California), are a product of four things (this is true by definition, as it is an identity): 1) Population at that time and place; 2) The affluence as measured by gross domestic product (GDP) per capita, which is not the only measure of well-being, but it is a typical one, and I don’t know any politician from the left to the right who is against increasing that number in their jurisdiction; 3) Energy per unit GDP, a very important term, also called energy intensity, which is the amount of energy it takes to produce a unit of GDP; and 4) Carbon produced per unit of energy, the so-called carbon intensity.

Now, obviously, which has higher energy intensity: moving heavy logs around in diesel trucks or moving electrons around in the micro-chips of computers? Clearly, the transition of society away from energy-intensive industries like logging and mining, and towards information technology and high-tech products, reduces—and has been reducing for decades at a percent or two per year—this energy intensity number. Since energy intensity is a multiplier for emissions, it’s a very important component of the equation. If you are an organization that builds diesel trucks or logs forests, you don’t like the idea of reducing this number at all. So, what ends up happening is, if we have a political push to reduce energy intensity and emissions at the same time, there will be blocking coalitions created by the people who prefer doing things the old way. They’re very well-organized, and they’ll do everything they can to protect their interests. While reducing energy intensity will create new jobs and new industries and will benefit the public at large, these beneficiaries are not generally aware of their potential good fortune and thus are not yet politically organized.
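Because the Kaya relation is a multiplicative identity, it is easy to play with numerically. Below is a minimal sketch: the function just multiplies the four factors, and all of the input values are rough, assumed year-2000 global magnitudes chosen for illustration, not SRES inputs.

```python
# A minimal sketch of the Kaya identity above. All input values are rough,
# assumed year-2000 global magnitudes for illustration, not SRES inputs.

def kaya_emissions(population, gdp_per_capita, energy_per_gdp, carbon_per_energy):
    """CO2 emissions (kgC/yr) as the product of the four Kaya factors."""
    return population * gdp_per_capita * energy_per_gdp * carbon_per_energy

population = 6.1e9         # people (assumed)
gdp_per_capita = 7.0e3     # $ per person per year (assumed)
energy_per_gdp = 9.5e-3    # GJ per $ of GDP: the "energy intensity" (assumed)
carbon_per_energy = 16.0   # kgC per GJ: the "carbon intensity" (assumed)

emissions = kaya_emissions(population, gdp_per_capita, energy_per_gdp, carbon_per_energy)
print(f"{emissions / 1e12:.1f} GtC/yr")  # ~6.5 GtC/yr, roughly the observed value ca. 2000

# Because the identity is multiplicative, a 1%/yr decline in energy intensity
# offsets a 1%/yr rise in GDP per capita, all else equal; that is why the
# politics of the energy intensity term matter as much as the growth terms.
```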
So, we end up with a highly visible group of people who are strongly opposed to dropping that energy intensity number, and they cannot easily be countered by the rest of the members of the general public, who sort of vaguely agree with reducing energy intensity, and say so in opinion polls, but don’t appear to be passionate about it in the way they vote (or contribute to political campaigns), at least not yet.

This brings us to the final technological factor, carbon per unit of energy, or carbon intensity, which is highest for coal and synfuels produced from coal or natural gas and lowest (zero) for nuclear energy and renewables. It’s directly related to fossil fuel energy intensity: The higher the fossil fuel components burned—particularly coal—the higher the carbon emissions. Some have proposed capturing carbon dioxide at the power station and sequestering it underground—a feasible technique in limited quantities—but the extent to which this will safely and effectively store a trillion tons of carbon underground over the rest of the century is still a large unknown—as is the extra cost per kilowatt of electricity generated with a carbon capture and sequestration (CCS) component.

Climatologists have to make projections of all of these factors in the Kaya equation, and then we can forecast what might happen. But, as mentioned, these factors are all interrelated. If you have a large GDP per capita, you have more money available to reduce energy intensity and carbon intensity. So, if you’re going to predict the future, you can’t just look at these factors independently. You actually have to look at them through an economic model and a social model, which accounts—however crudely—for their interactions. Then, our crystal ball only gets hazier. Such scenario building is what the IPCC—the Intergovernmental Panel on Climate Change—has to do. The emissions scenarios group that produced the Special Report on Emissions Scenarios, called SRES,8 has focused on this.

Figure 1: Past and future CO2 Atmospheric Concentrations. Atmospheric CO2 concentration from year 1000 to year 2000 from ice core data and from direct atmospheric measurements over the past few decades. Projections of CO2 concentrations for the period 2000 to 2100 are based on the six illustrative SRES scenarios and IS92a (for comparison with the SAR). (IPCC, 2001: Figure 9-1a.)9

On the x-axis of this graph, we see the past thousand years. The present is the vertical bar labeled “Direct Measurements,” and to the right there’s a vertical zone labeled “Projections” for a hundred years into the future. For 1,000 years, carbon dioxide concentration was about 280 parts per million (ppm). The Industrial Revolution began, the Victorian industrial era followed, and that’s when CO2 started to really take off. Then, we can see the upward trend in the twentieth century. There’s over 30% more CO2 in the atmosphere now than there was in pre-industrial times. This is not a speculation; this is very well-established scientifically. It is also not a speculation that we’re responsible for this buildup. There are masses of good data supporting this: We can track anthropogenic emissions by looking at the ratio of carbon-14 to carbon-12, for example. We notice that the carbon-14 fraction decreases, and this is because fossil fuels have no carbon-14 (as they’ve been in the ground for tens of millions of years) but do emit carbon-12.
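To see the logic of that carbon-14 fingerprint, consider a back-of-envelope isotope mass balance (my own illustrative sketch, in the simplest closed-atmosphere limit; in reality, exchange with the ocean and biosphere replenishes carbon-14 and buffers much of the dilution, so the measured decline is smaller, but it is unmistakably there). Fossil carbon contains essentially no carbon-14, so adding an amount ΔC of fossil CO2 to an atmospheric reservoir C0 with isotope ratio R0 dilutes the ratio:

R_new = R0 × C0 / (C0 + ΔC)

With round numbers, raising CO2 from 280 ppm to 370 ppm purely with fossil carbon would give R_new/R0 = 280/370 ≈ 0.76 in this idealized limit, a dilution with no plausible natural explanation.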
It’s absolutely clear—as certain as one can be in atmospheric science; we know these numbers, and we know that we are responsible for the emissions.

Now, let’s look at the future. What the SRES group did is to come up with six scenarios. In the most severe case, A1FI, CO2 triples by 2100, and for a high-technology variant of it, A1T, CO2 “only” doubles. Let me spend a short time defining what the scenarios are. I cannot explain each one in detail, as I have more important points to make, but I’ll give a quick summary with a few quotes from the report.

The A1 story line is a very popular one. Nobody knows the precise future, and any one scenario strictly has a zero probability because it’s such a narrow line and there are so many possible outcomes. At any rate, the SRES group presents storylines that have scenarios built into them, and they could—and I believe, should, though this is very controversial—have assigned probabilities to each storyline, but I’ll discuss that later. The A1 storyline and scenarios describe a future world of very rapid economic growth, with global population peaking in mid-century and declining thereafter. It assumes the rapid introduction of new and more efficient technologies. Major underlying themes are convergence between regions, capacity-building, increased cultural and social interactions, and a substantial reduction in regional differences in per capita income, but still a globalized world that mobilizes capital to where it’s cheap and does not protect domestic industries just because they exist. Nor does it have any climate prevention policy.

The Emission Scenarios of the Special Report on Emission Scenarios (SRES)

A1: The A1 storyline and scenario family describes a future world of very rapid economic growth, global population that peaks in mid-century and declines thereafter, and the rapid introduction of new and more efficient technologies. Major underlying themes are convergence between regions, capacity building, and increased cultural and social interactions, with a substantial reduction in regional differences in per capita income. The A1 scenario family develops into three groups that describe alternative directions of technological change in the energy system. The three A1 groups are distinguished by their technological emphasis: fossil intensive (A1FI), non-fossil energy sources (A1T), or a balance across all sources (A1B) (where balance is defined as not relying too heavily on one particular energy source, on the assumption that similar improvement rates apply to all energy supply and end use technologies).8

The A1 scenario family develops into three groups that describe alternative directions of technological change, and this is critical for climate. The first one is a fossil-fuel-intensive scenario, that’s A1FI for fossil intensive. Then, there is A1T, a major world effort at high technology and non-fossil energy sources, and this is not necessarily out of concern for abating climate change. This is a storyline about people who don’t like air pollution that causes health problems, and therefore, they switch to more efficient technologies, but not primarily for climate’s sake, although of course, one could argue it could be. And then there’s A1B, a “balance,” which is an average of the two previous scenarios. It doesn’t rely too heavily on one source (fossil fuels) or the other (high technology). This is considered by many people to be the most likely scenario, but again, it isn’t certain.
(Remember the old joke: If you play too much with a crystal ball, soon you’ll be eating glass!)

Then, there’s the A2 storyline, which tells of a heterogeneous world that is self-reliant, concerned with preserving localized entities, and so forth. This tends to lead to emissions that are higher than in all other scenarios except for the A1FI case. I won’t spend time on it, other than to offer a personal opinion that a scenario (A2) that starts out slowly and then accelerates (unlike A1FI, which starts out fast and then slows down in conjunction with learning about lesser emitting technologies) is the least likely to unfold.

A2: The A2 storyline and scenario family describes a very heterogeneous world. The underlying theme is self-reliance and preservation of local identities. Fertility patterns across regions converge very slowly, which results in continuously increasing population. Economic development is primarily regionally oriented, and per capita economic growth and technological change are more fragmented and slower than in other storylines.8

Let me spend a moment on the B1 story line:

B1: The B1 storyline and scenario family describes a convergent world with the same global population (which peaks in mid-century and declines thereafter) as in the A1 storyline but with rapid change in economic structures toward a service and information economy, with reductions in material intensity and the introduction of clean and resource-efficient technologies. The emphasis is on global solutions to economic, social, and environmental sustainability, including improved equity but without additional climate initiatives.8

This storyline and scenario family is one of a converging world with the same global population as A1, peaking in mid-century and declining thereafter, but with rapid change in economic structures towards service and information economies, so the energy intensity goes way down. Reductions in material intensity occur, meaning that the B1 world finds efficient ways of producing economic output with less material, less energy, cleaner resources, and more efficient technologies. And in practice, it goes beyond that. This is a very egalitarian scenario, and it makes the assumption that there will be global solutions to economic, social, and environmental sustainability issues and improved equity everywhere. So B1 is a rich, happy, sustainable world, whereas A1FI is a rich, more hierarchical and polluting world. Neither, however, has an explicit climate policy.

Then there’s B2.

B2: The B2 storyline and scenario family describes a world in which the emphasis is on local solutions to economic, social, and environmental sustainability. It is a world with continuously increasing global population (at a rate lower than in A2), intermediate levels of economic development, and less rapid and more diverse technological change than in the B1 and A1 storylines. Although the scenario is also oriented toward environmental protection and social equity, it focuses on local and regional levels.8

The quote overleaf describes the most important point: subjective probabilities versus “equally sound,” and I’m continuously on the front lines of this battle.

Subjective Probabilities versus “Equally Sound”

An illustrative scenario was chosen for each of the six scenario groups A1B, A1FI, A1T, A2, B1, and B2. Some IPCC authors consider the scenarios “equally sound”, which offers no guidance on which storylines are more or less likely.
A subjective probability assessment of the likelihood of the sets of scenarios would offer policymakers a useful characterization of which scenarios may entail “dangerous” outcomes.

The SRES scenarios do not include additional climate initiatives, which means that no scenarios are included that explicitly assume implementation of the United Nations Framework Convention on Climate Change (i.e., policies to prevent “dangerous anthropogenic interference with the climate system”) or the emission targets of the Kyoto Protocol or any next generation emissions mitigation agreements.8

For example, at an IPCC meeting in Marrakech, some IPCC authors, in fact almost all of those who produced the SRES, defended language in the IPCC to the effect that each of the six scenarios we just discussed (A1B, A1FI, A1T, A2, B1, and B2) were “equally sound.” That’s the phrase that was used. Basically, the IPCC has offered no guidance on which storylines are more or less likely. If you are a politician, and you are an honest politician, which is not always an oxymoron, you would likely ask: “How do I rate the importance of the global warming problem versus housing versus security versus nature reserves versus health research versus clean water?” You need to have some idea not just about what the consequences of global warming could be, but what the odds are of different scenarios actually occurring. What are the probabilities of these events happening?

I think it is ridiculous for the IPCC to consider an egalitarian scenario to be as likely as a globalized storyline where greed and wealth dominate. In fact, in every speech I give, I ask the audience who believes that “egalitarian sharing (B1) is more probable than business as usual/personal gain (A1FI).” Not surprisingly, of the thousands of hands I have seen up in the air in this informal decision analytic elicitation, no more than a half dozen have ever ranked the B1 scenario as more probable in their opinions than the A1FI. I am very confident that a proper decision analysis on this would reveal the same cynical belief system from most—in this world, greed trumps equity, to phrase it as a stark dichotomy. Perhaps IPCC members do not want to admit that the scenario they don’t like as much personally is probably more likely to occur than the one they do like. Also, assigning probabilities to hypothetical outcomes is never objective. But, if scientists don’t take on this job, then it will be up to the world’s politicians to guess about what they think the experts think are the likelihoods of these various scenarios. Without probabilities attached, how can decision-makers assess their risk and create policy initiatives?

I keep confronting my anti-probability colleagues, who continue to say, “You can’t decide it! It’s the future, and it’s a social future and unknowable,” with the following dilemma: “Then why aren’t we all working on preventing the next 10-kilometer asteroid from colliding with the Earth?” This is an unimaginable catastrophe, which has happened several times in the past 600 million years, causing massive death and extinctions. The reason it is not our prime concern is that we already know the odds: about one in 100 million per year. So, when you have odds like that, as opposed to climate change, which I think carries at least a one in two probability of nasty events taking place, you have a completely different situation.
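To make that comparison concrete, here is a crude expected-risk calculation. The two probabilities are the ones quoted above; the consequence scores are purely assumed, order-of-magnitude placeholders of my own, not estimates from the literature:

```python
# Crude expected-risk comparison (probability x consequences). The two
# probabilities come from the text; the loss scores are assumed placeholders.

asteroid_p_per_year = 1e-8   # ~1 in 100 million per year (from the text)
climate_p_century = 0.5      # ~1 in 2 chance of "nasty events" (from the text)

asteroid_loss = 1e4          # assumed: civilization-ending consequences
climate_loss = 1e2           # assumed: severe but smaller-scale consequences

asteroid_risk = asteroid_p_per_year * 100 * asteroid_loss  # over a century
climate_risk = climate_p_century * climate_loss

print(f"asteroid expected risk per century: {asteroid_risk:.2g}")  # 0.01
print(f"climate  expected risk per century: {climate_risk:.2g}")   # 50

# Even granting the asteroid consequences 100x those of climate change, its
# tiny probability leaves its expected risk thousands of times smaller.
```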
Odds matter, and the scientific community, in my opinion, is ducking its responsibility when it refuses to take on this job just because it’s: a) not objective, and b) divisive. I don’t see how we can be really helpful to politicians and other policy-makers who are trying to weigh priorities in a risk-management framework without trying to assign odds—even if those odds have low confidences attached.

Next, I have included a graphical look at the scenarios (see Figure 2, overleaf), in which the IPCC produced a whole family of curves based on the scenarios. The top left graph shows CO2 emissions, and we can see the lines for the fossil intensive world (A1FI), the B1 world, the A1 advanced technology world, and so on. This also shows production of methane, nitrous oxide, and aerosols. Aerosols are important because most are primarily cooling agents, even though newer measurements suggest that black carbon (soot) from diesel engines and other applications, particularly in India, could have a more complicated effect, at least on regional climates. Given some level of emissions, a model is needed to translate those into concentrations; generally, a biogeochemical model is used for this (see the concentration graphs in the right-hand column of Figure 2). We have to figure out concentrations for every type of emission, and then we have to use a climate model to translate the various concentrations into temperature changes. After that, we can use agricultural, forestry, and ecological models to translate temperature changes into impacts, and then use an economic model to determine the costs of those impacts versus the costs of mitigation or adaptation responses to the climate change. This is what I was talking about when I referred to our very hazy crystal ball, which is a combination of multiple sub-system models given the impressive-sounding name of “integrated assessment modeling.”

Figure 2: A1FI, A1T and A1B Emission Scenarios. The different socio-economic assumptions underlying the SRES scenarios result in different levels of future emissions of greenhouse gases and aerosols. These emissions in turn change the concentration of these gases and aerosols in the atmosphere, leading to changed radiative forcing of the climate system. Radiative forcing due to the SRES scenarios results in projected increases in temperature and sea level, which in turn will cause impacts. The SRES scenarios do not include additional climate initiatives and no probabilities of occurrence are assigned. Impacts in turn can affect socio-economic development paths through, for example, adaptation and mitigation. The highlighted boxes along the top of the figure illustrate how the various aspects relate to the integrated assessment framework for considering climate change. (IPCC, 2001: Figure 3-1.)9

Figure 3: Past and Future Variations of the Earth’s Surface Temperature. Variations of the Earth’s surface temperature: years 1000 to 2100. Departures in temperature from the 1990 value in ºC. Over the period 1000 to 1860, observations are shown of variations in average surface temperature of the Northern Hemisphere (corresponding data from the Southern Hemisphere not available) reconstructed from proxy data (tree rings, corals, ice cores, and historical records). The line shows the 50-year average, and the grey region around the line in the proxy data area shows the 95% confidence limit in the annual data.
From the years 1860 to 2000, observations are shown of variations of global and annual averaged surface temperature from the instrumental record. The line shows the decadal average. (Considerable uncertainty accompanies the precise shape of the variations before 1900, but the large warming at the end of the 20th century above previous levels for many centuries is robust in nearly all studies.) Over the period 2000 to 2100, projections are shown of globally averaged surface temperature for the six illustrative SRES scenarios and IS92a as estimated by a model with average climate sensitivity. The grey region marked “several models all SRES envelope” shows the range of results from the full range of 35 SRES scenarios in addition to those from a range of models with different climate sensitivities. (IPCC 2001: Figure 9-1b.)9

When will climate impacts become “dangerous”?

In Figure 3 above, we can see the temperature of the last 1,000 years in the Northern Hemisphere. Each individual year has a gray band of uncertainty around it, so the numbers aren’t exact (and the details are still debated), but when we average over time, we get a very, very slight cooling trend from 1000 A.D. to the mid- to late-1800s. Then, after 1850, there’s a noticeable rise in temperature. The latter is another very well established fact, even if the shape of the curve before the industrial revolution is more uncertain. When people tell you global warming is speculative, they simply don’t know what they’re talking about—or worse, they’re spreading disinformation. The surface of the Earth has warmed up about 0.7 degrees Celsius, plus or minus 0.2, since 1860 or so. It’s in line with many environmental phenomena we’ve been seeing, including the widespread melting of mountain and some continental glaciers, rises in sea level from thermal expansion, and now, a very consistent signal of plant and animal migrations in response to the warming.

Whether or not the Earth is warming is not the debate any more within the knowledgeable climatological community. The debate is: what fraction of that warming is natural and what fraction of that warming is attributable to us? The IPCC thinks that at least half of it can be attributed to us. But, here’s the key: The right side of this graph represents the future. When we get to 2100, we have a very, very large fan of uncertainty. The very “best” scenario, the B1, is at the bottom, and the very “worst” scenario, the A1FI, is at the top. The vertical bars to the far right show the possible temperature ranges associated with each scenario, as calculated by the IPCC using half a dozen different climate models. Notice the bar for the A1FI scenario? It’s very wide and the tallest. So, each SRES scenario accounts for uncertainties in human behavioral characteristics, while the height of each temperature-range bar reflects the joint uncertainty of the fan from the scenario itself combined with the fan from the climate science.

For a long time, it’s been asserted that temperatures would increase between 1.5 and 4.5 degrees Celsius (ºC) for a doubling of CO2. I’ll discuss in a minute why that’s much too restrictive. The gray area at the year 2100 on our graphs gives a range of about 1.5 to 6ºC warming above 1990 levels. That’s the difference between significant change at the lower end of the range and, I would argue, utterly catastrophic change at the high end of the range.
But what are the probabilities of the less severe or more catastrophic possibilities, both shown in this figure? The IPCC didn’t say, so the political or economic world has to guess what they are—my longstanding complaint.10

Let’s return to the issue of climate sensitivities. In Figure 4 overleaf,11 M1 is the climate sensitivity of model number one; M2 corresponds to model number two, and so on. The height of the bars in the last graph we viewed in Figure 3 basically showed some of the differences in ranges between the six IPCC models and gave what I call on Figure 4 a “well-calibrated” range, the top line on the graphic—well-calibrated since it takes into account the sensitivity values of several different models. However, every climate scientist realizes there are physical, biological and chemical components left out of all models. Therefore, if we do a decision-analytic survey, and ask climate experts their opinions on how much the global mean temperature would rise if CO2 doubled, as was done by Granger Morgan and David Keith,12 we’ll get a wider range than we would by just running the models, as represented by the middle line of the figure, the “judged” range of uncertainty. However, cognitive psychologists have suggested that people’s estimations are generally over-confident, so the actual range of outcomes is probably even larger than that of the expert survey. I put “full” in quotes on the figure because we can’t say that we’ve explored the full range of possibilities when some of the outcomes aren’t yet even imaginable. This graphic is just a heuristic to remind us that what we calculate is unlikely to contain a full assessment of uncertainties for still very complex issues like the sensitivity of the climate to a doubling of CO2.

Figure 4: Ranges of Uncertainty. Schematic depiction of the relationship between “well-calibrated” scenarios, the wider range of “judged” uncertainty that might be elicited through decision analytic survey techniques, and the “full” range of uncertainty, which is drawn wider to represent overconfidence in human judgments. M1 to M4 represent scenarios produced by four models (e.g., globally averaged temperature increases from an equilibrium response to doubled CO2 concentrations). This lies within a “full” range of uncertainty that is not fully identified, much less directly quantified, by existing theoretical or empirical evidence. (From Schneider & Kuntz-Duriseti, 2002, based on Jones, 2000.)13

Scientists have tried to estimate climate sensitivity by constructing probability density functions. Figure 5 is a graph from a group at MIT. As I said, the canonical wisdom has been that the temperature will rise between about 1.5 and 4.5ºC if CO2 doubles. Notice that some of the MIT group’s temperature change numbers are well outside this range. I don’t have time to explain their method, but the results are similar to many other such studies.

Figure 5: Climate Sensitivity PDFs. Climate sensitivity probability density functions. (From Forest et al., 2002.)14

Figure 6 overleaf shows another climate sensitivity estimate—from Andronova and Schlesinger.15 They looked at various forcings, and by forcings I mean the number of Watts per square meter that are imposed on the Earth by CO2, methane, aerosols, etc.
There’s a lot of uncertainty in these estimates, so Andronova and Schlesinger used data from the entire spectrum of literature on forcings, and then they tuned their models to get the best fit between the observed surface temperature change and the amount of forcing occurring.

Figure 6: Probability Density Function and Cumulative Density Function. Probability density function (A) and cumulative density function (C). (From Andronova and Schlesinger, 2001.)15

They produced this probability density function for T2x, which is the sensitivity of the climate to a doubling of CO2. The lower graph in this figure shows the cumulative density function (CDF), which I like better because it allows you to easily see percentiles. On the CDF, one can easily read the 10th, 50th, and 90th percentile lines. The 10th percentile line says that there’s a 10% chance that if CO2 doubled, the temperature would eventually warm up about 1.1ºC or less. The 50th percentile line shows there’s about a 50% chance that the average global temperature would warm up 2ºC or less. So far, it’s not so bad, although many think two more degrees would trigger some nasty irreversible effects, including a melting of the Greenland Ice Sheet and bleaching of most coral reefs. Now, let’s look at the 90th percentile. That’s about 6.8ºC of warming, which means there’s a 10% chance that it could be 6.8 or more degrees warmer! 6.8 degrees is about the difference between an ice age and an interglacial period, an absolutely catastrophic magnitude of change, in my opinion, particularly if it happened in a century or so, given that ice age to interglacial transitions have averaged about 5,000 years to fully complete.

Now, let’s put it all together as simply as possible, by looking at the 10th, 50th, and 90th percentile cases for climate sensitivity and two SRES scenarios—the fossil intensive and high technology variants of the rich, but globalized world A1. Let’s start with A1T:

Figure 7: Three Temperature Projections for the A1T Scenario. Three temperature projections using three climate sensitivities for the A1T scenario. (From Schneider, 2003.)16,17

On Figure 7 above, I drew a horizontal line at 3.5ºC warming for two reasons. First, 3.5ºC was the highest estimate of warming given in the IPCC Second Assessment Report in 1995—well below the 5.8ºC maximum value established in the Third Assessment Report in 2001.9 Second, the IPCC Working Group II said that “dangerous” climate change, whatever that means, is much more likely because of nonlinearities in the ecological and economic system, and that these were more likely after “a few degrees” Celsius warming. Nobody knows exactly what “a few degrees” means, so I was conservative and put it at 3.5ºC (although I believe considerable risks are associated with warming well below 3.5ºC too—see Schneider and Mastrandrea, 2005).18 So, the solid shaded area on the graph is a very conservative estimate of the “high danger zone.”

Now, look at the lowest line (10th percentile). If we’re lucky, the 10th percentile climate sensitivity case will happen. This puts the temperature increase at 1.1ºC for CO2 doubling and about a 1.5ºC warming in 2100. We’ve warmed up a little more than 0.7ºC so far, so we’d go up another 0.75 to 1.0ºC. Now, if we get the median (50th percentile) case (the middle line), then we end up warming up another 2-2.5ºC, which is a non-trivial change but is below my very conservative 3½ degree “highly dangerous” line.
But if we’re unlucky and the 90th percentile case turns out to be true (the top line), then the increase in warming will be very large. Notice for the A1T scenario that all three lines have mostly stabilized by 2100. This is because in the A1 technology scenario (A1T), it’s assumed that we’ve invented (or invested) our way out of the fossil fuel emissions era by the end of the century. What about A1FI?

Figure 8: Three Temperature Projections for the A1FI Scenario. Three temperature projections using three climate sensitivities for the A1FI scenario. (From Schneider, 2003.)16,17

None of the A1FI lines stabilize by 2100; even the lowest temperature change scenario (the 10th percentile) is still growing. It still has a positive slope in 2100. The median line is already into “highly dangerous” territory before the end of the century and still growing, and the 90th percentile possibility—I don’t even want to think about that. So, when we’re talking about risk management, you have to ask: What’s the joint probability of the A1FI scenario combined with the 90th percentile temperature case? I don’t know what it is, but it’s at least in the second decimal point of probability (>1%) and maybe the first (>10%). What rational person or society would take that kind of risk with our life support system?

Let’s apply this to a different scenario: a night out at a nice restaurant. If there’s a 10% chance that there are salmonella bacteria in your salmon, are you going to eat it? These are the kinds of questions we must remind people of when they say, “Well, we’re not scientifically sure; let’s not deal with this now until we know more.” When people ask me, “You can’t seriously advocate slowing down the mainstay of an industrial civilization—burning fossil fuels—for only a ten percent chance of truly catastrophic outcomes?” I ask them about salmonella. A ten percent chance is an order of magnitude higher than risks for which we spend fortunes on insurance premiums: fire, earthquake, theft, health, etc. We make such decisions without any hope of knowing precise probabilities in our case, and instead choose to hedge in order to reduce our risks. So why is protecting the climate different? Why should environmental scientists be required to provide a 90% objective probability of severe harm when in most other human endeavors with big risks we hedge on a hunch? This is hypocrisy—and it is common.

When are we going to resolve these uncertainties?

Figures 7 and 8, which I included for climate sensitivity, actually represent increased uncertainty as more research has been done. Those results showing greater variation in temperature change have only come out in the last six years, so the 1.5 to 4.5ºC range that has been confidently used by the scientific community for the last 25 years can’t be used so confidently any more, since 50% of the values calculated by models now fall outside that range. Part of the reason climate sensitivity is so tough to model is the influence of cooling aerosols, which make it difficult to know exactly how sensitive the Earth has been to GHG forcing based on any empirical test. How long will it take to substantially narrow the climate sensitivity range? In my view, decades, though my personal view is that the outliers, below 1ºC or above 5ºC, can be seen as pretty unlikely. But, I can’t rule out a 5% chance (subjective, of course) of each occurring.
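One way to see how such subjective tail odds combine with scenario odds is a toy calculation. The sketch below fits a lognormal distribution to just the 50th and 90th percentiles quoted earlier (2ºC and 6.8ºC); that lognormal is my stand-in, not the published Andronova and Schlesinger curve, and both the 20% scenario probability and the independence assumption are likewise illustrative:

```python
# Toy tail-risk calculation: a lognormal stand-in for a climate sensitivity
# CDF, matched only to the 50th and 90th percentiles quoted in the text.
import math
from statistics import NormalDist

median, p90 = 2.0, 6.8  # degC per CO2 doubling (values quoted in the text)
mu = math.log(median)
sigma = (math.log(p90) - mu) / NormalDist().inv_cdf(0.90)

def prob_sensitivity_above(t2x):
    """P(sensitivity > t2x degC) under the toy lognormal."""
    return 1.0 - NormalDist(mu, sigma).cdf(math.log(t2x))

p_hot = prob_sensitivity_above(6.8)  # ~0.10 by construction
p_a1fi = 0.2                         # assumed subjective scenario probability

# Treating scenario and sensitivity as independent (an assumption):
print(f"joint P(A1FI and 90th-percentile warming) ~ {p_a1fi * p_hot:.0%}")  # ~2%
# Squarely in the "second decimal point of probability" flagged above.
```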
Think about salmonella before you think 5% is a minor risk for something as monumentally dangerous as more than 5ºC global warming in a century!

Observed changes.

We have some clear signs of global warming. Figure 9 (overleaf) shows the decrease in ice area on Mt. Kilimanjaro. The Snows of Kilimanjaro was a good story—but if it were just Kilimanjaro that was being affected, that would have little influence on the world at large. It would, of course, represent a significant impact on the people who depend on the streams flowing from Kilimanjaro or the tourists who come to climb and see the wonder. The neighbors don’t want to have floods during the melting season, nor do they want to be left without any water thereafter. It can also force birds and other creatures to relocate to new habitats—if they can find any. From the perspective of a global cost/benefit analysis, the economic product associated with this locally catastrophic possibility would be hardly noticed. But does that justify ignoring it? This is a deeply normative (ethical), not economic, issue.

Figure 10 (overleaf) shows the distribution of the Baltimore oriole, with the current distribution marked on the left U.S. map. Baltimore is within the range of the oriole, of course, until global warming takes its toll. Al Gore was very interested in the second map on the right, produced by Jeff Price,19 which shows the possible future range of the Baltimore oriole based on its physiology, using a CO2 doubling scenario. Remember, CO2 doubling is the best scenario in SRES. Everything else projects more CO2 than that, and considering there may be no more Baltimore orioles in Baltimore for a “mere” doubling of CO2, this may serve as a good hint about future impacts.

Figure 9: Map of the Retreat of Mt. Kilimanjaro’s Ice Cap Since 1912. This map by Ohio State University researcher Lonnie Thompson20 shows the retreat of Mt. Kilimanjaro’s ice cap since 1912. During this period, more than 80 percent of the mountain’s glaciers were lost. All ice will probably be lost on the mountaintop within 15 years.

Figure 10: Comparison of Current US Distribution of the Baltimore Oriole and Possible Future Distribution with a Doubling of CO2. (Left panel: current Baltimore oriole distribution; right panel: projected distribution with a doubling of CO2.) The current distribution of the Baltimore oriole could change significantly with a doubling of CO2, especially if considered in concert with already well-established stresses such as habitat conversion, pollution, and invasive species. (From Price & Root, 2002.)19

The policy question is: Who cares? How much is it worth to have Baltimore orioles in the city of their namesake? In other words, how do you value nature? Does nature have intrinsic value because it’s there, because we’ve had 2 billion years of the co-evolution of climate and life that brought us the distribution of species we now have? Does one species have the right to want to double its numbers and quadruple its income as fast as possible—even if it’s at the expense of the very existence of half of the other species? Do we really have to wait two generations before we have cheaper and more benign technologies and organizations to prevent this? To me, this is not primarily an economic question of crop yield changes or ecosystem services. This is an ethics question about what we value—including judgments about what changes are “acceptable” and which are “dangerous.”16,21

Should policies be implemented in the face of such great uncertainties?
Space doesn’t permit me to argue very deeply that cost-benefit methods do not work well for the climate policy problem because of the inherent uncertainties in every factor from scenarios to climate sensitivity to estimates of climatic damage and adaptive capacity. In that context no single “optimum” policy is remotely meaningful. All you can do responsibly is produce an “optimum” probability distribution based on probabilistic inputs. And it is even more difficult than that to produce a meaningful optimal policy: cost-benefit analyses aggregate over many regions and sectors in a common numeraire or metric—usually dollars. How does one weigh such diverse metrics as market system losses, species driven to extinction, human lives lost, or inequitable distribution of impacts?22,23 To aggregate these involves normative judgments of the relative importance of each category. Clearly there is no calculus that can do that—it is a political value judgment that must be negotiated across stakeholders and nations.

The proper role of economic analysis is not to attempt to perform some complex cost/benefit analysis to determine “optimal” levels of mitigation effort—that is a chimera. Rather, once a political decision to cap emissions at some level or to tax emissions at another level is made, then economic methods are essential to cost-effectively try to achieve those caps set by ethical, not primarily by economic, judgments. This problem of framing the policy question as mitigation driven by mostly normative criteria but crafting solutions built on economic cost-effectiveness assessments plagues all international meetings trying to find fair and cost-effective compromise solutions to the climate policy problem.

Now, let’s address the question of mitigation costs. Figure 11 is an IPCC graph.9 This figure is very interesting—please do not take the model-dependent numbers given literally, but do take the framework seriously. The bars show cumulative carbon emissions from 1990-2100. If 754 gigatons of carbon are emitted from 1990-2100, we end up with a CO2 concentration of 450 parts per million (ppm), which is about 50% more than it was in pre-industrial times. The next bar shows a doubling of pre-industrial levels, the next one 2 1/3 times, and at the right edge we’re getting closer to tripling CO2. So, what this shows is what the mitigation cost could be if you rely only on economic models. In this graph, the IPCC assumes a carbon tax is levied and that it induces conservation and alternative technologies. The tax also reduces GDP, so the IPCC calculates what the loss in GDP would be; then, elsewhere in the report, they weigh that against whatever the perceived benefit is of preventing climate change. The mitigation costs’ present value over the 21st century could be as high as about $18 trillion, and the current world GDP is something like $40 trillion. In any case, you’ll hear the U.S. administration and many in industry say: “This is just outrageously expensive. This problem may not even happen. We could easily fall within the not very risky zone, well under the shaded area in Figures 7 and 8. Why should we spend half of the world’s GDP to solve a potentially minor problem?” Of course, what they forget is that this cost represents the present value of spending that would be done over the next 100 years.
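It helps to unpack what a present value of that size actually implies year by year. The sketch below spreads the quoted $18 trillion evenly over a century at the 5% discount rate used in Figure 11; the level-spending assumption is mine, purely for illustration:

```python
# Unpacking the $18 trillion present value as level annual spending over
# 100 years at the 5%/yr discount rate of Figure 11 (level spending is an
# assumption made here for illustration only).
pv_cost = 18e12   # $ present value (from the text)
r = 0.05          # discount rate (from the Figure 11 caption)
years = 100

annuity_factor = (1 - (1 + r) ** -years) / r      # ~19.8
annual_spending = pv_cost / annuity_factor
print(f"~${annual_spending / 1e12:.1f} trillion per year")                 # ~$0.9T/yr
print(f"share of ~$40T current world GDP: {annual_spending / 40e12:.1%}")  # ~2.3%

# The scary-sounding $18 trillion is thus on the order of 2% of today's
# world GDP per year, a share that shrinks further as the economy grows.
```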
So now, these very same models that tell you it's going to cost $10 to $20 trillion to solve the climate change problem also project about a 2% per year growth rate in the economy. At 2% per year, the GDP-doubling time is about 35 years, so by the end of the century you'd have about three doublings. That means the global economy would generate something like $320 trillion in 2100 (8 times 40). So, what is $20 trillion relative to $320 trillion? How many years of economic growth would it take to catch up with the no-climate-policy case? Under a decade, and probably only a year or two.

Figure 11: Cost to Stabilize CO2 Concentrations. The mitigation costs (1990 US$, present value discounted at 5% per year for the period 1990 to 2100) of stabilizing CO2 concentrations at 450 to 750 ppmv are calculated using three global models, based on different model-dependent baselines. Avoided impacts of climate change are not included. Cumulative future emissions until the carbon budget ceiling is reached are reported above the bars, in Gt C. (IPCC, 2001: Figure 7-3.)9

Christian Azar and I24 thought about this GDP issue very carefully. We plotted four scenarios: business-as-usual, stabilizing CO2 at present levels (already a 30% increase over pre-industrial concentrations), half a doubling, and a full doubling. Figure 12 shows the associated GDP projections. We did it by assuming a $200/ton, $300/ton, and $400/ton carbon tax on the full-doubling, half-doubling, and stabilize-at-present scenarios, respectively (and no tax on business-as-usual). "It's unimaginable to have hundreds of dollars per ton in carbon taxes!" I can hear two-thirds of the members of Congress and most fossil-fuel producing or consuming businesses cry. Such taxes would be politically impossible in any country in the world right now, but if you actually plot this out, you find that, because of the assumed growth rate in the economy, you end up 500% richer per capita (versus 1990) in 2102 instead of 2100, even if we were to spend tens of trillions of dollars to solve the global warming problem over the next 100 years. So all the "astronomical costs" rhetoric in the political debate is simply not accurate. The costs are fairly trivial in the context of a world expected to be five times richer per capita by 2100. What conventional economic models actually say is that consumption goes up much faster than mitigation costs.

Figure 12: Comparing the Cost of Stabilisation. Global income trajectories under business as usual and in the case of stabilising the atmosphere at 350, 450 and 550 ppm. Observe that we have assumed rather pessimistic estimates of the cost of atmospheric stabilisation (average costs to the economy assumed here are $200/tC for the 550 ppm target, $300/tC for 450 ppm and $400/tC for 350 ppm) and that the environmental benefits (in terms of climate change and reduction of local air pollution) of meeting various stabilisation targets have not been included. (From Azar & Schneider, 2002.)24

In the summer of 2005, George W. Bush went to the Gleneagles economic summit and declared that, had the US signed the Kyoto Protocol, it would have been devastating to the economy. He said this despite the fact that most economic models suggested that Kyoto with trading would be equivalent to only a hundred or so dollars per ton of carbon in "shadow price," and would delay by only a few months the achievement of a 25% increase in personal income a decade or so from now.
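The doubling-time and catch-up arithmetic above is simple enough to check directly. The following Python sketch uses the round numbers from the text; treating the entire mitigation bill as a one-time deduction from 2100 GDP is a crude simplification of my own, but it makes the order of magnitude clear.

```python
import math

GROWTH = 0.02     # assumed annual GDP growth rate
GDP_NOW = 40e12   # ~$40 trillion world GDP today

# Doubling time at 2%/yr
doubling_time = math.log(2) / math.log(1 + GROWTH)
print(f"GDP doubling time: {doubling_time:.0f} years")    # ~35 years

# Roughly three doublings by 2100
gdp_2100 = GDP_NOW * (1 + GROWTH) ** 100
print(f"GDP in 2100: ${gdp_2100 / 1e12:.0f} trillion")    # ~$290-320 trillion

# Delay: years of growth needed to make up a ~$20 trillion mitigation bill,
# treated here as a one-time deduction from 2100 GDP
share = 20e12 / gdp_2100
delay = math.log(1.0 / (1.0 - share)) / math.log(1 + GROWTH)
print(f"Delay in reaching the no-policy wealth level: {delay:.1f} years")
```

Run it and the delay comes out at a few years, consistent with being 500% richer in 2102 instead of 2100.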
The ultimate irony is this: although Bush claimed that Kyoto's supposedly outrageous costs would cripple the economy, and used that as justification for the US pulling out, the gasoline price increases we have since seen in the US—a $1.50 per gallon increase, on average—would take a carbon tax on the order of $250 per ton of carbon to reproduce, and that increase, while not trivial, does not seem to have crippled our economy at all. Indeed, this major increase in gasoline prices primarily produced windfall profits for the oil industry, but was hardly noticeable in the economy's overall expansion. One wonders how such wildly inaccurate statements are allowed to go unchallenged in the press or the political arena.

The climate debate needs to be reframed away from absolute costs—or benefits—and toward relative delay times to achieve specific caps or to avoid crossing specific agreed "dangerous" climate change thresholds. Seen this way, the uncertainties hardly matter: it is so "cheap" to fix the problem relative to economic growth projections that there is no excuse for risking dangerous, irreversible, and destabilizing climate impacts (except for protecting the near-term interests that currently profit from the old ways of dumping in the atmosphere as if it were an unpriced sewer). The latter is not free-market economics but a subsidy to those doing external damage to nature and society—it is well established that no free market works when prices don't reflect all the real costs. It takes policies and measures to bring such external costs inside the cost-benefit calculus of those doing the emitting.

Let me turn to a related issue: Type 1 versus Type 2 errors. Those of you who study economics are aware of this distinction. A given forecast could be wrong, and it might be costly to hedge against the outcome it predicts. "Therefore," the argument goes, "given the uncertainties, let's not risk wasting our current resources on some uncertain worry." Moreover, those who predict a problem that doesn't end up being very serious fear being blamed for wasting society's resources. They fear making the "Type 1 error."

Table 1: Type 1 Versus Type 2 Errors and Their Consequences.

Decision                                     | Forecast proves false | Forecast proves true
Accept forecast—policy response follows      | Type 1 error          | Correct decision
Reject or ignore forecast—no policy response | Correct decision      | Type 2 error

Those who fear committing Type 2 errors might say, on the other hand, "Let's accept the forecast that there's a 50% chance of substantial damage from climate change. We'll respond by investing in alternative energy systems, transitioning away from emitting industries, investing in adaptation measures, and redistributing resources." If the forecast proves false, as the Wall Street Journal editors insist it will, then we would have committed a Type 1 error, because we made an unnecessary investment hedging against an uncertain outcome that didn't occur. But what if the forecast proves true? Then we made the correct decision, because our anticipatory investments lessened our suffering when climate change actually occurred, as predicted. We then avoided the Type 2 error. Suppose, instead, a decision-maker says, "There's just too much uncertainty and political opposition surrounding the climate change issue, so let's reject or ignore the forecast and not formulate a policy response." If dangerous climate change does not occur or does not inflict major damage, then we were right—or maybe just lucky. We got away with it. Smart move.
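The logic of Table 1 can also be put in expected-value terms. Every number in the following Python sketch is hypothetical, chosen only to illustrate why hedging can win the expected-loss comparison at probabilities well below 50% when potential damages dwarf the hedging cost.

```python
# Hypothetical expected-loss comparison for the two decisions in Table 1.
#   p          probability the damaging-climate forecast proves true
#   hedge_cost cost of acting (the Type 1 "wasted" cost if the forecast fails)
#   damage     loss if climate change hits an unprepared world (Type 2 error)
#   residual   (smaller) loss if it hits a prepared, hedged world

def expected_loss(p, hedge_cost=1.0, damage=20.0, residual=4.0):
    act = hedge_cost + p * residual   # accept forecast and hedge
    ignore = p * damage               # reject forecast, do nothing
    return act, ignore

for p in (0.05, 0.2, 0.5):
    act, ignore = expected_loss(p)
    better = "hedge" if act < ignore else "ignore"
    print(f"p={p:.2f}: E[loss|hedge]={act:.2f}, "
          f"E[loss|ignore]={ignore:.2f} -> {better}")
```

With damages twenty times the hedging cost, hedging is already the better bet at roughly a one-in-sixteen chance: the quantitative version of the salmonella point made earlier.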
But what happens if the forecast proves true (or even was too optimistic) when we bet on it being wrong? Then we'd suffer a Type 2 error. Many sociologists and political scientists have studied this. They have found that whereas scientists are typically more worried about making Type 1 errors, policymakers are usually more concerned about committing Type 2 errors, and often prefer to hedge against potentially damaging events rather than do nothing and later suffer the consequences. Think of auto insurance: people pay for it usually hoping they'll never have to use it, but if they ever do get in an accident, they're covered. Some governments tend to work the same way, though not in all aspects of decision-making.

The Bush administration has provided examples of hedging strategies in the face of uncertain forecasts, the most obvious being the war with Iraq. The war was sold as a precautionary attack to make sure Iraq did not have and was not building weapons of mass destruction. Bush did not want to risk having a rogue nation turn into a threatening power, endangering world security, or so it was claimed. Bush, it seems, fears Type 2 errors far more than Type 1 errors when it comes to putative evidence of dangerous weapons in the hands of "rogue" nations, but the reverse when it comes to avoiding the potential damages from dangerous climate change. In contrast to the extreme precaution invoked by the Bush Administration and the military actions taken to pre-empt putative risks, the U.S. has not endorsed mandatory climate change policy; it opted out of the Kyoto Protocol in March 2001 and has since enacted only weak voluntary domestic emissions standards and small technology development investments. In this, the administration resembles some economists and others who do not see climate change as a serious problem, ostensibly because large uncertainties make policy-making premature—no precautionary principle is invoked here. Moreover, the Bush Administration focuses primarily on the costs of mitigating GHGs and the harm mitigation would do to the economy in terms of goods traded in markets, looking only at the aspects that can easily be quantified in monetary terms. I do give him credit for being one of the few leaders who has actually admitted that he opposes strict climate policy because he believes it will hurt favored industries or certain key sectors of the economy (e.g., coal producers and manufacturers of inefficient vehicles—SUVs). His lack of concern for distributional or biodiversity risks is not camouflaged. His administration simply thinks it can get away with that value system politically—and, so far, its gamble to ignore the public's concern about the environment has paid off. How much longer that strategy will work remains to be seen, particularly in the wake of the loss of New Orleans to Hurricane Katrina.

Markets are only part of the story. What Bush fails to acknowledge is that goods traded in markets are not the only entities that people and nations consider valuable—there are many non-market amenities as well. I have argued in various papers and presentations that, when considering the effects of global warming, one must look at what I call the "five numeraires" to understand the full range of consequences the world will experience due to our action or inaction on climate change. They are: market impacts, human lives lost, biodiversity loss, distributional impacts, and quality-of-life losses per ton of carbon (C) emitted.
Whereas it's relatively easy to put a dollar value on a market impact—like increased or decreased crop yields—how do you quantify the loss of a heritage site to rising sea levels, or the extinction of species from habitat loss and climate change? I would argue that deciding against climate change policy on the sole numeraire of protecting favored domestic industry (part of the "market impacts" numeraire) will end up being much more costly than considering all five numeraires and implementing climate change policies accordingly.

Table 2: Five Numeraires for Judging the Significance of Climate Change Impacts.

Vulnerability to climate change | Numeraire
Market impact                   | $ per ton C emitted
Human lives lost                | Persons per ton C
Biodiversity loss               | Species per ton C
Distributional impacts          | Income redistribution per ton C
Quality of life                 | Loss of heritage sites; forced migration; disturbed cultural amenities; etc., per ton C

Note: Multiple metrics for the valuation of climatic impacts are suggested. Typically, only the first numeraire—market-sector elements—is included in economic cost-benefit calculations. Different individuals, cultures, and governments might put very different weights on these five—or other—numeraires. It is therefore suggested that analyses of climatic impacts first be disaggregated into such dimensions, and that any re-aggregation provide a traceable account of the aggregation process, so that decision makers can apply their own valuations to the various components of the analysis. (From Schneider, Kuntz-Duriseti, & Azar, 2000.)23

In summary, experts should answer three questions for citizens: 1) What can happen? 2) What are the odds? and 3) How do we know? Citizens and/or policymakers should, in turn, take that information and use it to make value judgments about how to take risks and who pays for what. Policymakers, typically influenced by stakeholder interests, must also ensure that scientific assessments and consequent policy decisions are not biased by industry or environmental influences alone, but rather consider a wide range of interests and opinions. Even in the most optimistic scenarios, CO2 will stabilize at a much higher concentration than it has reached today, and temperature will rise accordingly.24 It will take even longer for sea level rise from thermal expansion and the melting of polar ice to play out, but what is most problematic is that how we handle our emissions now and over the next five decades preconditions the sustainability of the next millennium, as large ice sheets melted or species driven to extinction are effectively irreversible losses, even if they take centuries to fully unfold. That is an awesome responsibility for the next few generations to bear. To face it with denial, favoritism toward limited interests, and selective analysis that ignores all factors other than what is measurable in market transactions (discounted at high rates that favor near-term benefits over long-term risks of unsustainability) will not leave a proud legacy for such a generation of decision-makers. A genuine dialog for examining risks based on sound science, multiple metrics, and a broad array of interests—not just elliptical pronouncements from narrow interests and political ideologists who misrepresent the mainstream science and economics of climate change—is long overdue.
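The "traceable account" suggested in the note to Table 2 can be sketched in code. Everything below (the data structure, the placeholder impact values, and the two weighting schemes) is my own hypothetical illustration, not output from the cited paper; the point is only that the normative step, the weights, stays explicit and separate from the descriptive per-numeraire estimates.

```python
# Keep impacts disaggregated by numeraire; aggregate only when a decision
# maker supplies explicit weights. All values are hypothetical placeholders,
# expressed as damages per ton of carbon emitted.

impacts_per_tonC = {
    "market ($)":                   25.0,
    "lives lost (persons)":         1e-6,
    "biodiversity (species)":       1e-8,
    "distribution (Gini-pt shift)": 1e-7,
    "quality of life (idx)":        1e-4,
}

def aggregate(impacts, dollar_weights):
    """Convert each numeraire to dollars using the decision maker's own
    weights; the weights, not the science, carry the value judgment."""
    return sum(dollar_weights[k] * v for k, v in impacts.items())

# Two value systems, same underlying impact estimates:
market_only = {"market ($)": 1.0, "lives lost (persons)": 0.0,
               "biodiversity (species)": 0.0,
               "distribution (Gini-pt shift)": 0.0,
               "quality of life (idx)": 0.0}
broad_view  = {"market ($)": 1.0,
               "lives lost (persons)": 5e6,          # $ per life
               "biodiversity (species)": 1e9,        # $ per species
               "distribution (Gini-pt shift)": 1e7,  # $ per Gini point
               "quality of life (idx)": 1e4}         # $ per index point

print("market-only $/tC:", aggregate(impacts_per_tonC, market_only))  # 25.0
print("broad-view  $/tC:", aggregate(impacts_per_tonC, broad_view))   # 42.0
```

Two decision makers looking at identical science can thus reach very different "costs" per ton, which is exactly why any aggregation must remain traceable rather than buried in a single dollar number.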
We owe the people, plants, and animals of the future nothing less than an honest and open debate, in which long-term interests are not discounted away by the convenient calculus of high returns on investment and the aggregation of all impacted factors into monetary metrics that give vastly disproportionate weight to those holding the reins of wealth. Climate change, in the view of this observer, is much more an ethics issue than an economics problem.

REFERENCES

1. Funtowicz, S.O. & Ravetz, J.R. (1993) Three types of risk assessment and the emergence of post-normal science, in Krimsky, S. & Golding, D. eds. Social Theories of Risk. Greenwood, Westport, CT: 251-273.
2. Kuhn, T. (1962) The Structure of Scientific Revolutions. University of Chicago Press, Chicago.
3. United Nations Framework Convention on Climate Change (1992) United Nations Framework Convention on Climate Change: Convention Text. UNFCCC, Bonn. Available at: http://unfccc.int/essential_background/convention/background/items/2853.php
4. Goulder, L.H. & Schneider, S.H. (1999) Induced technological change and the attractiveness of CO2 abatement policies. Resource and Energy Economics 21: 211-252.
5. Kaya, Y. (1990) Impact of carbon dioxide emission control on GNP growth: Interpretation of proposed scenarios. Paper presented to the IPCC Energy and Industry Subgroup, Response Strategies Working Group, Paris (mimeo).
6. Yang, C. & Schneider, S.H. (1998) Global carbon dioxide emissions scenarios: Sensitivity to social and technological factors in three regions. Mitigation and Adaptation Strategies for Global Change 2: 373-404.
7. Ehrlich, P.R. & Holdren, J.P. (1971) Impact of population growth. Science 171: 1212-1217.
8. IPCC (2000) Emissions Scenarios: A Special Report of Working Group III of the Intergovernmental Panel on Climate Change. Nakicenovic, N. & Swart, R. eds. Cambridge University Press, Cambridge, UK.
9. IPCC (2001) Synthesis Report 2001—Contribution of Working Groups I, II, and III to the Third Assessment Report of the Intergovernmental Panel on Climate Change. Watson, R.T. & the Core Writing Team eds. Cambridge University Press, Cambridge, UK.
10. Schneider, S.H. (2001) What is "dangerous" climate change? Nature 411: 17-19.
11. Schneider, S.H. & Kuntz-Duriseti, K. (2002) Uncertainty and climate change policy, Chapter 2 in Schneider, S.H., Rosencranz, A. & Niles, J.O. eds. Climate Change Policy: A Survey. Island Press, Washington D.C.: 53-88.
12. Morgan, M.G. & Keith, D.W. (1995) Subjective judgments by climate experts. Environmental Science and Technology 29: A468-A476.
13. Jones, R.N. (2000) Managing uncertainties in climate change projections: Issues for impact assessment. Climatic Change 45(3-4): 403-419.
14. Forest, C.E., Stone, P.H., Sokolov, A.P., Allen, M.R. & Webster, M.D. (2002) Quantifying uncertainties in climate system properties with the use of recent climate observations. Science 295(5552): 113-117.
15. Andronova, N.G. & Schlesinger, M.E. (2001) Objective estimation of the probability density function for climate sensitivity. Journal of Geophysical Research 106: 22605-22612.
16. Schneider, S.H. (2003) Abrupt non-linear climate change, irreversibility and surprise, in Estimating the Benefits of Climate Policies. OECD, Washington, D.C. Available online at: http://www.oecd.org/dataoecd/7/14/2483223.pdf
17. Schneider, S.H. & Lane, J. (2006) An overview of dangerous climate change, in Schellnhuber, H.J., Cramer, W., Nakicenovic, N., Wigley, T. & Yohe, G. eds. Avoiding Dangerous Climate Change.
Cambridge University Press, Cambridge, UK.
18. Schneider, S.H. & Mastrandrea, M.D. (2005) Probabilistic assessment of "dangerous" climate change and emissions pathways. Proceedings of the National Academy of Sciences 102: 15728-15735.
19. Price, J.T. & Root, T.L. (2002) No Orioles in Baltimore? Climate change and neotropical migrants. Bird Conservation 17: 12.
20. Thompson, L. Map of total area of ice on Kilimanjaro (1912, 1953, 1976, 1989, 2000). Published online at: http://researchnews.osu.edu/archive/kilicoresmap.htm
21. Lane, J., Sagar, A. & Schneider, S. (2005) Equity in climate change. Tiempo 55: 9-14.
22. Schneider, S.H. & Lane, J. (2006) Dangers and thresholds in climate change and the implications for justice, in Adger, N. ed. Politics, Science, and Law in Justice Debates. MIT Press, Cambridge, MA.
23. Schneider, S.H., Kuntz-Duriseti, K. & Azar, C. (2000) Costing non-linearities, surprises and irreversible events. Pacific and Asian Journal of Energy 10(1): 81-106.
24. Azar, C. & Schneider, S.H. (2002) Are the economic costs of stabilizing the atmosphere prohibitive? Ecological Economics 42: 73-80.