title: Economic Assessment of the Damages Caused by Global Warming
authors: van Kooten, G. Cornelis
date: 2012-08-06
journal: Climate Change, Climate Science and Economics
DOI: 10.1007/978-94-007-4988-7_7

Abstract: Damages avoided – the principal benefit of mitigating climate change – are investigated in this chapter, particularly the potential adverse impacts on the primary sectors, biodiversity and human health. A review of studies indicates that climate change is unlikely to have much impact on agriculture and forestry; projected climate change will increase productivity in some regions while reducing it in others, leading to a redistribution of land rents with little impact on overall output. When CO2 fertilization is taken into account, there might even be an overall increase in primary sector productivity that results in more undisturbed land, thus protecting biodiversity. Other findings in this chapter also run counter to current shibboleths: the biggest threat to polar bears is hunting, not climate change; current trends in Arctic ice extent are not without historical precedent; sea level rise is not an imminent threat; extreme weather events are not increasing; malaria is not only a tropical disease; and human health is a function of income, not climate, with bottom-up models using UN data predicting that death rates from almost all causes will be lower with projected global warming than without it. Meanwhile, integrated assessment models (IAMs) simply assume damages are an arbitrary function of temperature; upon balancing discounted costs and benefits, IAMs can be used to find an optimal path (usually of a carbon tax or emissions cap) for mitigating climate change. It is shown that different assumptions regarding damages, the discount rate, and/or the probability of catastrophic damage can be used to justify completely different policies for addressing global warming. Therefore, a carbon tax that is contingent on the temperature in the troposphere above the tropics – where the earliest indication of global warming is predicted to occur – is considered to be the preferred policy strategy, as it should appeal to global warming proponents and skeptics alike. Finally, the Kaya identity is used to demonstrate the policy dilemma that decision makers face in reducing CO2 emissions.

…action to forestall climate change, the UNEP is telling us that humans are masters of their own fate – that we can control the weather! This is an arrogant claim to say the least, particularly given that weather events are considered by most to be uncontrollable and unpredictable, and that severe weather events ('acts of God') have frequently altered the course of human history (Durschmied 2000).

In this chapter, we want to provide some perspective regarding the possible damages from global warming. This is important given that the expected damages from global warming constitute the benefits (damages avoided) of taking action to mitigate greenhouse gas emissions. There are many problems and pitfalls associated with attempts to calculate the damages from climate change. First, it is necessary to separate damages attributable to the anthropogenic component of global warming from the warming that is due to natural factors. If 90% of global warming is natural, then damages are essentially unavoidable and the benefits of reducing human emissions of greenhouse gases, particularly CO2, are going to be small.
However, if 90% or more of global warming can be attributed to human causes, the benefits of avoiding damages will be much higher. The problem is that, as shown in earlier chapters, the contribution of natural versus human factors to global warming remains unknown – a source of speculation. Given scientific uncertainties, it is not clear whether a particular future event, such as a drought, an unusual storm or an early spring (which negatively affects tree growth in boreal climes, for example), can be attributed to climate change of anthropogenic origin, or whether it is simply a natural occurrence well within the usual vagaries of weather patterns. It may well be a bit of both, and sorting that out is nigh impossible.

The relevant question is: what would the weather-related damage of an event be if action is taken to prevent global climate change versus what it would be without such action? This is the 'with-without' principle of economic evaluation, and it must be the principle that guides any discussion of damages. It is another way of saying that opportunity cost must be taken into account.

Even if we know that human activities are primarily responsible for climate change, it is necessary to determine how much temperatures will rise and what effect this will have on such things as sea level rise, severe weather events (more frequent droughts, hurricanes, tornados, etc.), biodiversity and ecosystems, disease, and so on. The scientific uncertainties are enormous, but once the impacts are known, it is necessary to estimate the costs (or benefits) of such changes and balance these damages against the costs of mitigating CO2 emissions. It is quite possible that there is a socially optimal level of atmospheric CO2 that is much higher than the current level – a point where the marginal benefits of further reducing CO2 emissions equal the marginal costs of doing so. Given wicked uncertainty, could we ever find such an optimal point?

Finally, there is the question of tipping points, which has become a cause célèbre among certain economists, particularly the UK's Sir Nicholas Stern (2007) and Harvard University's Martin Weitzman (2009a, b, c). One version of the story is that global warming will cause the boreal tundra to melt, thereby releasing vast amounts of the potent greenhouse gas methane (CH4). Once that happens, it is argued, runaway global warming will take place. Therefore, based on the 'precautionary principle,' it is necessary to take drastic action to control human activities that release greenhouse gases into the atmosphere. It is important to note that arguments based on the precautionary principle (Chap. 6) cut both ways, as William Anderson (2010), a philosopher from Harvard University, points out in a commentary on climategate: if drastic action to curb human emissions could, for example, lead to political instability that results in a cataclysmic global conflict that must be avoided at all costs, then we should not take drastic action to curb emissions.

Clearly, estimating the damages avoided by mitigating climate change is a much more difficult and uncertain prospect than estimating costs, even though ascertaining costs is a difficult enough task. Since costs are related to what is happening in the economy today, they are supposedly easier to get a handle on. Yet, as will be seen in Chap. 8, where we look at the costs of implementing legislation proposed in the U.S.
Congress to reduce CO2 emissions, such cost estimates are controversial. Determining damages avoided is a significantly more difficult task, and not one that policymakers are willing to take on, with the exception perhaps of Stern (2007). However, even if specific estimates of damages are unavailable or unstated, political decisions taken to address climate change give some indication of the costs that politicians are willing to incur, and thereby some notion of the damages they think might be avoided (even if these relate only to their chances of getting reelected at some future date).

In this chapter, we examine two approaches to damage estimation, commonly known as 'bottom up' and 'top down.' Bottom-up approaches determine the impacts of global warming at the sector, region or country level. For example, a bottom-up study might focus on the effect that climate change has on crop yields in the Canadian prairie provinces, the potential impact of sea-level rise on New York City, the effect of increased drought on biodiversity in the U.S. Pacific Northwest, or the impact of reduced precipitation on the Amazon rainforest. Such studies are somewhat speculative in the sense that climate change is about global trends in average temperatures and precipitation, not specific details. But the main drawback of bottom-up estimates of climate-change damages is that they do not take into account adaptation and impacts elsewhere. Thus, while tourists may no longer visit one region, another region may see an increase in tourism. A reduction in crop yields in one region could be more than compensated for by increased yields in another region. Damage estimates change as prices change, but bottom-up approaches cannot take such price changes into account.

Top-down models are less detailed and, in that sense, less accurate in their estimates of damages. Given that climate change is all about global mean temperatures and precipitation, and trends in these averages, crude estimates and indicators of the direction and potential magnitude of damages are all that can realistically be expected. This is what top-down models provide. Integrated assessment models (IAMs) are a particular type of top-down model, and they offer perhaps the best means for estimating global damages. IAMs are generally mathematical programming models that seek to optimize an objective function subject to static and dynamic constraints. The objective is usually to maximize the discounted sum of economic surpluses (generally only producer and consumer surpluses) over time, with constraints on technology, available resources, et cetera.

We begin in this chapter by considering estimates of climate-change damages related to (1) the primary sectors (agriculture and forestry), (2) ecosystems and biodiversity, (3) sea level rise, (4) increased severe weather incidents, (5) health effects, and (6) other impacts (e.g., on tourism). Sea level rise, health and biodiversity are considered especially vulnerable to global warming and thus to lead to large estimated costs if climate change occurs. In agriculture and forestry, sophisticated modeling and statistical methods have been used to determine damages from climate change, but even such estimates remain highly speculative. Estimates of damages in other sectors are crude and unreliable at best. Most of the damage estimates provided here are derived from bottom-up studies, but not all.
In the second half of the chapter, we focus on top-down modeling efforts. These are likely to be more controversial and, from an economist's point of view, a more exciting line of inquiry. Here the research centers less on the actual magnitude of the damage estimates, although that is of importance, than on the choice of an appropriate discount rate (which has a huge effect on the magnitude of damages when brought to a single year) and the potential for catastrophe (infinite damages).

When evaluating various sector- or regional-level estimates of damages, it is well to recall the measurement issues identified in the previous chapter. In many instances, estimates of damages constitute little more than an income transfer from one region to another or from one group to another, and are not true costs of climate change. One should also note that estimates of damages likely have little relation to a particular policy, such as the Kyoto Protocol, which actually did little to prevent global warming. Nor do all authors come to the conclusion that climate change will only lead to large reductions in overall global wellbeing, although there is agreement that some regions will lose while others gain. In this regard, it is important to note that, even if climate change results in benefits (albeit with some probability), a risk-averse society may still be willing to pay some amount to avoid climate change.

Not surprisingly, sector-, regional- and country-level estimates of climate-change damages come from every quadrant, and are certainly not the purview only of economists. In some cases, damages are estimated using the principles espoused in the previous chapter – estimates based on solid economic science using economic measures of surplus. In other cases, estimates confuse true economic costs and benefits with transfers. Two broad approaches are used. The first uses hedonic methods (discussed in Chap. 6) to develop an econometric (statistical) model for estimating damages. The second approach employs numerical solutions to a constrained optimization model. Each is discussed in turn.

Ricardian land-use models have become popular for determining the costs or benefits of climate change. Ricardian analysis simply assumes that landowners will put land to its best alternative use, thereby maximizing the available rent. Observed land values reflect the fact that land rents diverge as a result of different growing conditions, soil characteristics, nearness to shipping points, and so on. Since producers face nearly the same output price, differences in land prices are the result of differences in Ricardian or differential rents, which, in turn, are attributable to the various factors affecting production. Thus, marginal agricultural land that is used for extensive grazing has a much lower rent than land that is intensively cropped.

The idea is illustrated with the aid of Fig. 7.1, where three land uses are considered and the factor determining differential or Ricardian rent is precipitation. When annual precipitation is low, the land can only support livestock, but, as precipitation increases, the rangeland yields increasingly higher rents because more forage can be produced. When annual precipitation increases beyond P1 (associated with point A), crop production is the alternative land use that provides the highest rents. To the right of A, landowners will cultivate land and grow crops to achieve higher net returns to the land. Rents to crop production are even higher on land parcels that experience more rainfall.
If too much rain falls, however, the land yields a greater rent in forestry – beyond P2 (point B), rents in crop production are lower than those earned when the same land is in forestry. Points A and B represent intensive margins, precipitation thresholds where land use changes, while a and b represent extensive margins for crop production and forestry, respectively, because rent to those land uses falls below zero if precipitation is below those thresholds. The Ricardian approach assumes that a landowner experiences average annual precipitation of P0 before climate change occurs. If precipitation under the altered climate falls below P1, the landowner will stop crop production and rely on forage for livestock; if precipitation increases beyond P2, she will encourage a forest to establish on the land. It is assumed that, for annual precipitation levels between P1 and P2, the owner will adjust the use of other inputs to maximize the rent accruing to the land.

A hedonic model estimates farmland values as a function of climate variables, such as growing degree days (the number of days during the growing season that temperature exceeds 5°C) and precipitation, and various control variables (soil quality, latitude, nearness to an urban area or population density, nearby open spaces, presence of irrigation, etc.). The climate and control variables constitute the explanatory variables or regressors. Once the model parameters have been estimated for a sample of farms for which actual sales data are available, the results are first used to predict farmland values across the entire study region or country. Then the climate variables are changed to reflect the projected change in climate, and the model parameters are again used to predict farmland values for the entire region/country. (The control variables and the estimated model parameters remain the same in the current and changed climate states of the world.) The model implicitly assumes that, if landowners face different climate conditions, they will choose the agricultural land use (crop and technique) that maximizes their net returns. The differences between farmland values in the current climate state and the projected future climate regime constitute the costs (if overall values fall) or benefits (if overall farmland values rise) of climate change. A stylized sketch of such a regression is provided below.

The Ricardian approach is considered by economists to be the most appropriate method for estimating the potential impact of climate on agriculture, and even on forestry if forest use of land is taken into account. However, forestlands are often ignored because there is little information on private forestland prices (much is owned by institutional investors), or the forestland is owned by the government and no price data are available. Further, there is no reason to suppose that the estimated parameters will continue to hold under a changed climate regime, particularly if growing conditions under a future climate are outside observed values (which increases the uncertainty of predicted values); it is also difficult to use hedonic models estimated for the current period to project how the same land might be used some 50–100 years later. Ricardian models do not take into account technological and economic changes that might occur, nor can they be expected to do so. But they also fail to take into account the fertilizer effect of CO2, which is discussed below.
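To make the two-step logic concrete – estimate a hedonic regression on observed sales, then re-predict values with shifted climate variables while holding controls and coefficients fixed – the following sketch uses synthetic data and hypothetical variable names. It is illustrative only; actual Ricardian studies use real sales records and far richer specifications.

```python
# A minimal sketch of a Ricardian/hedonic farmland-value regression on
# synthetic data; all variable names and magnitudes are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500  # hypothetical sample of farm sales

df = pd.DataFrame({
    "gdd": rng.normal(1600, 250, n),        # growing degree days (base 5 C)
    "precip": rng.normal(450, 90, n),       # annual precipitation, mm
    "soil_quality": rng.uniform(0, 1, n),   # control variable
    "pop_density": rng.lognormal(3, 1, n),  # control variable
})
# Assumed "true" relation, quadratic in climate so an interior optimum exists.
df["land_value"] = (2000 + 1.5 * df.gdd - 0.0004 * df.gdd**2
                    + 4.0 * df.precip - 0.004 * df.precip**2
                    + 800 * df.soil_quality + 30 * np.log(df.pop_density)
                    + rng.normal(0, 150, n))

X = sm.add_constant(df[["gdd", "precip", "soil_quality", "pop_density"]])
X["gdd2"], X["precip2"] = df.gdd**2, df.precip**2  # quadratic climate terms
fit = sm.OLS(df.land_value, X).fit()

# Predict values under the current climate, then under a warmer, wetter
# scenario (controls and coefficients held fixed, as the method assumes).
X_future = X.copy()
X_future["gdd"] += 150
X_future["gdd2"] = X_future.gdd**2
X_future["precip"] += 25
X_future["precip2"] = X_future.precip**2
impact = fit.predict(X_future).sum() - fit.predict(X).sum()
print(f"Estimated change in aggregate farmland value: {impact:,.0f}")
```

A positive difference would be read as a benefit of climate change, a negative one as a cost, exactly as described above.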
Despite these flaws, the Ricardian method is one of the few statistical approaches that can be used to determine potential damages from global warming, and it is solidly rooted in economic theory. Using econometric analysis of land values, Mendelsohn et al. (1994) projected a small increase in U.S. GDP as a result of global warming. On the other hand, Schlenker et al. (2006) found that, if agricultural regions were separated into irrigated and dryland areas, the conclusions from econometric modeling would be reversed. Climate change would unambiguously impose net costs on agriculture in dryland regions of the United States, although some dryland areas in the northern states would gain. It was also believed that "climate change will impose a net economic cost on agriculture in irrigated counties, whether in the form of higher costs for replacement water supply or lower profits due to reduced water supply." Reinsborough (2003) also used a Ricardian land rent model to analyze the potential impact of global warming, but for Canada. She found that Canada would benefit marginally as a result of climate change – some $1.5 million per year or less. In sharp contrast, Weber and Hauer (2003) found that Canadian agricultural landowners could gain substantially as a result of climate change. Their Ricardian rent model employed a much finer grid and greater intuition regarding agricultural operations than did Reinsborough's. They projected average gains in land values of more than 50% in the short term (to 2040) and upwards of 75% or more in the longer term (to 2060). Canada will clearly benefit from global warming, as most likely would Russia.

A second class of models uses economic theory to develop a mathematical representation of land-use allocation decisions. An economic objective function is specified and then optimized subject to various economic, social, climate and technical constraints, with the latter two representing the production technology. The objective function might constitute net returns to landowners, the utility (wellbeing) of the citizens in the study region, or, most often, the sum of producers' and consumers' surpluses. The choice of an objective function depends on the purpose of the analysis, the size of the study (whether at the country, region or worldwide level) and the number of sectors included. The numbers and types of constraints also depend on the size of the model and its purpose (multi-region models with 100,000 or even more constraints are not unusual), but somewhere (usually in the production constraints) climate factors are a driver. Models are calibrated to current land uses and other conditions (e.g., trade flows) using a method such as positive mathematical programming, which employs economic theory to find calibrating cost functions (Howitt 1995, 2005). Models are solved numerically using a software environment such as GAMS (McCarl et al. 2007). To determine the costs (or benefits) associated with climate change, the calibrated model is solved with current climate conditions, and subsequently re-solved with the projected future climate conditions. Differences between the base-case objective function and the future scenario (or counterfactual) constitute an estimate of the cost or benefit of climate change.
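The solve/re-solve logic can be illustrated with a deliberately tiny land-allocation program. The rent schedules below are invented and merely echo Fig. 7.1 (ranching dominates when dry, crops in the middle range, forestry when wet); operational models such as those cited above involve thousands of activities and constraints and are calibrated by positive mathematical programming.

```python
# A stylized "solve under current climate, re-solve under projected climate"
# exercise; all rent numbers are invented for illustration.
from scipy.optimize import linprog

def max_returns(precip_mm):
    # Invented Ricardian-style rent schedules ($/ha) as functions of
    # annual precipitation, in the spirit of Fig. 7.1.
    rents = [
        0.05 * precip_mm,                                   # rangeland
        -120 + 0.60 * precip_mm - 0.00055 * precip_mm**2,   # crops
        -300 + 0.55 * precip_mm,                            # forestry
    ]
    total_land = 1000.0  # hectares available
    # linprog minimizes, so negate rents; single land constraint.
    res = linprog(c=[-r for r in rents],
                  A_ub=[[1, 1, 1]], b_ub=[total_land],
                  bounds=[(0, None)] * 3)
    return -res.fun, res.x

base, base_use = max_returns(450)      # solve at current climate
future, future_use = max_returns(520)  # re-solve at projected climate
print("allocation now:", base_use, "allocation later:", future_use)
print(f"benefit (+) or cost (-) of climate change: {future - base:,.0f} $/yr")
```

The difference in the optimized objective between the two solves is the model's estimate of the cost or benefit of the climate shift, just as in the full-scale models.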
Some numerical constrained optimization models are static, while others are dynamic in the sense that current activities (the land uses chosen today) affect the state of nature in the next period (future possibilities), and thus the choices one can make in the future. This is the idea behind integrated assessment models. Most models of land use in agriculture and forestry are static, although the Forest and Agricultural Sector Optimization Model (FASOM) is an exception (Adams et al. 1996). It optimizes the discounted sum of producers' and consumers' surpluses across forestry and agriculture, determines optimal harvest times of commercial timber, permits reallocation of land between the agricultural and forest sectors over time, and takes into account carbon uptake and release. To keep things manageable, it employs a 10-year time step. The impact of climate change is not modeled per se, as FASOM is primarily used for policy analysis, to determine how carbon penalties and subsidies might affect the allocation of land within and between the two primary sectors. The primary limitation of FASOM is that it applies only to the forestry and agricultural sectors of the United States, ignoring climate impacts in other countries that may affect U.S. prices.

One variant of static numerical optimization models is the computable general equilibrium (CGE) model. A CGE model maximizes a utility or social welfare function subject to equality constraints. Each sector in an economy is somehow represented in the constraint set (even if subsumed within a larger sector) and sometimes in the objective function. The extent to which sectoral detail is modeled depends on the question to be addressed (the purpose of the study) and the extent to which detailed macroeconomic data are available. The best known work employing CGE models in agriculture has been done at the Economic Research Service of the U.S. Department of Agriculture (Darwin et al. 1995; Schimmelpfenning et al. 1996).

Upon comparing econometric results with those from mathematical programming models, we find that the results of Weber and Hauer, as well as those of Schlenker et al. (2006) for the northern United States, are in line with those reported by Darwin et al. (1995) for Canada. Darwin et al. used a land-use model linked to a computable general equilibrium model to estimate the global welfare impacts of climate change as it affects output in the primary sectors. They found that, if landowners were able to adapt their land uses to maximize net returns (as assumed in the Ricardian analyses), global GDP would increase by 0.2–1.2%, depending on the particular climate model's projections employed.

The majority of studies of damages to the agricultural and forestry sectors are for the United States and Canada. The general conclusion is that the U.S. agricultural sector will likely be harmed by climate change, but damages may be minor compared to the size of the sector, while Canada's sector will benefit overall (although some regions could be harmed). Clearly, while future research might improve the methods of analysis, scientific and economic uncertainty will make it difficult to obtain more than ballpark estimates of the potential damages from global warming to the agricultural and forestry sectors – and even estimates of gains cannot be ruled out entirely.
For example, in a study of the impacts of global warming on individual countries, William Cline (2007) concludes that there could be gains to global agriculture in the short run, but that in the longer run the sector's output will decline. Few economic studies include the potential CO2-fertilization benefits that would cause crops and trees to grow faster, partly because there is much debate about the magnitude of the effect. Cline (2007) does take it into account and attributes short-term gains in agricultural output to the fertilization effect but, based on a diminishing-returns argument and the adverse effect of excessive warming, argues that agricultural yields will decline in the longer run. The increase in CO2 during the twentieth century has contributed to about a 16% increase in cereal crop yields (Idso and Singer 2009). Levitt and Dubner (2009) indicate that there would be a 70% increase in plant growth under a doubled-CO2 atmosphere. Research gathered by Michigan State University professor emeritus of horticulture Sylvan H. Wittwer indicates that, with a tripling of CO2, roses, carnations and chrysanthemums experience earlier maturity, have longer stems and larger, longer-lasting, more colorful flowers, with yields increasing up to 15%. Yields of rice, wheat, barley, oats and rye increase by upwards of 64%, potatoes and sweet potatoes by as much as 75%, and legumes (including peas, beans and soybeans) by 46%. The effect of carbon dioxide on trees, which cover one-third of Earth's land mass, may be even more dramatic: according to Michigan State's forestry department, some tree species have been found to reach maturity in months instead of years when the seedlings were grown in a triple-CO2 environment (Sussman 2010, p.66).

People are much more interested in protecting megafauna, such as elephants and tigers, than species such as burrowing beetles (van Kooten and Bulte 2000). The polar bear (Ursus maritimus) is touted as the most prominent species threatened by global warming. Polar bears are endangered by loss of habitat – by the decline in sea ice in the Arctic. It is argued that the bears need the ice to survive because it is the platform from which they hunt seals, their primary food source. A picture of a lone polar bear on a small piece of ice floating in a vast sea (although the distance to the nearest ice floe or land cannot be determined from the picture, and such a distance could well be very short) serves to highlight the threat global warming poses. While other species and ecosystems are considered to be threatened by climate change, it is the polar bear that is most widely reported upon.

[Footnote 6: Al Gore uses a picture of a mother polar bear and her cub on "an interesting ice sculpture carved by waves," available at http://www.whoi.edu/beaufortgyre/dispatch2004/dispatch02.html (viewed March 3, 2011), to highlight the plight of the polar bear. The following quote accompanies the photo: "Currently we are traveling towards the 150 deg. longitude line to the next CTD and mooring sights, but had to slow down to a crawl because of an interesting and exciting encounter; a polar bear was sighted swimming off of the ship's port bow. It looked to be a juvenile, but is still considered to be very dangerous. Later on a mother and cub were also spotted on top of an extraordinary ice block." The quote is due to Kris Newhall, who also took the picture and regularly sent dispatches from the Woods Hole Oceanographic Institution's 2004 Beaufort Gyre Expedition; the quote is from dispatch 2, August 7–8, 2004. There is no suggestion that polar bears are in any way threatened; rather, the dispatch suggests that polar bear sightings are a common occurrence. The polar bears in the photo used by Al Gore were within swimming distance of shore. The ice was melting, but it always melts in summer. In the winter of 1973–1974, abnormally heavy ice cover in the eastern Beaufort Sea resulted in a major decline in polar bear numbers (as reported in Armstrong et al. 2008, p.386). Thus, both more ice cover and less ice cover appear to be harmful to polar bears.]
Thus, we examine it in a bit more detail to determine what the potential cost of its demise might be. Let us begin by employing contingent valuation data from other megafauna and applying them to the polar bear as a form of crude benefit transfer (discussed in the previous chapter). Information provided in van Kooten and Bulte (2000, p.305) indicates that hunters were willing to pay $36.58 (1993 U.S. dollars) for a grizzly bear permit (that would permit the killing of one bear). Respondents to contingent valuation surveys indicated that they would be willing to pay between $15.10 and $32.94 annually to avoid the complete loss of bald eagles, and $12.36 to avoid the disappearance of bighorn sheep. The largest annual values for preventing the loss of any species were associated with the monk seal and the humpback whale ($119.70 and $117.92, respectively).

There are a number of problems with these measures, including that the amount a person is willing to pay to protect a single species is not mutually exclusive – it is affected by the need to pay to protect other species, which is not asked in a survey – and that respondents to contingent valuation surveys employ high discount rates. With regard to the former, there might be an embedding effect: people respond as if they are protecting all wildlife species and not just the species in question. If people are asked about their willingness to pay to protect all wildlife species, they might well provide answers that are close to those for a single species. Thus, they might be willing to pay $120 per year to protect all megafauna but, within that category, only $15, say, to protect humpback whales. Likewise, in responding, people do not envision paying the stated amount in perpetuity, usually assuming a period about the same length as a car loan. Thus, a 5% rate of discount would actually translate into an effective discount rate of about 22%. On the other hand, if compensation demanded were used as opposed to willingness to pay to avoid a loss, the amount involved might be two or more times larger (Horowitz and McConnell 2002).

To find out what people would be willing to pay to avoid the loss of polar bears, one first needs to determine a suitable value. Given that the polar bear might have more in common with the humpback whale or monk seal than the bald eagle, we choose $120 per year, but then assume payment occurs for only 5 years, in which case each household would be willing to make a one-time lump-sum payment of $600 in 1993 dollars; this amounts to $900 in 2010 dollars if U.S. inflation between 1993 and 2010 is used. If only North American and European households were willing to pay this amount, and there are some 200 million such households, then ensuring the survival of polar bears is worth $180 billion, an enormous sum. This arithmetic is reproduced in the sketch below.
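The following back-of-envelope script reproduces the valuation arithmetic, along with the effective-discount-rate point made above; the inflation factor and the 5-year payment horizon are the assumptions stated in the text.

```python
# Back-of-envelope reproduction of the polar bear valuation in the text.
wtp_per_year = 120.0      # $/household/year (1993 US$), monk seal/whale analogy
years_paid = 5            # respondents assume a car-loan-like horizon
lump_sum_1993 = wtp_per_year * years_paid         # = $600 per household
cpi_1993_to_2010 = 1.5    # approximate US inflation factor used in the text
lump_sum_2010 = lump_sum_1993 * cpi_1993_to_2010  # = $900 per household
households = 200e6        # North America + Europe
total = lump_sum_2010 * households
print(f"aggregate value: ${total / 1e9:,.0f} billion")  # $180 billion

# Why stated annual WTP implies a high effective discount rate: treating a
# payment intended for only ~5 years as a perpetuity A/r equates it with the
# 5-year annuity value, so r_eff = i / (1 - (1 + i)**-n).
i, n = 0.05, 5
r_eff = i / (1 - (1 + i) ** -n)
print(f"effective discount rate: {r_eff:.0%}")  # ~23%, close to the 22% cited
```

The exact effective rate depends on the horizon respondents have in mind; a horizon slightly longer than 5 years reproduces the 22% figure cited above.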
However, this type of calculation is dangerous because it is likely that people are willing to pay that amount to protect all megafauna, not just polar bears. Alternatively, one might think of the $180 billion as a one-time payment to protect global biodiversity against loss due to climate change (given that biodiversity and megafauna are closely related in such valuations). In that case, the sum might seem small, and the guaranteed survival of polar bears might only garner a payment of several hundred million dollars – still a significant amount.

Yet, there remain caveats. First, there is a chance that the polar bear will not go extinct – that some members will survive. Polar bears might adapt to conditions much as their counterparts to the south, only to become polar bears again when the globe cools and the Arctic ice returns to pre-anthropogenic-warming conditions. [Footnote 7: For example, polar bears appear to mate with grizzly bears and other large bears.] Second, there is a chance that the Arctic ice does not disappear entirely and that sufficient numbers of bears continue to live on. After all, the minimum viable population required to prevent most large mammal species from going extinct may be rather small, even as low as 20–50 members (see van Kooten and Bulte 2000, pp.199–201, p.281). Finally, the polar bear may not only adapt to a changing climate but may actually increase in numbers – the forecast that polar bears are disappearing and that the cause is climate change may simply be wrong (see Armstrong et al. 2008). [Footnote 8: Armstrong et al. (2008) conduct a forecasting audit of the studies used by the U.S. Fish and Wildlife Service to recommend listing of the polar bear as an endangered species under the U.S. Endangered Species Act, which was done in 2008. They find that many principles of evidence-based forecasting are violated and that the grounds for listing the polar bear as endangered are unwarranted from a scientific standpoint.]

Consider the question of sea ice. There have been several attempts by various environmental groups to show that sea ice is now so deficient that it might be possible to reach the North Pole by boat. [Footnote 9: Paul Driessen and Willie Soon provide interesting accounts of public figures attempting to make journeys to the North Pole in summer in order to publicize the demise of Arctic ice, only to be turned away by cold and ice after having barely started their journeys. See http://townhall.com/columnists/PaulDriessen/2010/05/01/desperately_looking_for_arctic_warming?page=1 (viewed June 2, 2010).] The truth is that substantial sea ice remains and may even be expanding. This is shown in Fig. 7.2: the area of sea ice declined from 2004, reaching its minimum extent during 2007; thereafter, the extent of sea ice seems to have increased, as indicated by the arrows in the figure. Even so, winter sea ice extent is well within historical averages, with only summer sea ice slightly lower than the long-term average. Historically, the Arctic is characterized by warm periods when there were open seas and the Arctic sea ice did not extend very far to the south. Ships' logs identify ice-free passages during the warm periods of 1690–1710, 1750–1780 and 1918–1940, although each of these warm periods was generally preceded and followed by colder temperatures, severe ice conditions and maximum southward extent of the ice (e.g., during 1630–1660 and 1790–1830).
The IPCC WGI (2007, pp.351–352) asserts with "high confidence that sea ice was more extensive in the North Atlantic during the nineteenth century" than now, although this would not be unexpected given that the Little Ice Age ended sometime between 1850 and 1870. Since there are no empirical measures of the extent of sea ice prior to the age of satellites, one must rely on written accounts, anthropological evidence, ships' records and so on. Clearly, there must have been little ice in the Davis Strait west of Greenland, as the Vikings established colonies at Godthab (Nuuk), known as the Western Settlement, sometime at the beginning of the eleventh century or somewhat earlier (Diamond 2005). The Swedish explorer Oscar Nordkvist reported that the Bering Sea region was nearly ice free in the summer of 1822, while Francis McClintock (captain of the 'Fox') reported that Barrow Strait (north of Somerset Island, or northwest of Baffin Island) was free of ice in the summer of 1860 but had been completely frozen up at the same time in 1854. Even the famous explorer Roald Amundsen noted in 1903, during the first year of his 3-year crossing of the Northwest Passage, that ice conditions were "unusually favorable." In a highly speculative treatise on Chinese navigation, Gavin Menzies (2002, pp.343–357) comments on the likelihood that the entire North may have been sufficiently free of ice circa 1422 to enable Chinese explorers to map the coast of Siberia. [Footnote 11: Menzies cites, among others, Needham (1954).] Sussman (2010, p.113) provides a photograph of the submarine U.S.S. Skate on the surface in ice-free water at the North Pole in March 1959. [Footnote 12: Also http://wattsupwiththat.com/2009/04/26/ice-at-the-north-pole-in-1958-not-so-thick/ (viewed March 8, 2012).] Russian scientists, meantime, continue to argue that the Arctic is getting colder, not warmer.

The point of these observations is that the extent of Arctic sea ice has fluctuated over time. Research by the Norwegian Torgny Vinje (see Vinje and Kvambekk 1991) suggests that 1979 and 1981 may have been particularly bad years for sea ice, with 1976 comparatively ice free. The researchers point out that ice conditions are primarily driven by winds and ocean currents, that ice can accumulate at a rate of 20 cm/day, resulting in an ice thickness of 6 m in a month, and that open water ('polynyas') can be observed on the leeward side of islands in winter 10–30% of the time in the summer-to-winter ice-forming zones. Vinje and Kvambekk (1991) make three very relevant observations. First, the average area covered in ice in the Barents Sea in April during the years 1973–1976 was "about 700,000 km2 and about 1,150,000 km2 in 1969 and 1979, revealing a variation of as much as 400,000–500,000 km2 in the annual maximum extension over a period of four years" (p.61). That is, the extent of sea ice in the Barents Sea in April was found to vary by as much as 65% within 4 years. The dominant ice flow in the Arctic is the Transpolar Ice Drift Stream, which can bring 4,000–5,000 km2 of ice (equivalent to the annual water discharge of the Amazon) into the Barents Sea from the Greenland Sea, although this clearly cannot account for the differences in sea ice coverage. Second, Vinje and Kvambekk (1991) found a downward trend in sea ice area over the 23-year period 1966–1988, as measured in late August. This trend amounted to a loss of average sea ice area of 5,400 km2/year. Clearly, this trend amounts to no more than 4–5% of the total change in sea ice that can easily occur over 4 years – a 23-year record is too short in this case to derive definitive empirically-based conclusions.
Finally, the researchers point out that ice of various ages gets mixed up as a result of wind, wave, tidal and ocean current factors. They conclude that much of the theory and science of ice formation still needs to be sorted out – an observation that remains valid some two decades later.

Debates about the reasons for changes in Arctic ice rage on. In a recent paper, Wood and Overland (2010) attempt to explain why the Arctic ice sheet was noticeably diminished during the period 1918–1940 (as noted above). They conclude that the "early climatic fluctuation is best interpreted as a large but random climate excursion imposed on top of the steadily rising global mean temperature associated with anthropogenic forcing." But it could just as easily be concluded that the early warming is best interpreted as a large but random climate excursion imposed on top of the steadily rising global mean temperature associated with Earth's natural recovery from the global chill of the Little Ice Age. Further, there is no reason not to conclude the same about the most recent Arctic warming because, for example, White et al. (2010), in an analysis of past rates of climate change in the Arctic, conclude that "thus far, human influence does not stand out relative to other, natural causes of climate change [italics added]." [Footnote 14: See http://www.nipccreport.org/articles/2010/dec/8dec2010a2.html (viewed December 13, 2010) for additional details.] Recent observations of the decline and subsequent increase in the extent of Arctic sea ice, both historically and more recently (as evident in Fig. 7.2), are not unprecedented and cannot at this time be attributed to anthropogenic warming or some other known cause. The matter still needs to be resolved from a scientific point of view.

Currently available data on polar bear populations are best considered inconclusive in terms of scientists' ability to state that polar bears are threatened by human-caused global warming. It can even be argued that polar bears are not in decline, and that their numbers may even be growing. There are some 20,000–25,000 polar bears in the world, with some 60% found in Canada. [Footnote 15: http://www.polarbearsinternational.org/bear-facts/ (viewed February 25, 2010).] This compares with some 5,000 polar bears in the 1950s. [Footnote 16: http://www.polarbearsinternational.org/ask-the-experts/population/ (viewed February 25, 2010). Note that both this reference and that in the previous note are from a polar bear lobby group website.] Polar bears are divided into 19 subpopulations. According to a 2009 meeting in Copenhagen of the Polar Bear Specialist Group of the International Union for the Conservation of Nature (IUCN), eight subpopulations are considered to be in decline, three are stable, one is increasing, and there is insufficient evidence to determine trends for the remainder. Canada's Western Hudson Bay population has dropped 22% since the early 1980s. Since the number of subpopulations considered to be in decline has grown from five (at the group's 2005 Seattle meeting) to eight in 2009, some argue that polar bears are in decline as a result of global warming. However, the decline cannot be linked directly to global warming, and certainly not to human emissions of greenhouse gases.
Because of improved monitoring and greater efforts at enumerating bear populations, the more recent figures are more accurate than the historical ones; but this is not to suggest that populations in the 1950s were actually four to five times greater than the estimates made by biologists at the time. The reason why polar bears are in decline, if they are at all, is hunting. Over the 5-year period to 2005, an average of 809 bears were hunted annually; this figure covers the 15 of the 19 regions for which information was available. Hunting is permitted by government officials in the various jurisdictions where polar bears are found. For example, with the permission of the Renewable Wildlife Economic Development Office of Canada's Northwest Territories, big-game hunters can purchase a nonresident hunting license and permit to kill a polar bear for $800 (plus 7% tax), although they will need to fly to Inuvik or Tuktoyaktuk and pay unspecified guiding, outfitting and trophy export fees; the overall cost could exceed $35,000 to bag a polar bear. Therefore, if one feels that polar bears are threatened, it is much more efficient to stop hunting than to reduce CO2 emissions, as hunting is a vastly greater threat to polar bears than global warming.

Similar stories can be told of other species. Human hunting of polar bears, bald eagles (see 'Impacts on human health' below), whales and other animals has proven a great threat to the survival of many large wildlife populations. Human development of the habitat of elephants, tigers, bison and other megafauna has contributed to the demise of many species, and will likely continue to do so in the future (van Kooten and Bulte 2000). The introduction of invasive species has also posed an enormous threat to many indigenous species, even causing some to disappear because they cannot compete. These three factors are a greater threat to wildlife populations than global warming. Yet, many wildlife species are extremely resilient, surviving and sometimes even flourishing when temperatures warm. In the case of polar bears, for example, one must ask: how did this species survive previous episodes when there was little Arctic ice, such as during the Medieval Warm Period?

Undoubtedly, some species will not survive under some of the global warming scenarios that are envisioned, but it is not clear which climate outcomes will lead to the greatest loss of species. Further, it is not clear to what extent ecosystems will migrate, or simply disappear, or how quickly changes will take place. If the pace of ecosystem change is slow, many species will be able to survive, migrating with the ecosystem itself or adapting to new conditions. From the point of view of economic analysis, the ideal is to know which species are most in danger of extinction as a result of climate change and the value that global society attaches to their survival. This would require knowing the probabilities attached to various outcomes, the probabilities of each species' demise or survival under each climate scenario, and households' willingness to pay for each of the combinations of various outcomes relative to one another, which would also depend on how their incomes and other choice sets are affected by warming. The point is this: it is impossible to determine the damages that global warming will impose on ecosystems and biodiversity. Attempting to do so as an exercise might be good fun, but it cannot lead to realistic estimates of climate-induced damages.
Therefore, as indicated later in this chapter, economists employ much simpler damage functions in integrated assessment models.

Several years ago, the Sierra Club held a press conference in Victoria to draw attention to the perils of global warming. They showed that much of the city would be flooded if global warming were allowed to continue unabated. Scaremongering, they suggested that sea level would rise by some 100 m or more, which would change the map of Victoria dramatically. Various studies have found evidence one way or another for changes in sea level (depending on the location of the measurements and the time interval chosen), and there is no doubt that sea levels are rising. Indeed, sea levels have been rising ever since the last major ice age, but not as a result of anthropogenic emissions of CO2 and other greenhouse gases. Thus, it needs to be demonstrated that sea levels are now rising faster than historically and that this is attributable solely to human activities. At this stage, one can only conclude that the science is highly uncertain, making economic estimates of potential damages from rising sea levels even more so.

What are the facts? During the twentieth century, sea level rose by some 1.7 ± 0.5 millimeters (mm) per year, or about 17 centimeters (cm) over 100 years. There is evidence suggesting that sea levels have been rising even faster in the past several decades. From 1961 to 2003, average sea levels rose by 1.8 ± 0.5 mm annually, but they rose by 3.1 ± 0.7 mm/year for the sub-period 1993–2003 (IPCC WGI 2007). This difference needs to be placed in proper perspective, however, because the latter measures are based on satellite altimetry observations, whereas the earlier measures are based on tidal gauges. Further, historical evidence indicates that rates of change in sea level vary considerably from one decade to the next, so it is impossible to determine whether the latest observed rates of increase are due to decadal variability or indicative of a longer-term trend, as noted by the IPCC report.

Determining the causes of past sea level rise and what might cause it in the future is not an easy task. Three factors affect changes in sea level. First, as the ocean warms, it expands, causing the sea level to rise. According to the IPCC WGII (2007, p.317), sea surface temperatures (SSTs) might increase by upwards of 3°C. Second, when continental glaciers melt, there is an increase in runoff into the oceans and the sea level rises accordingly. (Melting of Arctic sea ice, in contrast, does not cause sea levels to rise: the ice floats on the water, so its effect is already included in the current sea level and its melting neither raises nor lowers the sea.) Unlike the previous two, the third factor could lead to a reduction in sea level: as global warming occurs and the oceans themselves get warmer, there is greater evaporation and, as a consequence, greater precipitation, which causes a buildup of glaciers and a reduction in sea levels. It is not clear what influence each factor has had on past sea levels or will have on future sea levels. Data for the period 1961–2003 attribute 0.42 mm/year (23.3%) of the rise to thermal expansion of the oceans and 0.69 mm/year (38.3%) to loss of mass from glaciers, ice caps, and the Greenland and Antarctic ice sheets; for the period 1993–2003, 1.6 mm/year (51.6%) was considered to be due to thermal expansion and 1.2 mm/year (38.7%) due to ice melt (IPCC WGI 2007, p.419).
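The unexplained residual implied by these IPCC figures can be checked directly:

```python
# Sea-level "budget" implied by the IPCC figures quoted above; the residual
# is the portion of the observed rise left unexplained.
observed = {"1961-2003": 1.8, "1993-2003": 3.1}                  # mm/yr
explained = {"1961-2003": 0.42 + 0.69, "1993-2003": 1.6 + 1.2}   # thermal + ice
for period, obs in observed.items():
    resid = obs - explained[period]
    print(f"{period}: residual {resid:.2f} mm/yr = {resid / obs:.0%} of observed")
# 1961-2003: residual 0.69 mm/yr = 38% of observed
# 1993-2003: residual 0.30 mm/yr = 10% of observed
```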
Thus, roughly 40% of the observed rise over 1961–2003 (though only about 10% over 1993–2003) cannot be explained, indicating that something else must be going on. In addition to these factors, the distribution of water between oceans causes some areas to experience a larger increase in sea levels than others, with some even experiencing a decline. This 'redistribution' of water is attributed to various factors, including the Pacific Decadal Oscillation (PDO), the atmospheric-driven North Atlantic Oscillation (NAO), the El Niño-Southern Oscillation (ENSO), which occurs on average every 5 years but varies from 3 to 7 years, and ocean currents (IPCC WGI 2007, pp.416–417).

Church and White (2006), for example, found that there was a slight acceleration in sea level rise of 0.013 ± 0.006 mm/year² throughout the period 1870–2004 (i.e., each year the rate of sea level rise would increase by 0.013 mm/year). If this acceleration continued to 2100, sea levels would rise by 28–34 cm; a back-of-envelope version of this extrapolation appears below. In a more recent study, Siddall et al. (2009) used a temperature-sea level model based on 22,000 years of data to predict sea-level rises of 7–82 cm by the end of the twenty-first century for respective temperature increases of 1.1 and 6.4°C. However, in a retraction (Siddall et al. 2010), they point out that, as a result of unforeseen errors related to the size of their time step for the twentieth and twenty-first centuries, and a failure to account for the rise in temperatures consequent upon coming out of the Little Ice Age, the projected increases in sea level for the period to 2100 are overstated (although their simulations for the remaining periods remain valid). Sea levels are forecast by the IPCC WGI (2007, p.750) to rise by 18–59 cm by 2100, or by somewhat more in a worst-case scenario. This translates into an increase of 1.8–5.9 mm/year, implying that the rise in sea level in the next century will range from similar to that experienced in the past century to as much as 3.3 times higher, depending on whether average global temperatures rise by a projected 1.2°C or 4.0°C. Certainly, despite fears to the contrary, the projected increase in sea level is manageable. For example, during the 1960s, the city of Hamburg in Germany experienced an increase in storm surges of more than half a meter as a result of a narrowing of the Elbe River. The city easily countered this by building dykes – a simple solution.

Where do the fears of unprecedented sea-level rise originate? They originate with the possible collapse of the largest mass of ice in the world – the West Antarctic Ice Sheet (WAIS). Doomsday scenarios postulate the sudden collapse of the WAIS, which would lead to increases in sea level measured in meters rather than centimeters, although the projection would be an increase in sea levels of about 5 m (IPCC WGI 2007, pp.776–777). However, there is no evidence that the WAIS collapsed in the past 420,000 years, despite temperatures that were significantly higher than any experienced in human history, and there is no scientific basis for fearing that a collapse of this ice sheet is imminent as a result of projected global warming (Idso and Singer 2009). Turner et al. (2009) find that, despite rising mean global temperatures and rising atmospheric CO2, "the Antarctic sea ice extent stubbornly continued to just keep on growing."
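Returning to the Church and White (2006) acceleration quoted earlier, a rough extrapolation is as follows; the 1990 start year and the use of the twentieth-century mean rate as the base are our assumptions, so the result only approximates the 28–34 cm quoted above.

```python
# Rough extrapolation of the Church and White (2006) acceleration.
base_rate = 1.7      # mm/yr, 20th-century average rise (assumed base rate)
accel = 0.013        # mm/yr^2, estimated acceleration
years = 2100 - 1990  # projection horizon; start year is our assumption
rise_mm = base_rate * years + 0.5 * accel * years**2
print(f"projected rise by 2100: {rise_mm / 10:.0f} cm")  # ~27 cm
```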
As for the ice sheets themselves, the IPCC WGI (2007, pp.818–819) likewise indicates that there is little likelihood that the WAIS will collapse during the twenty-first century. The other major concern is the Greenland Ice Sheet. In this case, the IPCC indicates that the melting caused by higher temperatures will exceed additions due to increased snowfall, perhaps with the rate of net melting increasing over time, but this will occur slowly over the next several centuries. The expected increase in global sea levels due to a total melting of the Greenland Ice Sheet is about 7 m (IPCC WGI 2007, pp.818–819). There is a great deal of uncertainty regarding the potential extent of sea level rise because of the dynamical behavior of ice sheets. In essence, too little is known about the processes at work inside the large ice caps, and about the impact of increased precipitation resulting from warmer temperatures, for scientists to make definitive statements about sea-level rise. Thus, increases in sea levels experienced in the past, plus observed rates of increase, are the best predictors of future sea-level rise. These suggest that potential future increases in sea levels will be manageable. Even an unprecedented loss of ice sheets over the next century is very unlikely to raise sea levels by more than a few meters, and certainly not the 100 m envisioned by some environmental groups.

The best study of sea level rise was conducted by the PALSEA (PALeo SEA level) working group (Abe-Ouchi et al. 2009). Given that the sea-level rise predicted for the twenty-first century is considered one of the greatest potential threats from climate change, with the absolute worst-case scenarios varying between 0.59 and 1.4 m, PALSEA asks whether runaway sea level rise is likely. Based on information about sea level rise in the concluding years of the last glacial period, Abe-Ouchi et al. (2009) conclude that, if climate models are correct in their temperature projections, sea levels will rise quickly in the early part of the twenty-first century, but then level off to a much smaller rate of increase. According to these experts, sea levels are certainly not expected to rise exponentially, as suggested by many climate change alarmists (including Greenpeace). [Footnote 21: See also http://thegwpf.org/science-news/1837-no-cause-for-alarm-over-sea-level-or-ice-sheets.html (viewed November 14, 2010).]

Finally, as was the case with the temperature data (Chaps. 2 and 3), there is evidence that contradicts the notion that sea levels are rising, or at least rising as quickly as indicated. [Footnote 22: There is much debate about sea levels in Australia and the South Pacific. Some of it is clearly rhetoric, but some is also based on empirical evidence. See http://www.warwickhughes.com/blog/?p=283 and http://www.bom.gov.au/pacificsealevel/ (June 1, 2010). The latter is a good source of data.] For example, a study of sea level trends on 12 Pacific islands found that cyclones and tsunamis induced false readings that should have been ignored when calculating a trend. Further, these extreme weather events also disrupted the leveling of the equipment. As a result, readings from years characterized by cyclones and tsunamis, and from the period until the equipment could be tested and recalibrated, affected the calculation of trends. When the effects of extreme weather are taken into account, the measured rise in sea levels disappears, as illustrated in Table 7.1.

It is impossible to attribute extreme weather events to global warming.
The Fourth Assessment Report of the Intergovernmental Panel on Climate Change argues as follows: "Single extreme events cannot be simply and directly attributed to anthropogenic climate change, as there is always a finite chance that the event in question might have occurred naturally. However, when a pattern of extreme weather persists for some time, it may be classed as an extreme climate event, perhaps associated with anomalies in SSTs (such as an El Niño)" (IPCC WGI 2007, p.310). Notice that the IPCC leaves room to interpret extreme weather events as attributable to global warming, presumably the result of human emissions of greenhouse gases. Rather than making a statement that rules out a link between climate change and extreme weather events, the IPCC prefers to leave open the possibility that any single extreme weather event is part of a pattern that could be attributed to global warming. [Footnote 23: In the Third Assessment Report of 2001, the IPCC points out that there is no evidence of increased storm events (IPCC WGI 2001, pp.162–163, p.664) and that, despite progress in climate modeling, current GCMs are not up to predicting increased future storm or weather events (IPCC WGI 2001, pp.573–575). It is unlikely that much changed in the intervening 6 years to convince scientists otherwise (although see below).]

Few reasonable scientists would attribute individual weather events, whether extreme or not, to global warming, simply because it is impossible, as the IPCC recognizes, to determine whether the event would have occurred anyway had there been no change in climate whatsoever. Consider the probability that a particular weather event occurs – say, a category 4 hurricane (see Table 7.2). Suppose that, on average, 40 storms develop in a particular hurricane season (June 1 through November 30) in the North Atlantic and that the current probability of a storm reaching a particular intensity is given in the second-to-last column of Table 7.2. Thus, only 40% of storms develop into hurricanes, many of which never make landfall. Using these assumptions, in any given hurricane season one expects one storm that reaches category 5, between five and six that attain a category 3 or 4 rating, four category 2 hurricanes, six category 1 hurricanes, and some 24 other tropical storms or depressions. This is more than found in the usual hurricane season and, of course, it is important to remember that a large proportion may never make landfall.

The effect of rising temperatures on the number of tropical storms and hurricanes, and on their intensity, is unknown. (We consider this in more detail below.) Suppose, however, that rising temperatures cause the above probability distribution to shift slightly, so that more intense hurricanes appear more frequently. Further, assume that the probability of a storm occurring increases so that, rather than an average of 40 storms per year, 45 storms are now expected. In that case, in any given hurricane season one expects one storm that reaches category 5, between six and seven that attain a category 3 or 4 rating, between four and five category 2 hurricanes, between six and seven category 1 hurricanes, and some 26–27 other tropical storms or depressions. The expected counts under both scenarios are reproduced in the sketch below.
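The expected counts in the two scenarios can be computed as follows; note that the category probabilities here are back-calculated from the counts given in the text, not taken from Table 7.2 itself.

```python
# Expected storm counts by category under the text's two scenarios.
# Probabilities are illustrative, chosen to reproduce the counts quoted above.
p_now = {"cat5": 0.025, "cat3-4": 0.1375, "cat2": 0.10,
         "cat1": 0.15, "other": 0.5875}          # mean 40 storms/season
p_warm = {"cat5": 0.025, "cat3-4": 0.145, "cat2": 0.10,
          "cat1": 0.145, "other": 0.585}         # mean 45 storms/season

for label, (n, probs) in {"current": (40, p_now), "warmer": (45, p_warm)}.items():
    expected = {cat: round(n * p, 1) for cat, p in probs.items()}
    print(label, expected)
# With Poisson-like year-to-year noise (sd ~ sqrt(40) ~ 6.3 storms), a shift
# of 5 storms/yr and slightly fatter intensity tails take many years of
# observations to detect, which is the point made in the text.
```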
Yes, there is a slight increase in nearly every category of weather event, but it would require many years of observations to determine whether any given weather event had been drawn from one of the following three probability distributions: (1) the original distribution, where the mean number of storm events per year was 40; (2) the original distribution but with a mean of 45 annual storm events; or (3) the after-warming probability distribution, with an annual average of 45 storm events and the distribution of storm intensities given by the last column of Table 7.2. It is nearly impossible to attribute any single extreme weather event, and even a sequence of events, to anthropogenic climate change; there is simply insufficient information about how global warming affects the above types of probability distributions.

A 10-year moving average of the number of hurricanes and tropical storms arising in the North Atlantic Ocean from 1851 through 2009 is provided in Fig. 7.3. 24 Also provided in the same figure is a 10-year moving average of the Atlantic tropical storms and hurricanes coming within 50 nautical miles of the U.S. coast or actually striking the U.S. Interestingly, the number of Atlantic storms rose between 1850 and about 1900, falling back to the earlier number by about 1915. Numbers rose during the 1920s and 1930s, only to level off until 1995, when storms appeared to increase rapidly for some 10 years before falling again to the end of the record (2009); the number of storms in the Atlantic appears to track increases in global temperatures for at least part of the record, as can be seen by comparing the dark line in Fig. 7.3 with the 10-year moving average of global temperatures in Fig. 2.5. A similar pattern is observed for the number of storms striking the U.S. or, at least, coming within 50 nautical miles of the coast and thus having some, perhaps only minor, impact. This variable is indicated by the thin line in Fig. 7.3. For storms affecting the U.S., however, it is much more difficult to discern a trend that might be related to climate change. The reason for the difference might be that the reported increase in Atlantic storms is the result of better measurement methods, including the use of satellites, rather than more storms per se.

Hurricane Katrina had been downgraded to a category 3 by the time it struck and devastated much of New Orleans on August 29, 2005 (killing 1,833 people and causing more than $100 billion in damages). It was seen as a harbinger of more frequent and fiercer storms to come, all to be attributed to anthropogenic climate change. In Fig. 7.4, we provide a plot of all tropical storms and hurricanes that actually made landfall in the United States, and a plot of only the category 3, 4 and 5 hurricanes to make landfall; both plots are based on 10-year moving averages. From the figure, it is clear that hurricanes affected the U.S. more frequently in the period 1890-1970 than thereafter. In the earlier period, there was an average of 1.84 hurricanes per year, while the average after 1970 was 1.56 (and 1.70 for the 20 years 1990 through 2009). The number of really severe hurricanes (category 4 and 5) declined from an average of one every 5 years during 1890-1970 to one every 12 years thereafter.

24 Source of data for Figs. 7.3, 7.4 and 7.5: NOAA Coastal Services Center at http://csc-s-maps-q.csc.noaa.gov/hurricanes/download.jsp (viewed June 23, 2010). Also see Davis et al. (1984) for an explanation of the data.
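The 10-year moving averages plotted in Figs. 7.3, 7.4 and 7.5 involve nothing more than a trailing-window mean. A minimal sketch, applied to a made-up annual count series:

```python
# Trailing 10-year moving average of annual storm counts (invented data).
import numpy as np

counts = np.array([8, 11, 7, 14, 10, 9, 12, 15, 8, 10, 13, 16, 9, 11, 12])
window = 10
moving_avg = np.convolve(counts, np.ones(window) / window, mode="valid")
print(np.round(moving_avg, 1))  # one value per complete 10-year window
```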
Clearly, there is no discernible trend over the more than 150 years of data, and it is impossible to attribute hurricane events such as Katrina to anthropogenic global warming. Of course, based on Fig. 7.3, more of the tropical storms and hurricanes could have struck Caribbean islands, Mexico or countries of Central and South America, but historical data regarding such events are not available.

We turn now to cyclones in the eastern and central Pacific Ocean for additional information on trends in storminess. In Fig. 7.5, we plot the annual numbers of tropical storms and hurricane-force storms in the eastern and central Pacific for 1949 through 2009, along with a 10-year moving average of total storms. 25

To determine the impact of global temperatures on storm events, we regress the number of storms in each year on the annual global HadCRUT3 temperature series (which runs from 1850 through 2009) and on year, where year is used to capture a secular trend independent of temperature (e.g., from better observations of off-shore storms). The results are provided in Table 7.3 for two regression models. If Atlantic and Pacific storms are considered to be related because of common factors, such as an El Niño, this is taken into account by estimating the two storm equations simultaneously, assuming that the error terms are correlated (see Greene 2008). In that case, we can only use 61 of the 159 annual observations for the North Atlantic because there are only 61 annual observations of storms in the Pacific. We also employ an independent, single-equation linear regression model for each of the Atlantic and Pacific storm series. Notice that, when the equations are estimated simultaneously, the estimated coefficients do not change, but their estimated standard errors (provided in parentheses) are smaller, indicating a higher level of confidence in the estimated values (as evidenced by a lower probability in the square brackets).

The results indicate that storms in the North Atlantic are positively correlated with higher temperatures, as measured by the HadCRUT3 temperature data. The estimated coefficient for the period 1851 through 2009 is statistically significant at the 1% level, but the statistical significance of the coefficient drops to a significance probability of more than 10% for the period 1949-2009. At the same time, the estimated coefficient falls, indicating that the effect of increasing temperature is smaller for the period 1949-2009 than for the entire period 1851-2009. This is surprising when compared with Fig. 7.4, but it comes about because of the secular time trend, which is more pronounced in the latter period than in the former. Further, even though the model is appropriate (as determined by the goodness-of-fit statistics), it explains less than 15% of the variation in storm activity for the period 1949-2009 and less than 23% for the period 1851-2009. Clearly, factors other than temperature are affecting storm formation in the North Atlantic. If we turn to events in the eastern and central Pacific Ocean, we find that, for the period for which data are available, storm events are inversely correlated with temperature once adjusted for secular trends.

25 Source: http://csc-s-maps-q.csc.noaa.gov/hurricanes/download.jsp (viewed June 23, 2010). Earlier data are not available.
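The single-equation (OLS) specification in Table 7.3 can be sketched as follows. Synthetic data stand in for the NOAA storm counts and the HadCRUT3 anomalies; the SUR variant (Stata's sureg) would simply estimate the Atlantic and Pacific equations jointly with correlated errors.

```python
# OLS sketch: annual storm counts regressed on global temperature anomaly
# and a secular time trend. Data are synthetic stand-ins, not the NOAA/HadCRUT3 series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
year = np.arange(1851, 2010)
temp = -0.4 + 0.005 * (year - 1851) + rng.normal(0, 0.1, year.size)   # anomaly, deg C
storms = 8 + 4.0 * temp + 0.02 * (year - 1851) + rng.normal(0, 3, year.size)

X = sm.add_constant(np.column_stack([temp, year]))
fit = sm.OLS(storms, X).fit()
print(fit.params)    # [constant, temperature coefficient, trend coefficient]
print(fit.rsquared)  # cf. the low R-squared values reported in the text
```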
The model with temperature and trend as regressors explains nearly 35% of the variation in storm activity. Further, the inverse effect of temperature on storm activity in the Pacific is highly statistically significant in both the OLS and SUR models. When we regress storms that affected the United States on temperature and trend, we find no statistically significant relation whatsoever, which is why these results are not reported. The same is true if we look only at the category 3, 4 and 5 hurricanes impacting the U.S. That is, neither temperature nor year could explain storm activity affecting the U.S. Atlantic coast. This is also evident from Fig. 7.4.

Notes to Table 7.3: (a) Two models are estimated: linear or ordinary least squares (OLS) regression, and seemingly unrelated regression (SUR), where the error terms in the two equations are assumed to be correlated. Regression was conducted in Stata 10 using the 'sureg' and 'regress' commands. An explanation of the regression models can be found in Greene (2008), for example. The standard error of each estimated coefficient is provided in round parentheses and the associated probability in square brackets. (b) The Chi-square statistic (χ²) measures overall goodness of fit of the estimated model for the SUR regressions, while the F-statistic does the same for the OLS regressions.

In conclusion, despite our finding that rising global temperatures appear to affect storm activity in the North Atlantic, there is no similar evidence from the Pacific Ocean, and there is no evidence that the number of storms impacting the U.S. has increased as a result of climate change. Further, the regression analyses indicate that other factors not considered here are more important determinants of storm activity. Overall, one must conclude that there is no convincing evidence that extreme weather events are impacted by climate change. More information must be gathered before any such conclusion can be reached, and that might well take another 50 years of observations.

Nonetheless, two studies recently examined the impact of human activities on the incidence of extreme precipitation events using output from climate models (Min et al. 2011; Pall et al. 2011). Using Hadley Centre grid-point data on 1-day and 5-day precipitation accumulation events for 49 years, and precipitation outputs from an ensemble of climate models for the same grid points and years, Min et al. (2011) find that the model outputs track actual precipitation extremes rather closely when the climate simulations include CO2, but not when CO2 is absent from the simulations. The authors conclude that this shows human activities are responsible for extreme weather events. Pall et al. (2011) consider the probability that the floods in England and Wales during autumn 2000 were the result of anthropogenic climate change. The authors used forecasts from the Hadley climate model to obtain temperature and precipitation forecasts; these were fed into a precipitation-runoff model to simulate daily river runoff and the potential occurrence and magnitude of floods. The climate model was run using sea surface temperatures, atmospheric greenhouse gas levels and sea ice levels found in the year 2000, and again with conditions as they existed (or were presumed to exist) in 1900. In each case, the climate model was run several thousand times for a full year, with runs differing according to their starting values.
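The attribution logic of Pall et al. (2011) reduces to comparing exceedance probabilities across the two ensembles. A hedged sketch, with invented runoff distributions and an invented flood threshold standing in for their model output:

```python
# Ensemble-attribution sketch: exceedance probabilities of a flood threshold
# under year-2000 forcings versus counterfactual 1900-style forcings.
# All distributions and numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
runoff_2000 = rng.gamma(shape=2.0, scale=30.0, size=5000)  # simulated peak runoff
runoff_1900 = rng.gamma(shape=2.0, scale=25.0, size=5000)
threshold = 150.0                                          # hypothetical flood level

p1 = np.mean(runoff_2000 > threshold)
p0 = np.mean(runoff_1900 > threshold)
print(f"risk ratio p1/p0 = {p1 / p0:.2f}")
print(f"fraction of attributable risk = {1 - p0 / p1:.2f}")
```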
The authors concluded: "The precise magnitude of the anthropogenic contribution remains uncertain, but in nine out of ten cases … results indicate that twentieth century anthropogenic greenhouse gas emissions increased the risk of floods occurring in England and Wales in autumn 2000 by more than 20%, and in two out of three cases by more than 90%." Although interesting, this research is certainly not conclusive, for several reasons. Climate models have been calibrated so that they can replicate the recent past. As noted in Chap. 5, this makes such models less suitable for predicting the future; because they can replicate the past, however, it is not surprising that they provide a decent tracking of past precipitation. Earlier in this chapter we indicated that sea ice varies considerably and that obtaining information on its extent in the early 1900s is fraught with uncertainty. The same holds for sea surface temperatures, which are influenced by climate events such as El Niño. Thus, it is anyone's guess what sea ice extent and sea surface temperatures were in 1900. Finally, climate model replications of the past cannot substitute for actual observations. To determine the effects of human activities on the risks of high precipitation events and flooding, it is necessary to use observed and not simulated data. Records from the UK Met Office show no upward trend in UK rainfall between 1961 and 2004. While autumn 2000 rainfall was unusual, it was exceeded in 1930, while 1768 and 1872 were wetter than 2010. Nor do historical UK precipitation data provide evidence of an upward trend. 26 Likewise, Chu et al. (2010) found that extreme precipitation events in Hawaii were declining, while Xie et al. (2010) found no trend in the size of hail stones in China; larger hail stones are indicative of extreme weather events. Meanwhile, Czymzik et al. (2010) used data from lake bed sediments to determine that flood events in Germany actually declined over a 450-year period; indeed, the worst flooding occurred during cold periods and not warm periods. Frustrated by the lack of evidence for a human responsibility in bringing about climate disasters, Bouwer (2011) pleads for the use of models rather than actual evidence: "Lacking significant impact from anthropogenic warming so far, the best way to assess the potential influence of climate change on disaster losses may be to analyze future projections rather than historical data."

Health is potentially an area where there might be significant damages from global warming. Global warming poses a threat to human health primarily because of the projected spread of malaria and other tropical diseases. However, tropical diseases are a problem of economic development and preventative health, not of rising global temperatures. The West Nile virus has spread into cold regions (including Canada), while malaria killed thousands of Russians even at the Arctic Circle in the 1920s. Both Canada and the United States experienced malaria as late as the 1950s, while the last cases in The Netherlands occurred in the 1970s (Spielman and D'Antonio 2001, pp. 116-137). 27 Malaria was eradicated in northern countries not because the mosquitoes carrying the disease could no longer breed there, but because those countries were able to treat people with malaria using quinine, drain the swamps where malaria-carrying mosquitoes bred, and spray chemicals in the areas with the highest malarial incidence (Spielman and D'Antonio 2001).
Indeed, malaria has been in recession in many locations. That is, despite rising global temperatures over the past century or more, the range of malaria has shrunk because of economic development and disease control (Gething et al. 2010). Mosquitoes and malaria know no boundaries, even without global warming. Malaria remains a deadly disease, annually infecting nearly 500 million people and killing between two and three million, mainly children. In developing countries, mosquitoes carrying malaria are best controlled using a chemical discovered in 1939: dichloro-diphenyl-trichloroethane, commonly known as DDT. 28 DDT spraying began in earnest in 1958 and, despite its success in having eradicated malaria in developed countries by about 1967, was banned worldwide in 1973 because of the dangers it was thought to pose, having been discovered in breast milk and thought to be linked to declining numbers of bald eagles. 29 Although a persistent organic pesticide, the main problem with DDT may have been its indiscriminate use. In developing countries, indiscriminate use, without care to prevent the re-introduction of mosquitoes from non-treated areas, reduced the effectiveness of spraying programs and caused mosquitoes to develop some resistance to the chemical. DDT use has again been permitted since 2000, but lobbying against its use continues, making it difficult to implement effective programs. The chemical can be used effectively if applied to the walls of homes and to mosquito nets, but it may take a long time to bring the parasite under control, let alone eradicate it, and then only if countries work in concert.

Malaria has little if anything to do with climate change. Rather, as noted above, it is a problem of development and health care. This is not to say that global warming will have no impact on malaria, but it will be difficult to discern the effect of climate change on the disease relative to other factors, as non-climate factors have profoundly confounded the relationship between geographical climate and malarial outbreak; indeed, the relationship between climate and malarial endemicity has effectively been decoupled (Gething et al. 2010; Spielman and D'Antonio 2001).

28 Dr. Paul Hermann Müller discovered the insecticide qualities of, and patented, DDT, for which he won the 1948 Nobel Prize in Medicine (Fisher et al. 2003). DDT was effective against malaria and yellow fever. The 1973 ban on DDT is considered by some to have led directly to the death of more than 90 million people, if one assumes some 2.5 million people per year die of malaria. This is the legacy some have attributed to the 1962 best-selling book, Silent Spring, by Rachel Carson, who was not a scientist but an excellent writer.

29 The link between DDT and declining populations of bald eagles has subsequently been proven false. First, it appears that bald eagles were nearly hunted to extinction, with concern expressed as early as 1921. A total ban on hunting was put in place in the U.S. with the Bald Eagle Protection Act (1940). As a consequence, populations began to rise, with Marvin (1964) reporting that overall bird populations had risen by 25% between 1941 and 1960, including the particularly vulnerable robin, despite many years of DDT spraying. Various studies indicated that DDT did not have an adverse impact on bald eagles, nor was there evidence that thin egg shells were correlated with DDT (see Coon et al. 1970; Reichel et al. 1969). Cromartie et al. (1974) found that, in a sample of 37 bald eagles found dead in 1971-1972, 13 had been shot and 13 had died of insecticide poisoning (dieldrin and thallium), but no deaths were directly attributable to DDE (dichloro-diphenyl-dichloroethylene), a breakdown product of DDT. Likewise, Belisle et al. (1972) found that, in a sample of 39 bald eagles found dead in 1969-1970, only one died from DDE, 18 had been shot and 6 died from dieldrin, an alternative insecticide to DDT. See also E.J. Gordon and S. Milloy, "100 Things You Should Know about DDT" at http://www.junkscience.com/ddtfaq.html (viewed May 25, 2010). Yet a Google search of DDT and bald eagles finds that the majority of websites (mainly associated with environmental groups) maintain that DDT almost led to the demise of the eagle, when the scientific literature attributes the cause to legal and later illegal hunting.
This makes economic estimates, based on coupled climate-biology models, of the potential damages from malaria in a warmer world extremely speculative, and certainly orders of magnitude higher than warranted by empirical observations (Gething et al. 2010). Similar comments can be made about dengue fever and other diseases and parasites that are also spread by mosquitoes. While respiratory ailments could increase as a result of global warming, new threats to health, such as severe acute respiratory syndrome (SARS), are not necessarily related to global warming; they are more a matter of globalization than climate change. Many diseases and pests (such as West Nile virus) are spreading regardless of climate. It is also not at all clear that current tropical diseases and pests will increase their range as a result of global climate change, except perhaps in developing countries, which cannot cope because they lack public health infrastructure; they are poor, and the remedy is to increase their incomes, not to rely on mitigation to prevent a higher incidence of disease. The question here is whether funds currently meant to mitigate greenhouse gas emissions could be better spent in Africa, say, improving the public health care infrastructure and fighting AIDS, improving access to quality drinking water, or simply providing DDT to apply to walls and netting to reduce the incidence of malaria. Indeed, the Copenhagen Consensus concerning the world's biggest problems found that AIDS and water quality improvements, and several other issues facing global society, were greater problems than climate change (Lomborg 2007b). If the principal objective of climate change policy is to help poor countries and future generations, a better strategy might be to direct the funds spent on mitigation by industrial countries toward improving incomes in developing countries. This is the position taken by Lomborg (2007a, b) and others.

In terms of empirical studies of the costs to health of climate change, Moore (1998) estimates that an average global temperature increase of 4.5°C will yield some $30-$100 billion in health benefits (not losses) to U.S. residents. Goklany (2008, 2009) also reports that global warming actually reduces mortality rates, as fewer people will die from exposure to cold temperatures. It seems that people are better able to cope with warmer temperatures than with colder ones. The summer of 2003 was a particularly warm one, and in Europe many deaths were attributed to the heat.
However, as elsewhere, those whose deaths are attributable to heat are generally the elderly and weak, who were more than likely to have died from other causes in the following months. This is not to deny the value of their lives, only to note that exceptionally warm weather may simply have been a factor triggering a mortality that was inevitable within the next several months. This hypothesis can be verified empirically by comparing the incidence of death among various age categories before, during and after a period of exceptional heat. The same is true of cold periods. By controlling for access to central heating, air conditioning, age, health status, and other non-climate factors, it is possible to determine the effect that climate (unusual cold or heat) has on mortality.

In Tables 7.4 and 7.5, we provide data on deaths from various weather-related events. Prior to 1989, droughts were by far the most important contributing factor to mortality, mainly because poor countries were least able to cope with drought. Floods were the second most important weather-related cause of death, followed by windstorms, again because developing nations are least able to prevent such natural disasters. Extreme temperatures ranked sixth out of seven weather-related events as a contributor to mortality. For the period 1990 through 2006, death rates from all weather-related causes dropped dramatically as nations learned how to cope with severe weather events and as a result of relief efforts by rich countries. Annual mortality from drought and floods fell significantly, while it rose for the other five weather events; however, annual death rates fell for all categories, with the exception of extreme temperature events. Global average annual death rates fell by some 7-99%, but rose by several hundred percent in the case of extreme temperature events. While one might conclude that extreme temperature events refer to heat waves, such as the one in Europe in 2003, it turns out that more people die from extreme cold than from heat, as indicated in Table 7.5; almost twice as many people in the U.S. died from extreme cold as died from extreme heat over the period 1979-2002. Further, based on U.S. data, weather-related mortality is extremely low, with severe weather events accounting for less than 0.06% of U.S. deaths during 1979-2002.

Given that Arizona and Nevada have been the fastest growing states in the United States, it appears that people express a preference for living in warm (even hot) and dry climates. Empirical measures of the values of these amenities are generally lacking. Maddison and Bigano (2003) found that, in Italy, higher summer temperatures are regarded negatively, as are lower January temperatures and higher January precipitation. Rehdanz and Maddison (2009) use a hedonic pricing model to determine that Germans prefer warmer and drier winters; however, they could find no statistically significant gain or loss to Germans from IPCC-projected changes in climate. Lise and Tol (2002) found that people have a preference for warmer climates, as evidenced by their choices of vacation destinations. One can only conclude that there exists no firm information about the economic effects of climate on health and amenity values; any conclusions are speculative at best. Hamilton and Tol (2007) use an econometric simulation model to show that, under climate change, tourism in Ireland and the UK would shift northwards, while in Germany it would shift towards the south.
Initially, the UK and Ireland would lose some international tourists but gain domestic ones; as climate change continues, however, there would be growth in international tourists as northern Europe warms.

Some researchers have regressed countries' GDP levels on their mean temperatures and a variety of control variables, including the latitudes of capital cities (a control to account for differences in development opportunities between countries). For example, Choiniere and Horowitz (2000) regress per capita GDP on average temperature for 1980, 1985 and 1990 using a double-logarithmic functional form. Their conclusion is that the effect of temperature has become more pronounced over time, not less; that is, they find that the world might become more vulnerable to changes in climate over time. Using a similar approach, Horowitz (2001) further reports that a 3°F (1.67°C) increase in temperature leads to a 4.6% decline in global GNP. There are several problems with this analysis. First, as the authors themselves point out, the average temperature taken at the capital city of a country may not be representative of the average annual temperature of the country as a whole. Second, and perhaps more important, the authors neglect the fact that temperatures in a given year, or even over a decade, may not be representative of the temperatures that the country or region has historically experienced and might experience in the future. Climate is not the same as weather, nor is climate variability the same as weather variability. Average temperature in any given year may be an anomaly, as may the change in average annual temperatures over any 5- or 10-year interval. Further, for many regions it is not the average annual temperature that matters most: regions that experience large differences between summer and winter temperatures incur higher costs from such things as increased road and other infrastructure repairs plus heating and cooling needs. Finally, there is no causal mechanism, precipitation is ignored, and amenity values are neglected. In particular, citizens in some countries might simply desire warmer and drier weather.

With some exceptions, economists take the view that meteorological, atmospheric and ocean science are outside their realm of expertise, and they accept without qualification the science of climate change: that human emissions of greenhouse gases cause global warming and that, if we want to stop warming, we need to control such emissions. Economists then attempt to balance the costs and benefits of climate change, focusing on what an optimal economic response might look like. William Nordhaus of Yale University has led the way by developing an integrated assessment model to guide policy makers (Nordhaus 1991, 1994, 2008). He summarizes his position, and that of most economists, as follows: "Global warming is a serious, perhaps even a grave, societal issue [and] there can be little scientific doubt that the world has embarked on a major series of geophysical changes that are unprecedented in the past few thousand years. … A careful look at the issues reveals that there is at present no obvious answer as to how fast nations should move to slow climate change. Neither extreme - either do nothing or stop global warming in its tracks - is a sensible course of action. Any well-designed policy must balance the economic costs of actions today with their corresponding future economic and ecological benefits" (Nordhaus 2008).
One method economists use to determine what policies to pursue in addressing climate change is the integrated assessment model (IAM), which seeks to balance the costs and benefits of taking action. An integrated assessment model is essentially a constrained mathematical optimization model. The present (discounted) value of social wellbeing is maximized subject to dynamic and static constraints that represent the potential damages, production possibilities, and interactions among markets and world regions. The objective function includes the sum of consumer and producer surpluses (recall Chap. 6), the potential damages from global warming as a function of temperature, and the costs of mitigating climate change. Damages are a function of temperatures which, in turn, are a function of the level of greenhouse gas (CO2e) emissions in each period of the model. The level of CO2e emissions in each period is affected by the technology, which is generally determined outside the model, and by a carbon tax that provides incentives to reduce emissions. The level of the carbon tax and its rate of increase are determined endogenously as the global economy seeks to optimize the objective function by trading off emission reductions against the amount of tax to be paid. Different climate scenarios can be examined by varying assumptions concerning the technology (CO2 emissions per unit of output), growth in population, and so on (see Chap. 4).

In a series of books and articles, Nordhaus developed two IAMs that have been used by many economists to examine the costs of climate change and the optimal level of mitigation (Nordhaus 1991, 1994, 2008): DICE (Dynamic Integrated model of Climate and the Economy) and RICE (Regional dynamic Integrated model of Climate and the Economy). To obtain some idea of what this involves, consider the following outline of the structure of DICE (Nordhaus and Boyer 2000). The objective is:

maximize W = Σt d(t) N(t) log[c(t)],    (7.1)

where N(t) is the global population at time t, c(t) is per capita consumption over period t, and d(t) is the social rate of time preference or discount factor for period t. Since the time step is 10 years, the discount rate represents a 10-year rate; further, the discount rate is assumed to differ from one period to the next, although it could also be kept constant or could fall over time (see Chap. 6). Population is also modeled to grow over time, although it too can be held constant or adjusted so that it falls over time, or rises and then falls. These are assumptions in the model and each is represented by one or more equations (not shown here). A Cobb-Douglas or double-logarithmic production function is assumed, but it is adjusted by climate factors. It takes the following form in DICE:

Q(t) = {[1 − b1(t) μ(t)^b2] / [1 + D(t)]} A(t) K(t)^γ N(t)^(1−γ),    (7.2)

where Q(t) is output (global GDP), A(t) is total factor productivity or technology in period t, K(t) is the total capital stock at time t, D(t) are climate-related damages as a fraction of net output, μ(t) is the industrial emission control rate, b1(t) is the coefficient on the control rate in the abatement cost function (which changes over time), b2 is the exponent on the control rate (a parameter that is fixed over time), and γ is the elasticity of output with respect to capital. A key equation is the damage function:

D(t) = θ1 T(t) + θ2 T(t)²,    (7.3)

where damages increase as a quadratic function of global mean temperature, T(t), at time t, and θ1 and θ2 are parameters of the damage function.
The parameters are calibrated ('guessed at') rather than statistically estimated because data are lacking. Human emissions of CO2, denoted E(t), are a function of output, the base-case ratio of industrial emissions to output, σ(t), and the industrial emissions control rate:

E(t) = σ(t) [1 − μ(t)] Q(t).    (7.4)

Total consumption in a given period, C(t), is determined by output in that period, Q(t), minus investment in maintaining and/or enhancing the capital stock, I(t), and minus the amount paid as carbon taxes:

C(t) = Q(t) − I(t) − τ(t) E(t),    (7.5)

where τ(t) is the tax on emissions ($ per tCO2) in period t, although (7.5) can be modified so that the term τ(t)E(t), or total tax paid, is expressed as the cost of purchasing permits. The tax rate or level of emission permits constitutes the policy variable in the IAM. The remaining constraints deal with the change in the capital stock from one period to the next (taking into account deterioration of the capital stock and new investment), the rise in temperatures as a function of human emissions, the release of CO2 from changes in land use, CO2 emissions from the oceans, previous temperatures, and radiative forcing parameters representing various factors that contribute to the buildup or draw-down of CO2 in the atmosphere. Including initial conditions, there are more than 30 constraints in the model, although the actual number is closer to 300 as a result of incrementing time over the ten periods. RICE has significantly more constraints because many DICE-equivalent constraints apply to individual regions; there is the need to include region-specific parameters, such as those of the production function (7.2) and emissions function (7.4), and aggregation constraints enter into the mix.

In the DICE model, Eq. (7.1) is maximized subject to constraints (7.2), (7.3), (7.4), (7.5) and the other constraints mentioned (Nordhaus and Boyer 2000). The constrained optimization problem is a straightforward nonlinear programming (NLP) problem. Once parameterized, the NLP can be solved numerically (an analytic solution is impossible) using a computer software package such as GAMS (McCarl et al. 2007); it might also be solved in Excel, although this requires add-on software because the standard solver in Excel cannot handle that many nonlinear constraints. 30

Many researchers have employed the DICE model in their own work. William Cline of the Institute for International Economics and Center for Global Development in Washington used the DICE-99 model to find the world's optimal CO2-abatement strategy and the associated optimal path of carbon taxes. Relative to business-as-usual (BAU) CO2 emissions, the optimal strategy is to reduce emissions immediately by 35-40%, followed by further reductions to nearly 50% of BAU emissions by 2100 and to a peak of 63% by 2200, followed by a tapering off (Cline 2004). The associated optimal carbon tax starts in 2000 at $35 (in 1990 dollars) per ton of CO2 (tCO2), rises to $46/tCO2 in 2005, to $67/tCO2 by 2025, to $100/tCO2 by 2050, and to a peak of $355/tCO2 in 2200 before declining. Cline (2004) also investigates the Kyoto Protocol (assuming it remains in place in perpetuity) and a value-at-risk scenario that identifies the maximum expected loss over the time horizon up to a probability of 95%; that is, the value-at-risk scenario determines the optimal carbon tax required to reduce the chance of the maximum possible loss to 5% or less.

30 An updated description of the Nordhaus and Boyer (2000) versions of DICE and RICE can be found at http://www.econ.yale.edu/~nordhaus/homepage/dicemodels.htm (viewed March 3, 2010). Both GAMS and spreadsheet versions of the models can be downloaded from this website.
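To make the mechanics concrete, here is a deliberately tiny, DICE-flavored optimization: emission-control rates are chosen to maximize discounted log-consumption when warming is proportional to cumulative emissions. All parameters are invented, capital, technology and population are frozen, and none of DICE's climate dynamics or 300-odd constraints are represented; this is a sketch of the structure, not the model itself.

```python
# Miniature DICE-flavored NLP: choose the control rate mu(t) for ten 10-year
# periods to maximize discounted log-consumption. Invented parameters only.
import numpy as np
from scipy.optimize import minimize

T_STEPS, RHO = 10, 0.15            # ten periods; 10-year discount rate
A, K, GAMMA = 3.0, 100.0, 0.3      # fixed technology and capital stock
SIGMA = 0.5                        # emissions per unit of gross output
B1, B2 = 0.05, 2.8                 # abatement-cost coefficient and exponent
TH1, TH2 = 0.02, 0.05              # damage parameters (theta_1, theta_2)
CLIM = 0.05                        # deg C of warming per unit cumulative emissions

def neg_welfare(mu):
    disc, cum_e, welfare = 1.0, 0.0, 0.0
    for t in range(T_STEPS):
        gross = A * K**GAMMA                      # Cobb-Douglas, fixed inputs
        temp = CLIM * cum_e                       # warming from past emissions
        damage = TH1 * temp + TH2 * temp**2       # quadratic damages, cf. Eq. (7.3)
        net = gross * (1 - B1 * mu[t]**B2) / (1 + damage)   # cf. Eq. (7.2)
        cum_e += SIGMA * (1 - mu[t]) * gross      # emissions, cf. Eq. (7.4)
        welfare += disc * np.log(net)             # log utility, cf. Eq. (7.1)
        disc /= 1 + RHO
    return -welfare                               # minimize the negative

res = minimize(neg_welfare, x0=np.full(T_STEPS, 0.2),
               bounds=[(0.0, 1.0)] * T_STEPS, method="L-BFGS-B")
print(np.round(res.x, 2))   # optimal path of the control rate mu(t)
```

In the full model, the carbon tax τ(t) rather than μ(t) serves as the policy lever, and the optimal tax path emerges as the policy ramp discussed below.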
A summary of Cline's results is provided in Table 7.6. The scenarios and analyses developed by Cline were used in the original 'Copenhagen Consensus' project to rank the world's most pressing problems (Lomborg 2004). 31 The Copenhagen Consensus project, headed by the Danish environmentalist Bjørn Lomborg, consists of reports on the globe's most pressing problems and an assessment and ranking by a panel of experts that, in the 2004 project, consisted of eight top economists, including three Nobel laureates. Originally 32 challenges facing humankind were identified, but these were subsequently reduced to the ten that warranted further investigation and became the subject of the Copenhagen Consensus project. Global warming was classified as one of the ten problems to be considered by the expert panel, but the panel ranked it last in terms of urgency and in terms of where governments should direct limited financial and other resources. The panel's ranking is somewhat surprising because Cline's analysis in Table 7.6 is along the lines of the later analysis by Nicholas Stern, which employs a low discount rate, assumes high damages, and recommends immediate action because the benefit-cost ratio of taking action is much greater than one. (The Stern analysis is discussed further below.) Yet communicable diseases (especially HIV/AIDS), access to sanitation and clean water, government corruption, malnutrition and hunger, and trade barriers were considered greater problems whose solutions yielded higher benefits than attempts to mitigate climate change.

In a follow-up to the 2004 Copenhagen Consensus that asked a number of experts to rank the world's biggest problems, Bjørn Lomborg edited a book that identifies 23 global issues and provides a cost-benefit analysis of various promising policy solutions (Lomborg 2007b); readers are asked to make their own prioritizations. Then, in a second 'Consensus' project, Lomborg brought together experts in an effort to prioritize policy options for addressing global warming. The results are discussed in Sect. 7.4.

Nordhaus subsequently used an updated version of the DICE model to conclude that global society should make an effort to mitigate climate change by reducing CO2 emissions relative to what they would otherwise be, but not to stop warming entirely. Further, controls on emissions should ramp up over time. In particular, based on his later estimates and to prevent temperatures from rising more than 2.3°C, greenhouse gas emissions should be reduced by 15% in the current period (2010-2019) relative to what they would be without any action, by 25% of business-as-usual emissions in 2050, and by 45% in 2100.

31 The Copenhagen Consensus referred to here should not be confused with the climate conference held in Copenhagen in late 2009. At the latter, nations failed to reach an agreement on reducing emissions of CO2 and other greenhouse gases, which was generally considered a 'disaster' in the environmental community.
This implies that the optimal carbon tax (measured in real 2005 purchasing-power US dollars) should rise from $9.50 per ton (t) of CO2 ($35 per ton of carbon) in 2005 to about $25/tCO2 in 2050 and $56/tCO2 in 2100, or from 12¢ per gallon of gasoline in 2005 to nearly 70¢ per gallon by 2100 (Nordhaus 2007b, 2008). This optimal path for a carbon tax is predicated on unmitigated damages from climate change that amount to nearly 3% of global output in 2100 and 8% by 2200. Future damages from climate change are related in Nordhaus's integrated assessment model to projected temperature increases via Eq. (7.3), which is calibrated to take into account estimates of damages and benefits from global warming found in the literature. For example, agricultural economists had found that (some) warming was actually beneficial for agricultural production (Darwin et al. 1995; Mendelsohn et al. 1994, 2000), 32 but such benefits appear to be outweighed by losses elsewhere (Nordhaus 2007a). Three scenarios of projected damages from different calibrations of the power function used by Nordhaus are provided in Table 7.7. It is important to note that these are calibrations and not statistical evidence, so they amount to nothing more than an assumed relation between temperature increase and economic damages, based on projections of possible damages made by researchers examining specific sectors such as agriculture. And each of these sectoral analyses has its own, sometimes dubious, assumptions regarding the relationship between projected climate change and damages, as discussed in Sect. 7.1.

Integrated assessment models are now finding that an optimal climate strategy needs to combine mitigation and adaptation (Prins et al. 2010). For example, Bosello et al. (2010) link adaptation, mitigation and climate change damage in an integrated assessment model of the world economy and the energy and climate systems. They find that an optimal combination of adaptation policies (reactive and anticipatory, plus investment in R&D) and mitigation would see no more than 20% of emissions abated over the period to 2100, while expenditures on adaptation would rise rapidly beginning in 2060. Depending on the discount rate and perceptions of future damage, the combination of mitigation and adaptation would account for between 44 and 73% of total damages (the remainder simply borne by the economy), while the proportion dealt with by adaptation would vary from a low of 20% (equal to that borne via mitigation) to 53% (with mitigation then addressing only 9% of damages).

In contrast to the approach used by Nordhaus (1994, 2008) and Tol (2002), which relies upon integrated assessment models, Goklany (2008, 2009) measures the impacts of projected global warming on human risks, mortality and ecosystems using a bottom-up approach. Surprisingly, he is one of the few who begins with the IPCC's (2001) emission scenarios, which are the principal driver of climate models' projections of temperature increase (see also Tol 2005). A brief description of four key scenarios is provided in the first 11 rows of Table 7.8. The scenarios indicate the range of possible greenhouse gas emissions for different economic development trajectories (and include assumptions about technological change, land use changes and the energy mix) if nothing is done to mitigate climate change.
The final three rows summarize Goklany's (2009) estimates of the associated changes in mortality, changes in populations at risk due to water stress, and losses of coastal wetlands. The one thing to note about the assumed future emissions scenarios is the projected increase in per capita GDP (measured in 2005 US dollar equivalents); as shown in Chap. 4, these are highly optimistic for all scenarios. Even the scenario with the lowest increase in income (scenario A2) and the highest increase in population would have those living in developing countries producing more than $16,000 per person, equivalent to standards currently existing in some eastern European countries. Two scenarios (A1F1 and B1) see those in developing countries with incomes equivalent to those in rich countries today, while those in rich countries would see a doubling of their real incomes.

Suppose that there is no adaptation to global warming. Even so, Goklany finds that things will generally improve compared to a situation where there is no climate change and incomes remain at their 1990 levels. Indeed, the negative impacts of climate change are offset by rising incomes, so much so that the overall climate impact is essentially negligible. Among scenarios, the greatest damages occur in the situation where people are poorest. Goklany (2009) also reports that net biome productivity will increase as a result of climate change and that less wildlife habitat will generally be converted to cropland as a result of global warming, a finding similar to that of Sohngen et al. (1999). Finally, compared to mitigation efforts through emissions reductions, such as the Kyoto process, Goklany finds that targeted adaptation can yield large benefits: adaptation is an optimal policy response.

Gary Yohe of Wesleyan University prepared the chapter on climate for a follow-up report to the Copenhagen Consensus (Yohe 2007). Again, the DICE model was used to obtain estimates of costs and benefits. However, Yohe points out that non-market values of the damages avoided are not sufficiently taken into account in integrated assessment models. Therefore, in addition to the costs and benefits (damages avoided) included in the integrated assessment model, he calculates the benefits of mitigating global warming by examining the reduction by 2080 in the number of people at risk from hunger, water scarcity and coastal flooding. Results are provided in Table 7.9. These indicate, first off, that for all of the scenarios investigated the DICE model predicts discounted costs exceeding benefits; that is, net present value is negative. When the benefits of reducing hunger, water scarcity and coastal flooding are included, the benefit-cost ratio is still below 1.0, except in one scenario, although none is less than 0.96. When other non-market benefits, such as ecosystem services and biodiversity, are taken into account, argues Yohe, discounted benefits will definitely exceed discounted costs by a large amount. Thus, the benefit-cost ratios of taking action to avoid climate change, as presented in Table 7.9, must be considered an absolute lower bound. Notice that Yohe applies very low carbon taxes in all of his scenarios compared to Cline's optimal taxes (see Table 7.6). This is one reason why the net discounted benefits (net present value) of mitigating climate change turn out to be negative until one starts to add in the avoided non-market damages.
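The bookkeeping behind benefit-cost ratios like those in Table 7.9 can be illustrated with invented numbers: discounted market benefits that fall short of discounted costs, with the ratio rising once assumed non-market benefits are added. The discount rate, cost level and benefit paths below are all hypothetical.

```python
# Invented flows only: NPV and benefit-cost ratios with and without
# non-market benefits (hunger, water, coastal flooding avoided).
import numpy as np

r = 0.04                                          # assumed annual discount rate
years = np.arange(2010, 2101)
disc = 1.0 / (1 + r) ** (years - 2010)

costs = np.full(years.size, 40.0)                 # $billion/yr (hypothetical)
market_ben = np.linspace(0.0, 80.0, years.size)   # damages avoided, back-loaded
nonmarket_ben = np.linspace(0.0, 30.0, years.size)

pv = lambda flow: float(np.sum(flow * disc))
print("NPV (market benefits only):", round(pv(market_ben) - pv(costs), 1))
print("BCR (market benefits only):", round(pv(market_ben) / pv(costs), 2))
print("BCR (incl. non-market):    ", round(pv(market_ben + nonmarket_ben) / pv(costs), 2))
```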
Yohe also uses a very low discount rate in one set of scenarios, which causes benefits in 2080 to be more valuable today. This is why the net present value is much lower in scenarios 2 and 5, those with the low carbon taxes. For the second Copenhagen Consensus, Lomborg (2007b) asks readers to make up their own minds as to how they would spend money to address 23 of the world's biggest problems. However, if one looks at the benefit-cost analysis of mitigating climate change, it is clear that other problems are more pressing. In addition to not taking into account many non-market values, Yohe (2007) correctly points out that the risks of tipping points, such as the collapse of the Atlantic thermohaline circulation (presumably the result of rapid Greenland glacial melt), are ignored in his analysis. The probability of such an event (or something similar) occurring as a result of human activities is extremely tiny, so that, even if the associated cost is extremely large, the discounted expected value is small but not insignificant. (As the notes to Table 7.9 indicate, the DICE model employs a logarithmic utility function, per Eq. (7.1), which causes the effective real discount rate to fall over time.) In benefit-cost analysis, these costs can easily be accounted for and are unlikely to have a major impact on the overall conclusions, especially if one also uses probabilities to account for the possibility that human activities might have only a small impact on climate (see Chaps. 2 through 4).

In addition to the idea of a policy ramp, economists almost unanimously favor market incentives and, in particular, a carbon tax that uses the proceeds to reduce income and other taxes - a revenue-neutral tax scheme. (Whether carbon taxes or cap-and-trade are a better way to deal with global warming is discussed further in Chap. 8.) A carbon tax could theoretically lead to higher wellbeing as the economic distortions caused by other taxes would be reduced - the so-called 'double dividend' of a green tax; it would also increase employment (see Bovenberg and Goulder 1996). As the work of Nordhaus indicates, the optimal policy would be to impose a carbon tax that is set low to begin with and then slowly increased over time. One compelling reason for a tax is to avoid getting locked into an emission-reduction technology that might prove inferior to another option yet to be developed. For example, one might not want to lock into the internal combustion engine by promoting and subsidizing production of ethanol and biodiesel, with their production facilities and transportation networks, in case a much better option, such as an electric vehicle capable of going 300 km or more on a single charge, should come along. Doing so might be prohibitively expensive and militate against the very development of such an electric vehicle. 33

Two unrelated events changed the foregoing consensus among economists that the optimal tax should begin at a low rate and ramp up slowly over time. First was the publication of the Stern Report (Stern 2007). 34 Contrary to all previous economic analyses (e.g., Kennedy 1999, 2002; Nordhaus 1994; van Kooten 2004), the Stern Report finds that the benefits of severely restricting CO2 emissions today exceed the costs of doing so; there is no ramping-up policy, only the conclusion that immediate severe restrictions on CO2 emissions are warranted.
The reasons are soon apparent, but they are rooted in the cost-benefit approach used in the Report, and particularly in the discount rate applied in the cost-benefit analysis. Stern does not reject the notion of discounting, because discounting only makes sense when comparing viable alternatives with different flows of costs and benefits over time, but he relies on a very low 1.4% rate of discount, which is determined as the rate of growth in per capita consumption plus 0.1% (Mendelsohn 2006). This implies that distant damages (costs) of global warming are much more highly valued today than had heretofore been assumed (Nordhaus 2007), thereby raising the discounted benefits of acting today. Further, the Stern Report assumes damages from global warming to be three or more times higher than previously assumed, while the costs of mitigating CO2 emissions are taken to be rather small (Mendelsohn 2006; Nordhaus 2007; Tol 2006). But it is only when the non-market environmental damages from global warming are taken to be extremely large that an argument can be made for immediate drastic action to reduce CO2 output. 35

Yet the Stern Report did not immediately change the majority view of economists that society should wait before taking costly action on global warming. Rather, economists widely condemned it as "the greatest application of subjective uncertainty the world has ever seen" (Weitzman 2007, p. 718), and as an analysis that is not based on "solid science and economics" (Mendelsohn 2006, p. 46) and that "can therefore be dismissed as alarmist and incompetent" (Tol 2006, p. 980). Finally, the Stern Report is alarmist: it attributes any and all potential future climate disasters solely to anthropogenic emissions of CO2. Thus, Stern argues it would be folly not to take action immediately to avert such a potential disaster; when a low discount rate is employed, the present value of extremely large damages occurring some distance into the future is also very large - thus, take immediate action. This is a theme to which we return shortly.

33 An overview of the state of electric cars is found in The Economist (September 5, 2009, pp. 75-77). The main obstacle remains the battery, although new battery-automobile technologies are potentially capable of 200 km on a single charge (current vehicles such as GM's Chevy Volt can only go about 60 km on battery power). Along with infrastructure that permits quick recharging or exchange of batteries, innovations in auto design to take advantage of electric motors, and economic and institutional innovations (e.g., separating ownership of batteries and vehicles), it could well be that the electric motor replaces the internal combustion engine for land transportation.

34 The report was prepared for the British government by civil servants under the guidance of Sir Nicholas Stern, a well-known economist.

35 Non-market values are difficult to measure, and there has been quite a bit of controversy surrounding attempts to assign high values to such things as forest ecosystems, wildlife species, etc. In addition to the problem of budget constraints in the estimation of values (some studies find that people are willing to pay more than their entire income to protect nature), there is much confusion about average versus marginal values. For example, an old-growth forest might have tremendous worth, but harvesting one more hectare of the forest might benefit society, as a single hectare might have little non-market value at the margin, much as the hundredth pair of shoes provided to an individual has no value to that person. See Chap. 6.
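The pivotal role of the discount rate is worth a line of arithmetic. Compare the present value of $100 of damages incurred 90 years from now under Stern's 1.4% rate and under an illustrative market-like rate of 4.3% (the precise comparison rate is an assumption here, of the sort conventional analyses employ):

```python
# Present value of $100 of climate damages incurred 90 years from now.
for label, r in [("Stern (1.4%)", 0.014), ("market (4.3%)", 0.043)]:
    print(f"{label}: PV = ${100 / (1 + r) ** 90:.2f}")
```

At 1.4% the distant damage is still worth roughly $29 today; at 4.3% it is worth little more than $2, which is why the same damage estimates can support radically different policies.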
The second event was the global financial crisis, which originated with U.S. financial institutions that had created a variety of suspect financial derivatives that were overlooked or not well understood by most economists and investors. Unregulated financial markets enabled institutions to sell financial derivatives that combined very shaky loans (mainly high-risk mortgages that financed 100% or more of the price of a home) with sounder assets, thereby hiding the true risks of the asset. In addition, insurance derivatives were created to insure the combined assets, and these insurance derivatives were also sold in financial markets. When loans could not be repaid because house prices stagnated and then fell, the financial derivatives unraveled and a credit crisis ensued.

The financial crisis affected the real economy because people's wealth and earnings were adversely affected, and it shook the faith of many in the ability of markets to create desired outcomes, leading instead to a renewed interest in regulation. This might explain why Jeffrey Sachs of Columbia University even praised President Obama for favoring regulation in addressing climate change: "Obama is already setting a new historic course by reorienting the economy from private consumption to public investments. … Free-market pundits bemoan the evident intention of Obama and team to 'tell us what kind of car to drive.' Yet that is exactly what they intend to do … and rightly so. Free-market ideology is an anachronism in an era of climate change." The backlash was so severe that Nobel Laureate Robert Lucas of the University of Chicago felt compelled to write an article for The Economist (8 August 2009, p. 67) defending the 'dismal science' and markets in particular.

A very different approach to that of Stern (2007) is taken by Martin Weitzman, who first criticized the Stern Report for its highly speculative nature but then set about providing an alternative defense for taking immediate action on global warming. His approach is not based on low discount rates and optimistic estimates of mitigation costs (Weitzman 2009a, b, c), but it is still rooted in cost-benefit analysis. Weitzman considers what happens when there is a high probability of a catastrophic event. He bases his case on 'fat-tailed' probability density functions that, using his methods (discussed below), imply a 5% probability that average global temperatures rise by more than 10°C and a 1% probability that they increase by more than 20°C. What would be the implications of 10-20°C warming? "At a minimum such temperatures would trigger mass species extinctions and biosphere ecosystem disintegration matching or exceeding the immense planetary die-offs associated in Earth's history with a handful of previous geoenvironmental mega-catastrophes" (Weitzman 2009a, p. 5). Thus, Weitzman begins with the view that anthropogenic global warming is not only occurring, but that its implications are catastrophic.
According to Weitzman, the current catastrophe is attributable to the combination of unprecedented greenhouse gas emissions and a critical climate sensitivity parameter, s* = s2×CO2, that converts atmospheric CO2 into temperature increases. This was discussed in Chap. 4 in relation to Eq. (4.20). Recall that the climate sensitivity parameter s* was determined to be equal to 1.2°C, but that climate models project much higher temperature increases as a result of two feedbacks that involve (1) water vapor, clouds and ice-albedo (denoted f1), and (2) a potentially catastrophic secondary release of greenhouse gases (including CO2) attributable to the initial warming (denoted f2). This led to Eq. (4.21), reproduced here:

ΔT = s_m = s* / (1 − f1 − f2).    (4.21)

The values of the parameters discussed in Chap. 4 were as follows: f1 = 0.20 to f1 = 0.73 and f2 = 0.042 to f2 = 0.067, so that s_m runs from about 1.5°C to 5.5°C when the feedbacks are taken from climate models, rather than the historical base value s* = 1.2°C. Weitzman argues that the scaling multiplier s_m is highly uncertain, so much so that its probability distribution is necessarily characterized by 'fat tails' that imply high probabilities of large increases in temperature. How does Weitzman come to this conclusion? He bases it on four exhibits (Weitzman 2009b, c):

1. According to the Antarctic ice core data reported by Dieter et al. (2008), current atmospheric concentrations of CO2 are the highest recorded in perhaps the past 850,000 years, and the current rate of increase in atmospheric CO2 is historically unprecedented. This unprecedented increase can only be attributed to human causes, according to Weitzman and others.

2. There are 22 studies reported in Table 9.3 and Box 10.2 of the IPCC's Fourth Assessment Report (IPCC WGI 2007, pp. 721-722, 798-799). These studies report probability density functions (PDFs) with high probabilities of a large temperature rise. Weitzman sums the reported PDFs into a single PDF using what he calls a meta-analysis based on Bayesian model averaging. From the meta-derived single PDF, Weitzman finds that the probability that the temperature increase exceeds 7°C is 5%, or P(s* ≥ 7°C) = P(ΔT ≥ 7°C) = 0.05, and P(ΔT ≥ 10°C) = 0.01. This is the feedback effect of CO2 warming on water vapor discussed earlier.

3. Next, he assumes that the higher temperatures brought about by increased concentrations of atmospheric CO2 will cause permafrost and boggy soils to release methane, thereby amplifying global warming beyond even the water vapor feedback. The possibility that this feedback effect takes place is discussed by Scheffer et al. (2006), Matthews and Keith (2007), and The Economist (1 August 2009, p. 70). 36 The possibility of such a feedback effect leads Weitzman to increase the value of the climate scaling multiplier s_m so that, based on information from Torn and Harte (2006), the probability that temperatures rise above 11.5°C is 5% and the probability that they rise above 22.6°C is 1%, or P(s_m ≥ 11.5°C) = P(ΔT ≥ 11.5°C) = 5% and P(s_m ≥ 22.6°C) = P(ΔT ≥ 22.6°C) = 1%.

36 These feedbacks ignore others in the climate system. Without a natural greenhouse gas effect, the Earth's surface temperature would be about −18°C; with it, but without any feedbacks, it would be about 60°C; with feedbacks, it is about 15°C. In the natural system, then, feedbacks eliminate 68% of GHG warming - that is, negative feedbacks outweigh positive ones. But the climate models used by the IPCC assume that positive feedbacks outweigh the negative ones - precisely the opposite of what is found in nature. The result of Spencer's and Lindzen's and others' studies (noted in Chap. 4) is generally that climate sensitivity s2×CO2 is about 0.5°C instead of the IPCC's midrange of 3.0°C. That, in turn, greatly diminishes the probabilities of 10 and 20°C warming.
But the climate models used by the IPCC assume that positive feedbacks outweigh the negative ones -precisely the opposite of what is found in nature. The result of Spencer 's and Lindzen's and others' studies (noted in Chap. 4) is generally that climate sensitivity s 2×CO2 is about 0.5°C instead of the IPCC's midrange of 3.0°C. That, in turn, greatly diminishes the probabilities of 10 and 20°C warming. and P( s m ³ 22.6°C) = P( D T ³ 22.6°C) = 1%. However, recognizing the crude and speculative nature of his calculations, Weitzman rounds this down: there is a 5% probability that the expected increase in temperature exceeds 10°C and a 1% probability that it exceeds 20°C -that is, P( D T ³ 10°C) = 5% and P( D T ³ 20°C) = 1%. 4. Finally, given the potential for huge increases in temperature, Weitzman argues that economic damage (utility) functions parameterized on the basis of current fl uctuations in temperature make no sense. Recall that William Nordhaus uses a quadratic damage function: D ( T ) = bT + cT 2 , where T is temperature as before, and a particular parameterization is given in Table 7 .7 . But Weitzman argues that it should be exponential, so that D ( T ) = e f(T ) , where f ( T ) = bT + cT 2 , or some other function. Clearly, the exponential damage function results in much higher damages the farther into the future one projects rising temperatures. Based on these four points, all of which are highly speculative, Weitzman concludes that there is a real possibility that, regardless of the discount rate , the damages from climate change could be in fi nite -that humans could cease to exist as a species. Integrated assessment models ignore this possibility and, by failing to take it into account, any policy path is not truly optimal. Weitzman 's analysis also entails a methodological issue. The so-called probabilities provided by the 22 studies reported by the IPCC WGI ( 2007 , pp.798-799) are based solely on computer models, beginning with those that develop the emission scenarios and then followed by the climate models that project the associated future climates (see Chap. 4 ). These are not probabilities in the classical sense -based on repeated observations, as in the case of a fair coin toss yielding a 50% probability that the coin comes up as a 'tail,' or the probability that a driver involved in an accident has no valid driver's license. Weitzman's exercise is nothing more than a means for specifying a prior belief (in the Bayesian sense) that there is a high probability that anthropogenic emissions of CO 2 will trigger dangerously high changes in temperature. 37 Further, as noted in Chap. 6 , his 'fat-tail' probability distribution and exponential damage function are an attempt to place a precautionary type principle in a cost-bene fi t framework. In some sense, this line of argument is similar to that of Brander and Taylor ( 1998 ) , who use an economic model to argue that the advanced civilization on Easter Island disappeared because people simply "ate up their natural endowment," and then speculate that we are doing the same thing on a global scale. 38 This is simply a more modern expression of Malthus 's original argument that population growth outpaces growth in food production, thereby leading to inevitable misery for the human race. Needless to say, if one accepts Weitzman 's premises, he makes a reasonable case that something should be done to prevent global warming, invoking something akin to a 'generalized precautionary principle ' (see Chap. 6 ). 
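To make the two building blocks of Weitzman's argument concrete, the following minimal sketch computes the scaling multiplier of Eq. (4.21) at the low and high ends of the parameter ranges quoted above, and contrasts quadratic with exponential damages. The damage coefficients b and c are purely hypothetical placeholders, not the Table 7.7 parameterization.

```python
# Illustrative sketch only: feedback amplification (Eq. 4.21) and the two
# damage-function shapes. Coefficients b and c are hypothetical, NOT the
# Nordhaus Table 7.7 values.
import math

def scaled_sensitivity(s_star, f1, f2):
    """Scaling multiplier s_m = s*/(1 - f1 - f2), as in Eq. (4.21)."""
    return s_star / (1.0 - f1 - f2)

# Parameter ranges quoted in the text (from climate models, not s* = 1.2)
print(scaled_sensitivity(1.5, 0.20, 0.042))   # ~2.0°C at the low end
print(scaled_sensitivity(5.5, 0.73, 0.067))   # ~27°C at the high end

b, c = 0.05, 0.002   # hypothetical damage coefficients

def quadratic_damage(T):
    return b * T + c * T**2            # Nordhaus-style D(T) = bT + cT^2

def exponential_damage(T):
    return math.exp(b * T + c * T**2)  # Weitzman-style D(T) = e^{f(T)}

for T in (2, 5, 10, 20):
    print(T, round(quadratic_damage(T), 3), round(exponential_damage(T), 3))
```

Whatever coefficients are chosen, the exponential form pulls away from the quadratic one rapidly at high temperatures, which is why the choice of functional form, and not just the discount rate, drives Weitzman's conclusions.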
But he then introduces further value judgments by arguing that "the handful of other conceivable environmental catastrophes are not nearly as critical as climate change" (2009a, p. 14), comparing global warming only against genetically modified organisms ('Frankenfoods') and "the possibility of a large asteroid hitting Earth" (2009a, p. 14; see also Weitzman 2009b, c). 39 Clearly, there exist many other threats to Earth besides these two, including ones perpetrated by humans (such as nuclear holocaust), and deciding which constitutes the worst threat is not an easy task without a great deal more information and a crystal ball. Yet, despite his conclusions, Weitzman backs away from advocating immediate action to stop CO2 emissions entirely. He comes to this conclusion partly because his results imply, in the end, that the chance of catastrophe, and thus the optimal insurance policy required to offset it, is quite arbitrary. Thus, he advocates spending on a 'put-a-man-on-the-moon' type of research and development (R&D) program that will lead to a technological solution enabling humankind to control the climate. This is discussed further in Sect. 7.3. Here we simply conclude that Weitzman's economics hinges crucially on two points: (1) human activities contribute to the observed increase in atmospheric CO2, and humans can devise means to control the level of CO2 in the atmosphere; and (2) increased atmospheric CO2 leads to increased global temperatures via an extremely high climate sensitivity parameter (s_m). If either of these suppositions is false, or even if one of them is only partially true, then the economic conclusions disappear.

With the aid of integrated assessment models, researchers are able to identify a policy ramp - an escalating carbon tax or an increasingly stringent cap on CO2 emissions. The policy ramp is smooth, but it is only pseudo-optimal, because the key structural parameters and future trends (of population, technology, etc.) need to be specified a priori, once and for all, within the integrated assessment model. Using a Bayesian learning approach, Andrew Leach (2007) tested the policy paths derived from IAMs when there is uncertainty about some of the structural parameters in the model. He found that uncertainty in even one or two structural parameters was sufficient to delay the identification of an expected optimal policy regime for a century or more. That is, in the context of climate change, it can take upwards of a hundred years of observation to obtain sufficient information about future damages and other economic relationships to resolve uncertainty and determine an optimal policy strategy.

The solutions offered by Stern and Weitzman attempt to address this uncertainty in the framework of cost-benefit analysis by making various assumptions about the levels of damages and costs and the discount rate (Stern), or about the probability of a catastrophic loss and the possibility of insuring against it (Weitzman). Both require a very large present outlay to mitigate climate change (Stern) or to fully insure against the potential damages (Weitzman). It is unlikely that this type of outlay will be politically acceptable, even in rich countries (see Chap. 8), let alone developing ones.
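The mechanism behind Leach's result can be illustrated with a toy Bayesian updating exercise. This is an illustration of the principle, not his model: here a single warming trend must be inferred from noisy annual temperature observations, and all numbers (trend, noise, prior) are hypothetical. Even in this drastically simplified one-parameter setting it takes decades to pin the trend down; with several confounded structural parameters and autocorrelated noise, as in Leach's IAM setting, the horizon stretches to a century or more.

```python
# Toy illustration of slow uncertainty resolution (NOT Leach's actual model):
# infer a warming trend from noisy annual observations y_t = trend*t + noise
# via conjugate normal updating, and watch how slowly the posterior narrows.
import random
random.seed(1)

true_trend = 0.02        # hypothetical trend, °C per year
noise_sd   = 0.15        # hypothetical interannual variability, °C
prior_mean, prior_var = 0.0, 0.05**2

# Each year t, y_t/t is an unbiased observation of the trend with standard
# deviation noise_sd/t; combine observations by precision weighting.
post_prec = 1.0 / prior_var
weighted_sum = prior_mean / prior_var
for t in range(1, 101):
    y = true_trend * t + random.gauss(0.0, noise_sd)
    obs_prec = (t / noise_sd) ** 2
    post_prec += obs_prec
    weighted_sum += (y / t) * obs_prec
    if t % 20 == 0:
        mean, sd = weighted_sum / post_prec, post_prec ** -0.5
        print(f"year {t:3d}: trend = {mean:.4f} +/- {sd:.4f}")
```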
McKitrick (2011) offers a solution that is politically acceptable both to those who believe climate change will lead to a climate catastrophe and to those who are unconcerned, either because they do not think warming will lead to catastrophe or because they disagree with the premise of anthropogenic climate change. The innovation introduced by McKitrick is that he ignores abatement costs and, thus, does not seek to demonstrate that the discounted benefits of taking action exceed the discounted costs. He is only interested in setting the correct (optimal) price on carbon emissions. McKitrick derives an optimal carbon tax by minimizing the discounted present value of damages subject to the effect that carbon emissions have on a suitable state variable, namely, temperature. The state-contingent pricing rule that he derives is the following:

τ_t = g (e_t / ē_t) s(t), (7.7)

where τ_t is an approximation of the optimal tax to be set at time t. The tax is a function of the marginal damage rate g, the current level of emissions e_t, the moving average of emissions over the preceding k periods, denoted ē_t, where k is the number of periods required for CO2 to leave the atmosphere (or the half-life of CO2 residency in the atmosphere), and s(t) is the value of the state variable (say, temperature). The actual derivation of the optimal path of the tax involves assuming the discount rate is zero, which implies that rule (7.7) is conservative if temperature (the state variable) is rising. Tax rule (7.7) is not a prescription for a policy path, but only a rule that links the tax rate to the state of the environment.

The only obstacles to implementing the optimal tax are information about the marginal damage rate and the period k for calculating the moving average of emissions. To determine the former, we assume that a tax of $25 per ton of CO2 was optimal in 2005. Using global fossil fuel emissions data from Oak Ridge National Laboratory in Tennessee, 40 we find that 29,227 Mt of CO2 were emitted globally in 2005, with average emissions over the preceding 50 years equal to 17,835 Mt of CO2 (k is assumed to equal 50). The HadCRUT3 temperature anomaly for 2005 was 0.482°C. Solving (7.7) using these values gives g = 31.65. We then use the value of g, information on emissions of CO2 going back to 1801, and the HadCRUT3 temperature anomaly as a state variable to calculate the optimal tax rate that should have been imposed going back to 1850.

40 Data available at http://cdiac.esd.ornl.gov/trends/emis/em_cont.html (viewed July 22, 2010), compiled by Tom Boden, Gregg Marland and Bob Andres.

The optimal tax path is given in Fig. 7.6, where negative tax rates have been assigned a value of zero. Notice that the tax rate exceeds $25/tCO2 on only one occasion, namely, in 1998 when there was a particularly strong El Niño event. As indicated in Fig. 7.6, if global temperatures rise rapidly, the tax rate will also rise rapidly. In the figure, an average annual global temperature is used as the state variable, but a monthly average could also be employed. The monthly average will surely be more volatile than the annual temperature, which in turn is much more volatile than a 3- or 5-year moving average. The reason that costs are ignored in setting the tax is that the tax acts as a signal to emitters of carbon dioxide. Market participants will use the information from tax trends to make decisions concerning the credibility of the IPCC's (and others') forecasts of climate change.
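As a worked illustration, the calibration of g and the behavior of rule (7.7) can be sketched in a few lines. The 2005 inputs are those quoted above (and reproduce g = 31.65 exactly); the final example inputs are hypothetical.

```python
# Sketch of the state-contingent tax rule (7.7): tau_t = g*(e_t/ebar_t)*s(t).
# Calibration inputs are the ones quoted in the text; the closing example
# uses hypothetical values for illustration.

E_2005    = 29227.0   # global CO2 emissions in 2005, Mt (Oak Ridge data)
EBAR_2005 = 17835.0   # 50-year moving average of emissions, Mt (k = 50)
S_2005    = 0.482     # HadCRUT3 temperature anomaly for 2005, °C
TAU_2005  = 25.0      # tax assumed optimal in 2005, $/tCO2

# Back out the marginal damage rate g from the 2005 calibration point
g = TAU_2005 / ((E_2005 / EBAR_2005) * S_2005)
print(f"marginal damage rate g = {g:.2f}")   # ~31.65, matching the text

def optimal_tax(e_t, ebar_t, s_t, g=g):
    """Rule (7.7); negative values are floored at zero, as in Fig. 7.6."""
    return max(0.0, g * (e_t / ebar_t) * s_t)

# Hypothetical case: emissions 10% above their 50-year average, 0.6°C anomaly
print(f"tax = ${optimal_tax(1.10 * EBAR_2005, EBAR_2005, 0.6):.2f}/tCO2")
```

Note how the rule operationalizes the text's central point: the tax rate is driven entirely by the observed state variable, so it rises quickly if warming materializes and drifts toward zero if it does not.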
The market will decide on the credibility of the science, because those who decide wrongly (say, by investing heavily in emission-reduction equipment) incur costs that make them relatively less competitive. Rather than relying on political or scientific pronouncements, investors will use the market - the trend in tax rates - to guide their decisions, much as fluctuating commodity and other prices guide decisions today. Tax rule (7.7) should be acceptable to many more people than the alternative options being proposed by climate scientists and legislators (see Chap. 8). The tax rule appeals to those who fear catastrophic global warming because the tax will escalate rapidly with rising temperatures. It also appeals to those who do not believe in catastrophic anthropogenic climate change because, if their view is correct, the tax will rise very slowly, not at all, or even fall to zero. Thus, while a majority of citizens are unlikely to support actions that drastically increase energy costs, a majority is likely to support a tax rule such as (7.7).

A significant number of economists and policy analysts predicted that the Kyoto Process would fail, because it hopes to achieve greenhouse gas emission-reduction objectives that cannot possibly be attained. 41 The problem is that 80% of the world's people live on $10 per day or less, and 1.5 billion people currently have no access to electricity; climate policies that prevent economic development will certainly be objectionable to them. Further, as discussed in Chap. 10, non-OECD countries are projected by the OECD and the International Energy Agency to account for 93% of the increase in global energy demand between 2007 and 2030, driven largely by economic growth in China and India. Growth in emissions resulting from the increased consumption of coal by China, India and other Asian countries, let alone growth in their consumption of oil and gas, will exceed any possible reduction in emissions that OECD countries could implement. The only conclusions that a realistic observer could come to are that (1) energy prices are currently too high, as too many of the earth's citizens are unable to afford the energy they need to attain even modest standards of living, and (2) addressing climate change by targeting CO2 emissions is a futile project.

41 The current author also recognized the inability of the Kyoto Process to achieve its objectives, which are modest compared to those that the European Union and the United States are currently crafting (see Chap. 8). This is evident from the sub-title of the author's climate economics book (van Kooten 2004): Why International Accords Fail. For example, Canada's position as it approached the Kyoto discussions was not to give ground on Canadian emissions, because earlier meetings between the federal and provincial ministers of the environment had concluded that Canada was in no position to reduce emissions. Yet, during late-night negotiations, Canada abruptly agreed to a 6% reduction from 1990 emission levels by the 2008-2012 commitment period, while knowing full well it could not possibly achieve any reductions whatsoever. Subsequent growth in Canadian emissions has borne this out. This is discussed further in van Kooten (2004).
It would appear that current climate policies are dead in the water as a result of the failure of COP15 at Copenhagen in December 2009 (and of COP16 in Cancun, Mexico in December 2010) and the release of the climategate emails in November 2009. As a result, Prins et al. (2010), Levitt and Dubner (2009), and many others are making a strong case that a different approach is required. So where does this begin and what form does it take?

The climate agenda found in the Kyoto process, for example, is based on what Prins et al. (2010) describe as a 'deficit model' of science: the scientific expert provides the ignorant public and its representatives with the requisite knowledge to remedy their deficit. The public implicitly trusts the superior knowledge and qualifications of the scientists and thereby allows scientists to set forth the actions needed to solve the problem. This model of science works well when the problem is straightforward, such as forecasting where a hurricane might make landfall and instructing people in its path to get out of the way. It works when the valve in someone's heart stops functioning and the prescribed action is to replace it with an artificial one. It fails in the case of weapons of mass destruction, for example, not because experts lack the means to destroy a rogue nation's capacity to build nuclear weapons, but because deciding whether to do so involves value judgments. The case of climate change is more like the latter example, where knowledge that a country has dangerous weapons does not constitute scientific grounds for concluding that the country will deploy them. The reason has to do with wicked uncertainty. Wicked uncertainty occurs when the problem is too complex and/or too uncertain to resolve by focusing on a single objective, and the outcomes from taking action, including doing nothing, are unknowable.

Climate change is not a conventional environmental problem that can simply be solved by reducing CO2 emissions! As Prins et al. (2010) point out, it is a problem of economic development, population, technological progress, income differentials, urban planning, agriculture and forestry, lifestyles, and much more. It is, among other things, an economic, energy, development and land-use problem. Given the failure of policies to spur economic growth in poor countries, how can we expect an easy global fix to the climate problem? The Kyoto-IPCC process is a failure because it relied on the deficit model - climate scientists defining the problem and then recommending political solutions to solve it.

Recent efforts by social scientists, economists and others are now moving away from the naïve approach that still seems to dominate policymaking (see Chap. 8). The new approach is still evolving, but the focus is holistic rather than single-minded. It takes the view that policies should not be implemented to punish people, as though emitting CO2 were a sin; rather, policies should be attractive to people because they provide immediate benefits. Two elements of this approach can be identified. First, there are things that ought to be done regardless of their impact on climate change mitigation, although mitigation is an indirect benefit. One of these is to reduce air pollution - black carbon (soot), which comes from the burning of diesel fuel, cooking stoves (many people still rely on wood stoves for cooking), forest fires, and so on.
Soot is thought to be responsible for nearly half of the ice melt in Arctic regions, for example, because soot particles land on ice and absorb the sun's energy, thereby causing the ice to melt (Prins et al. 2010). Likewise, non-CO2 greenhouse gases make a significant contribution to anthropogenic warming. By reducing hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), ozone, methane and so on, health benefits can be realized along with mitigation benefits.

Land use changes also affect the climate. While tropical deforestation is perhaps the best known example (because it releases CO2 into the atmosphere, as trees are usually burned when land is converted to agriculture), other land use changes can also have a large impact on local and global climates. For example, more of the sun's energy is absorbed as land is paved or otherwise developed. Planting trees can alleviate some of the adverse temperature and even moisture impacts, while the use of alternative water-permeable surface materials (e.g., clay-gravel driveways as opposed to concrete or pavement) might prevent future flash floods. Agricultural programs and subsidies that promote cropping, including policies to increase production of biofuels, have an adverse impact on forests and grasslands, thereby reducing the carbon stored in terrestrial ecosystems (see Chap. 9) and the ecosystem benefits they provide. The social benefits of retaining lands in their 'natural' state often exceed the benefits of converting them to the subsidized use.

Second, cap and trade cannot succeed, as the EU's Emissions Trading System (ETS) has proven (e.g., see Prins et al. 2010). As discussed in Chap. 8, cap-and-trade schemes are likely less desirable than taxes as a policy instrument in the case of climate change. Further, the way in which they have been implemented (including the EU ETS) is not true cap and trade; the schemes permit the use of 'outside' credits (CDM, terrestrial offsets, etc.), which invites corruption. Nor can a carbon tax do the job, partly because of political acceptability. Both taxes and credit trading will make energy too expensive if either is implemented on a global scale. If implemented only in some rich countries, prices for fossil fuels will fall elsewhere, encouraging greater consumption and reducing the benefits associated with the original tax or emissions trading policy - that is, leakage will occur, and it is likely to be quite large. Yet a global tax is not what is needed, as the majority of the globe's citizens need more energy, not less, and fossil fuels are already too expensive for them.

What can be done? We need to come up with cheaper, non-carbon energy sources. The costs of alternatives to fossil fuels must be lower than the cost of using coal; otherwise, 80% of the world's population will have an incentive to use coal-fired energy. Nuclear power is one possibility. While there are likely other energy options, the task of determining them is onerous. What is required is a research effort similar to that of putting a man on the moon, as advocated by Weitzman. However, climate policy should not scare people into reducing fossil-fuel energy use through massive sin taxes or cap and trade (which amounts to the same thing); nor should society provide massive subsidies for alternative fuels, such as biofuels that may increase rather than reduce greenhouse gas emissions (see Chap. 10), as subsidies might lock us into undesirable technologies. Rather, Prins et al.
(2010) recommend a carbon tax set at a low rate, sufficient to fund an R&D project of the type required, but not so high that it results in adverse or unanticipated consequences, ones we might later regret. A focus on research and development, and on demonstration and adoption of the new technologies that arise, is important because it is the only way to de-carbonize economies. This is illustrated by the simple mathematical relation between economic development and energy use.

Thinking about a new approach to climate policy might begin with something similar to the well known macroeconomic income identity, where income equals the sum of consumption, investment, government expenditure and net exports. The energy-equivalent relation is known as the Kaya identity 42:

C = N × (Y/N) × (E/Y) × (C/E), (7.8)

where C refers to carbon emissions (measured in terms of CO2), N is population, Y is gross domestic product (GDP), and E is total energy consumption or use. This identity can be applied to the globe, a nation or a region. The first term on the right-hand side of the identity is population, the second term is per capita GDP, the third term is the energy intensity of the economy and the final term is the carbon intensity of energy. An indirect approach to climate mitigation is to reduce the energy intensity of economies and the carbon intensity of energy. According to the Kaya identity (7.8), there are only a limited number of ways to reduce emissions of carbon dioxide:
• Manage population;
• Limit the generation of wealth (reduce GDP);
• Generate the same or a higher level of GDP with less energy;
• Generate energy with less CO2 emissions; or
• Some combination of the first four factors.

Dramatically reducing population is outside the policy envelope - it is simply not acceptable, although Paul Ehrlich, James Lovelock, Peter Singer and others have advocated dramatic reductions in population to forestall climate change and other environmental disasters that these writers attribute to humans. 43 Further, climate policies must not cost too much, or they must be implemented in a way that leads to economic growth. If they cost too much or prevent economic growth, particularly the economic development of poor countries, they will simply not be politically acceptable. To examine the remaining options, one can rewrite the Kaya identity as:

C = N × (Y/N) × (C/Y),

where technology (C/Y) is simply the ratio of CO2 emissions to GDP. In 2006, 29.12 Gt CO2 were emitted globally while global GDP amounted to $47.267 trillion, so that the technology or emissions-to-GDP ratio was 0.62 tCO2 per $1,000 of GDP. From 1980 to 2006, the world's C/Y ratio fell from 0.92 to 0.62 tCO2 per $1,000 GDP. This is seen in Fig. 7.7, where the emissions intensities of selected countries are also provided. In making these calculations, a measure of purchasing power parity (PPP) GDP is used. The carbon intensity of an economy depends on the GDP value that one employs. In Fig. 7.7a, we provide PPP GDP in constant 1990 Geary-Khamis dollars (GK$), but in Figs. 7.7b, c we employ PPP GDP measured in constant 2000 US$. 44 Notice that, for the United Kingdom, the GK$ measure has carbon intensity falling from 0.85 in 1980 to 0.42 in 2006, while carbon intensity falls from 0.63 to 0.31 in US$ terms. In both cases, however, the proportional decline in carbon intensity is the same. The global carbon intensity has been falling at a rate of about 0.012 tCO2 per $1,000 GDP per year.
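A minimal sketch makes the identity concrete. The 2006 global figures below are the ones quoted above; the population and energy-intensity values used in the decomposition are rough placeholders for illustration.

```python
# Minimal sketch of the Kaya identity (7.8): C = N * (Y/N) * (E/Y) * (C/E).
# Global 2006 totals are the text's figures; the per-term decomposition
# uses rough placeholder values, with C/E backed out for consistency.

def kaya_emissions(population, gdp_per_capita, energy_intensity, carbon_intensity):
    """CO2 emissions implied by the four Kaya factors."""
    return population * gdp_per_capita * energy_intensity * carbon_intensity

# Check the collapsed form C = Y * (C/Y) against the text's 2006 numbers
gdp_2006 = 47.267e12            # global GDP, $ (PPP)
c_per_y  = 29.12e9 / gdp_2006   # emissions-to-GDP ratio, tCO2 per $
print(f"C/Y = {1000 * c_per_y:.2f} tCO2 per $1,000 GDP")   # ~0.62, as in text

# Placeholder decomposition consistent with the same total:
N   = 6.5e9                          # world population, rough 2006 figure
y   = gdp_2006 / N                   # per capita GDP, $
e_y = 8.5e-3                         # energy intensity, GJ per $ (placeholder)
c_e = 29.12e9 / (gdp_2006 * e_y)     # carbon intensity of energy, backed out
print(f"C = {kaya_emissions(N, y, e_y, c_e) / 1e9:.2f} GtCO2")  # ~29.12
```

The point of the identity is bookkeeping, not causation: any emissions target must be delivered through some combination of the four factors, which is exactly the policy dilemma enumerated in the bullet list above.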
This average rate of decline is about the same for rich and poor countries, but rates vary considerably from one country to another. For example, rich countries that have recently shed aluminum production (Japan) or replaced coal-fired power plants with nuclear ones (most notably France) have experienced faster improvements in carbon intensity than other countries. Once the most carbon-intensive industries have been moved offshore and the least costly transitions to 'green' power have been implemented, it becomes increasingly difficult for a country to increase the rate at which the carbon intensity of its economy declines. Indeed, many rich countries have reduced their domestic CO2 emissions since 1990 by shifting production of consumer goods to developing nations (Peters et al. 2011). As a result, since 1990 the CO2 emissions embodied in goods exported from poor to rich countries increased from 400 million tons to 1.6 billion tons, or by 8% per year.

43 Possibly the most depressing book on the subject is by David Benatar (2006), who ultimately concludes that a human population of zero is ideal. He says he would commit suicide except that he is needed to make others aware of the harm that humans cause to the earth. See Wanliss (2010) for an alternative perspective.

44 PPP adjusts country-level GDP for cost of living rather than using current exchange rates. GK$ is explained at: http://en.wikipedia.org/wiki/Geary%E2%80%93Khamis_dollar (viewed May 16, 2011).

If we consider the so-called BRIC countries (Brazil, Russia, India and China), we find that, while China and Russia have improved their carbon intensities, Brazil and India are likely to see emissions per $1,000 GDP rise before they fall, because their current C/Y is much lower than that of even rich countries characterized by large service sectors. Hence, one might expect an intensification of emissions as a result of the rapid industrialization (including construction of infrastructure) that generally accompanies economic growth, as witnessed in China.

The United Kingdom has perhaps the most draconian climate legislation of any government: legislation passed in December 2008 requires the UK to reduce greenhouse gas emissions by 34% by 2022. The UK also has one of the lowest C/Y ratios (using the 1990 GK$ measure), with C/Y = 0.42 in 2006; but France has the lowest carbon intensity among rich countries, with C/Y = 0.30 in 2006. The low rates in these countries are in large part due to their success in moving manufacturing offshore and, in the case of France, heavy reliance on nuclear energy. France took 20 years to move from C/Y = 0.42 to C/Y = 0.30, and not as a result of a concerted effort to reduce CO2 emissions. Roger Pielke Jr. (2009) estimated that, to meet its climate policy targets, the UK will need to get to C/Y = 0.30 in 5 years. This would require, for example, the immediate construction of 40 nuclear power plants, each with a capacity of 1,100 megawatts (MW). The reason why it is unrealistic to achieve stabilization of atmospheric CO2 at 450 ppmv, or any lower target, is that developing countries are going to increase their emissions of CO2 rapidly over the next 50 years.
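A quick back-of-envelope calculation (ours, using the C/Y figures just cited) shows why Pielke Jr. regards the UK target as unrealistic: it implies roughly four times France's historical rate of decarbonization.

```python
# Implied compound annual rates of decline in carbon intensity C/Y,
# computed from the figures quoted in the text.

def annual_decline(start, end, years):
    """Compound annual rate of decline in C/Y over the given horizon."""
    return 1.0 - (end / start) ** (1.0 / years)

# France: C/Y fell from 0.42 to 0.30 (GK$ measure) over 20 years
print(f"France:    {100 * annual_decline(0.42, 0.30, 20):.1f}% per year")  # ~1.7%

# UK target per Pielke Jr.: the same 0.42 -> 0.30 move in just 5 years
print(f"UK target: {100 * annual_decline(0.42, 0.30, 5):.1f}% per year")   # ~6.5%
```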
China and India will account for more emissions of CO2 than Europe and North America, and growth in emissions in these countries will swamp anything developed countries do to reduce their own emissions. For example, the incremental increase in Chinese CO2 emissions over 2½ months equals Canada's total annual emissions. The enormity of the task is further illustrated in Table 7.10, which examines what would be needed to offset just 10% of Canada's CO2 emissions: about 20 new nuclear power plants, or 20 large-scale hydroelectric dams, or huge investments in solar and/or wind farms. (As noted in Table 7.10, a large-scale dam here is the size of the Site C facility proposed on the Peace River in northeastern British Columbia, a project that is unlikely to go ahead as a result of environmental, local, native and other lobby groups.) None of these is likely to be constructed in the near future, at least not on this scale. The reason has to do with environmental and local opposition to any of these options, including opposition to the construction of transmission facilities that might be needed if electricity is generated in remote locations and must be transmitted to developed areas. Inevitably, some residents will be affected and will mount campaigns opposing construction, which will require years of negotiation and litigation to resolve.

The past century witnessed a tremendous reduction in mortality because of improvements in general health, and that was accompanied by large population increases as life expectancy rose and infant mortality declined. At the same time, air and water quality in the developed countries improved significantly as citizens demanded environmental improvements. These improvements are the direct result of rising per capita incomes. As we saw in Chap. 4 (Sect. 4.1), incomes in developing countries are projected by the IPCC to increase substantially over the next 50-90 years, so much so that poverty will effectively be eliminated. This implies that even the poorest countries will have the resources needed to adapt quite easily to climate change. The higher per capita incomes assumed to accompany projected climate change also make it difficult to justify mitigation. Consider the with-and-without effects of mitigation. Because energy is the most important driver of development, reducing the ability of poor people to access cheap energy serves only to keep them poor. Mitigation policies slow the economic growth of developing (and developed) countries, but will prevent the increased mortality associated with climate change. Without mitigation, on the other hand, real per capita incomes of poor people will rise to such an extent that poverty is eliminated. Based on historical evidence, this will lead to large reductions in mortality from almost all causes. As indicated in this chapter, because the Earth's poorest are projected to have high per capita incomes in the absence of action to mitigate climate change, the overall benefits of allowing climate change to occur might well exceed those of taking action and keeping countries in poverty.

This provides one explanation of why developing countries are not keen on taking action to reduce carbon dioxide emissions if doing so in any way slows economic growth. Another possible explanation relates to the potentially flawed focus on carbon dioxide. In their book SuperFreakonomics, Steven Levitt and Stephen Dubner argue that the focus of climate change mitigation should not be on carbon dioxide.
They argue that, if the objective is to mitigate global warming, then this should be done in the socially most efficient manner - at least cost - and that might not entail controls on CO2 emissions. After all, carbon dioxide is necessary for plant growth, and increasing atmospheric CO2 has helped drive the green revolution (as noted in Sect. 7.1 above). One low-cost solution to the problem of global warming has been suggested by the Dutch Nobel Prize-winning chemist Paul Crutzen. He argues that, since it is difficult to get people to reduce greenhouse gas emissions sufficiently to mitigate global warming, it is simpler to inject sulfur into the stratosphere, as this could rapidly reduce temperatures (Crutzen 2006). Levitt and Dubner (2009) point out that it is potentially possible to spray sulfur dioxide into the stratosphere using a specialized hose, akin to a garden hose, and suggest that such a device could be located near Fort McMurray, Alberta - Canada's oil sands - where sulfur and energy are readily available and the location is suited to having SO2 circulate throughout Earth's entire upper atmosphere, thereby reducing warming. The costs would be several hundred million dollars annually, compared to the trillions of dollars needed to achieve the same mitigation benefits from reduced CO2 emissions, while the negative impact of the SO2 would be relatively small.

There are other geo-engineering solutions that would permit us to continue using the Earth's plentiful fossil fuels and emitting greenhouse gases. These do not focus on carbon dioxide but on other factors that affect climate. Indeed, Martin Weitzman's assumption that a technological solution exists was highly influenced by Scott Barrett's argument that there are economically inexpensive geo-engineering solutions to the problem of global warming (Barrett 2008, 2009). David Keith of the University of Calgary is an enthusiastic proponent of an engineered climate who is cited by Barrett, Weitzman and others. In personal conversations, he assures listeners that one day, when the planet gets hot enough, we will simply use carbon capture and storage technologies to take CO2 out of the atmosphere on a large scale and thereby cool the globe; alternatively, we can release CO2 to warm the globe.

Finally, recall that Prins et al. (2010) advocated a small carbon tax that would be used to fund research and development into solutions to the problem of global warming. R&D was also the favored option at the most recent Copenhagen Consensus, which focused solely on climate change. Five economists, including three Nobel laureates, were asked to rank 15 options for addressing climate change. These are listed in Table 7.11 along with the economists' individual rankings, where we use a 15-point scale with each individual's best option given a score of 15 and the lowest-ranked alternative a score of 1. The overall ranking of the alternatives is provided in the final column, and differs slightly from that presented in Lomborg (2010, pp. 381-382). Included in the table is the type of solution each option represents - whether reliance on engineering, R&D, technology transfer, adaptation, terrestrial carbon sinks, or cutting anthropogenic emissions of carbon dioxide or methane.
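The aggregation just described is essentially a Borda count. A minimal sketch, using made-up options and rankings rather than the panel's actual Table 7.11 scores, shows how the individual point scores combine into the overall ranking:

```python
# Borda-style aggregation of expert rankings, as described for Table 7.11.
# The options and individual rankings below are made up for illustration;
# they are NOT the Copenhagen Consensus panel's actual choices or scores.

options = ["cloud whitening R&D", "energy R&D", "carbon tax", "adaptation"]

# Each expert ranks all options from best to worst.
expert_rankings = [
    ["energy R&D", "cloud whitening R&D", "adaptation", "carbon tax"],
    ["cloud whitening R&D", "energy R&D", "adaptation", "carbon tax"],
    ["energy R&D", "adaptation", "cloud whitening R&D", "carbon tax"],
]

# Best option earns len(options) points, worst earns 1; sum across experts.
scores = {opt: 0 for opt in options}
for ranking in expert_rankings:
    for position, opt in enumerate(ranking):
        scores[opt] += len(options) - position

for opt, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:2d}  {opt}")
```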
Interestingly, the high-profile panel of economists consistently ranked reduction of greenhouse gas emissions at or near the bottom, with carbon taxes and emissions trading judged the worst of all options for addressing climate change. Adaptation was ranked 5th, behind only research and development into cloud whitening, energy (see Chaps. 10 and 11 below), carbon storage and stratospheric aerosol insertion. Adaptation was thus ranked ahead of the approach advocated by environmentalists and seriously considered by policymakers, namely carbon taxes or emissions trading. Ranked in the middle were two other technology options (including transfer of technology), forest ecosystem sinks (discussed further in Chap. 9), reduction in methane emissions (methane being a potent greenhouse gas), and means for reducing black carbon output from stoves in poor nations and from diesel vehicles. With the exception of research into air capture, the remaining options were considered fair or poor.

Clearly, decision makers need to reconsider their focus when it comes to climate change. Despite efforts by environmentalists to impose controls on fossil fuel emissions and, seemingly, to implement global tax and/or emission trading schemes, if global warming is truly the problem that global society needs to solve (and it might not be), it is necessary to adopt policies that are most effective in addressing climate change. The most effective policies may not be ones that seek to reduce fossil fuel emissions.