CLIMATE RESEARCH
Clim Res, Vol. 18: 259–275, 2001 Published November 2
REVIEW
Modeling climatic effects of anthropogenic
carbon dioxide emissions: unknowns and uncertainties
Willie Soon 1, 2*; Sallie Baliunas 1, 2; Sherwood B. Idso 3; Kirill Ya. Kondratyev 4; Eric S. Posmentier 5

1 Harvard-Smithsonian Center for Astrophysics, Cambridge, Massachusetts 02138, USA
2 Mount Wilson Observatory, Mount Wilson, California 91023, USA
3 US Water Conservation Laboratory, Phoenix, Arizona 85040, USA
4 Research Centre for Ecological Safety, Russian Academy of Sciences, St. Petersburg 197110, Russia
5 Long Island University, Brooklyn, New York 11201, USA

*E-mail: [email protected]
ABSTRACT:
A likelihood of disastrous global environmental consequences has been surmised as a result of projected increases in anthropogenic greenhouse gas emissions. These estimates are based on computer climate modeling, a branch of science still in its infancy despite recent substantial strides in knowledge. Because the expected anthropogenic climate forcings are relatively small compared to other background and forcing factors (internal and external), the credibility of the modeled global and regional responses rests on the validity of the models. We focus on this important question of climate model validation. Specifically, we review common deficiencies in general circulation model (GCM) calculations of atmospheric temperature, surface temperature, precipitation and their spatial and temporal variability. These deficiencies arise from complex problems associated with parameterization of multiply interacting climate components, forcings and feedbacks, involving especially clouds and oceans. We also review examples of expected climatic impacts from anthropogenic CO2 forcing. Given the host of uncertainties and unknowns in the difficult but important task of climate modeling, the unique attribution of observed current climate change to increased atmospheric CO2 concentration, including the relatively well-observed latest 20 yr, is not possible. We further conclude that the incautious use of GCMs to make future climate projections from incomplete or unknown forcing scenarios is antithetical to the intrinsically heuristic value of models. Such uncritical application of climate models has led to the commonly held but erroneous impression that modeling has proven or substantiated the hypothesis that CO2 added to the air has caused or will cause significant global warming. An assessment of the merits of GCMs and their use in suggesting a discernible human influence on global climate can be found in the joint World Meteorological Organisation and United Nations Environmental Programme’s Intergovernmental Panel on Climate Change (IPCC) reports (1990, 1995 and the upcoming 2001 report). Our review highlights only the enormous scientific difficulties facing the calculation of climatic effects of added atmospheric CO2 in a GCM. The purpose of such a limited review of the deficiencies of climate model physics and the use of GCMs is to illuminate areas for improvement. Our review does not disprove a significant anthropogenic influence on global climate.
KEY WORDS: Climate change · Climate model · Global warming · Carbon dioxide
1. INTRODUCTION

A complete and comprehensive calculation of the effects of increasing atmospheric CO2 concentration must overcome 3 closely connected problems: (1) calculation of the future trajectory of the air’s CO2 concentration, (2) calculation of its climatic effects, and (3) separation of the CO2 impacts from other climatic changes. The first problem involves humanity’s impact on the global carbon budget. Anthropogenic emissions of CO2 are mainly the result of fossil fuel (coal, gas and oil) use, which is related to energy consumption and, hence, the world economy.
One convenient scheme studies these relationships within the framework of 4 independent variables: CO2 released per unit energy, energy consumed per unit of economic output, economic output per person, and population (Hoffert et al. 1998, Victor 1998). That perspective raises one major question: can economy and technology be sufficiently well prescribed that future energy consumption can be reliably predicted? It also leads to a subsequent question: what controls the physical exchanges of CO2, and how do these factors apportion anthropogenic CO2 emissions among the various reservoirs of the climate system?
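In symbols, this 4-factor decomposition is the well-known Kaya identity; as a minimal sketch (our notation, not notation taken from Hoffert et al. 1998), with F the annual CO2 emission, P the population, G the economic output and E the primary energy consumption:

```latex
% Kaya identity: each ratio corresponds to one of the 4 variables above
F = P \cdot \frac{G}{P} \cdot \frac{E}{G} \cdot \frac{F}{E}
```

Forecasting F therefore requires forecasting every factor, including the economic ratios G/P and E/G, which returns us directly to the questions posed above.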
With respect to these questions, we note that about one-third of humanity’s carbon production has remained in the atmosphere, with a less certain division between the terrestrial biosphere and oceans (Field & Fung 1999, Joos et al. 1999, Rayner et al. 1999, Giardina & Ryan 2000, Schimel et al. 2000, Valentini et al. 2000, Yang & Wang 2000), while economic prediction is a notoriously complex proposition that is even less well defined (Sen 1986, Arthur 1999).
The second and third problems belong to the natural sciences. Here, climate scientists seek a theory capable of describing the thermodynamics, dynamics, chemistry and biology of the Earth’s atmosphere, land and oceans. Another fundamental barrier to our understanding and description of the climate system is the inherent unpredictability of even a seemingly deterministic set of equations beyond a certain time horizon (Lighthill 1986, Essex 1991, Tucker 1999). The good news is that attempts to estimate the global weather or climate attractor directly from the primitive equations governing large-scale atmospheric motions yield a finite bound (Lions et al. 1997).
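This predictability horizon can be illustrated concretely with the classic Lorenz (1963) system (our illustration; none of the papers cited above is tied to this particular example). Two trajectories that start almost identically diverge completely after a finite time, even though the governing equations are perfectly deterministic:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])  # perturb one coordinate by 1 part in 1e8
for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
# The separation grows roughly exponentially; well before t ~ 30 the 2
# trajectories are effectively uncorrelated despite deterministic equations.
```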
An additional difficulty concerns the logistics of modeling a system with spatial and temporal scales that range from cloud microphysics to the global circulation. Fortunately, this difficulty can be circumvented because of empirical ‘loopholes’ such as the existence of gaps in the energy spectrum of atmospheric and oceanic motions that allow for the separation of various physical and temporal scales. If, for example, climate is viewed as an average over a hypothetical ensemble of atmospheric states in equilibrium with slowly changing external conditions, then, under a regular external forcing, one may hope to anticipate the change (Houghton 1991, Palmer 1999).
Essentially all calculations of anthropogenic CO2 climatic impacts make this implicit assumption (Palmer 1999). But, in order for such a calculation to have predictive value, rather than merely representing the sensitivity of a particular model, the model must be validated specifically for its type of prediction. As a case in point, we note that predicting climate responses to individual forcings such as the long-lifetime greenhouse gas (GHG) CO2, the shorter-lifetime GHG CH4, the inhomogeneously distributed tropospheric O3 and atmospheric aerosols requires separate and independent validations for each. A logistically feasible validation for such predictions is essentially inconceivable.
The downside of exploiting the energy-gap loophole is that relevant physical processes must be parameterized in simple and usable forms. For example, most general circulation models (GCMs) treat radiation with simple empirical schemes instead of solving the equations of radiative energy transfer (Shutts & Green 1978). Chemical and biological changes in the climate system are also highly parameterized. Clearly, some empirical basis and justification for these parameterizations can be established; but, because the real atmosphere and ocean have many degrees of freedom and connections among processes, there is no guarantee that the package assembled in a GCM is complete or that it can give us a reliable approximation of reality (Essex 1991).
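The flavor of such simplification can be seen in a deliberately minimal example (ours alone; it is not a scheme from any GCM cited here): a 1-layer ‘gray’ atmosphere that replaces the full radiative transfer equations with a single bulk emissivity parameter, which then becomes a knob to tune:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temp(s0=1368.0, albedo=0.30, emissivity=0.78):
    """Equilibrium surface temperature (K) of a 1-layer gray atmosphere.

    The layer absorbs surface longwave emission with the given bulk
    emissivity and re-emits half upward, half downward, which gives
    sigma * Ts**4 = (s0 / 4) * (1 - albedo) / (1 - emissivity / 2).
    """
    absorbed_solar = s0 / 4.0 * (1.0 - albedo)
    return (absorbed_solar / (SIGMA * (1.0 - emissivity / 2.0))) ** 0.25

print(surface_temp())                 # ~288 K with these illustrative values
print(surface_temp(emissivity=0.80))  # a small shift of the tuned parameter
                                      # moves Ts by more than 1 K
```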
Going beyond the issue of limited computing resources, Goodman & Marshall (1999) and Liu et al. (1999) have elaborated on various schemes of synchronous and asynchronous coupling for the highly complex atmosphere and ocean GCMs, while warning of the extreme difficulty inherent in deciphering the underlying physical processes of the highly tangled and coupled responses. Kirk-Davidoff & Lindzen (2000) have likewise cautioned against directing all efforts into the scale-resolved physical approach of current GCM formulations.
Another important point has been raised by Oreskes et al. (1994): it is impossible to have a verified and validated numerical climate model because natural systems are never closed and model results are always non-unique. It follows from Oreskes et al. that the intrinsic value of a climate model is not predictive but heuristic. Therefore, the proper use of a climate model is to challenge existing formulations (i.e., a climate model is built to test proposed mechanisms of climate change) rather than to predict unconstrained scenarios of change by adding CO2 to the atmosphere.
2. SIMULATING CLIMATE VARIABLES
Consider the nominal, globally averaged value of 2.5 W m–2 that is associated with the total radiative forcing from the increases of all GHGs since the dawn of the Industrial Revolution. Alternatively, consider a doubling of the air’s CO2 concentration, which adds about 4 W m–2 to the troposphere-surface system.
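These forcing values can be checked against the widely used logarithmic fit for CO2 forcing (the coefficient is from Myhre et al. 1998, not from the studies reviewed here):

```python
import math

ALPHA = 5.35  # W m^-2, coefficient of the Myhre et al. (1998) logarithmic fit

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing (W m^-2) of CO2 relative to c0_ppm."""
    return ALPHA * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(560.0), 2))  # doubling from 280 ppm: ~3.71 W m^-2,
                                     # consistent with the ~4 W m^-2 quoted above
```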
In order to appreciate the difficulties of finding climatic changes associated with these forcings, it is only necessary to consider the energy budget of the entire earth-climate system. Even setting aside the nonphysical flux adjustments for freshwater, salinity and wind stress (momentum) applied in many contemporary GCMs (see discussion in Gordon et al. 2000, Mikolajewicz & Voss 2000), artificial energy or heat flux adjustments as large as 100 W m–2 are used in some GCMs to minimize unwanted drift in the coupled ocean-atmosphere system (Murphy 1995, Gleckler & Weare 1997, Cai & Gordon 1999, Dijkstra & Neelin 1999, Yu & Mechoso 1999).
Models that attempt to avoid artificial heat flux adjustments fare no better because of other substantial biases, including major systematic errors in the computation of sea-surface temperatures and sea ice over many regions, as well as large salinity and deep-ocean temperature drifts (Cai & Gordon 1999, Russell & Rind 1999, Yu & Mechoso 1999, Gordon et al. 2000, Russell et al. 2000). In addition, the global energy budgets implicit in all GCMs are uncertain: the empirically deduced fluxes of shortwave and longwave radiation and of latent and sensible heat within the surface-atmosphere system vary by at least 10 W m–2 (Kiehl & Trenberth 1997).
In addition, Grenier et al. (2000) have called for a simultaneous focus on tropical climate drift caused by heat budget imbalances at the top of the atmosphere while balancing the surface heat budget, because systematic biases in outgoing longwave radiation as large as 10 to 20 W m–2 are not uncommon in coupled ocean-atmosphere GCMs. Those artificially modified and uncertain energy components of contemporary GCMs place severe constraints on our ability to find the imprint of a mere 4 W m–2 radiative perturbation associated with anthropogenic CO2 forcing over 100 to 200 yr in the climate system.
This difficulty explains why all current GCM studies of the climatic impacts of increased atmospheric CO2 are couched in terms of relative changes based on control, or unforced, GCM numerical experiments that are known a priori to be incomplete in their forcing and feedback physics. Soon et al. (1999), for example, identified documented problems associated with models’ underestimation or incorrect prediction of natural climate change on decade-to-century time scales. Some of those problems may be connected to difficulties in modeling both the natural unforced climate variability and suspected climate forcings from volcanic eruptions, stratospheric ozone variations, tropospheric aerosol changes and variations in the radiant and particle energy outputs of the sun. Another predicament is the inability of short climatic records to reveal the range of natural variability that would allow confident assessment of probability of climatic changes on time scales of decades to centuries.
Most importantly, it is premature to conclude from the magnitudes of the forcings alone (4 W m–2 for a doubling of CO2 versus 0.4 W m–2 for July insolation changes at 60°N induced by the earth’s orbital variations over about 100 yr, a contrast made by Houghton 1991) that climatic changes caused by human-made CO2 will overwhelm the more persistent effects of positional changes in the earth’s rotation axis and orbit. The latter form of climate change, through gradual insolation change, is the suspected cause of the historical glacial and interglacial climate oscillations, while the potential influence of added CO2 can only be guessed from our experience in climate modeling.
In addition, it would also be premature to conclude, on the basis of the approximately 0.5 to 1.0 W m–2 forcing by intrinsic solar variation on decade-to-century scales versus the 0.4 W m–2 July insolation change at 60°N, that the climatic impact of variable solar irradiance forcing should be less dramatic than that of the Pleistocene glacial cycles. Historical evidence shows that large, abrupt natural climatic changes are not uncommon (Alley 2000); they occur without any known causal ties to large changes in radiative forcing.
Phase differences between atmospheric CO2 and proxy temperature in historical records are often unresolved, but atmospheric CO2 tends to follow, rather than lead, temperature and biosphere changes (Priem 1997, Dettinger & Ghil 1998, Fischer et al. 1999, Indermühle et al. 1999). In addition, there have been geological times of global cooling with rising CO2 (during the middle Miocene, about 12.5 to 14 Myr BP, for example, with a rapid expansion of the East Antarctic Ice Sheet and a reduction in chemical weathering rates), and times of global warming with low levels of atmospheric CO2 (such as the Miocene Climate Optimum, about 14.5 to 17 Myr BP, noted by Pagani et al. 1999). In order to cast anthropogenic or natural CO2 forcing as the cause of rapid climate change, various complex climatic feedback and amplification mechanisms must operate.
Most of those mechanisms for rapid climatic change are neither sufficiently known nor understood (Marotzke 2000, Stocker & Marchal 2000). One candidate fast trigger is increased atmospheric methane from the rapid release of methane hydrates trapped in permafrost and on continental margins, through changes in the temperature of intermediate-depth (a few hundred meters below sea level) waters; this may be one example of a key ingredient for amplification or feedback leading to large climatic change (Kennett et al. 2000).
2.1. Temperature
How well do current GCMs simulate atmospheric temperatures? As noted by Johnson (1997), the appearance of the IPCC (1990) report marked the recognition that all GCMs suffer from the ‘general coldness problem’, particularly in the lower tropical troposphere and upper polar troposphere (Regions 1, 3 and 5 in Fig. 1a, which together comprise 105 outcomes). The general coldness problem is seen in 104 of those 105 outcomes, drawn from 35 different simulations by 14 climate models. What is the cause of that ubiquitous error? Johnson (1997) suggests that most GCMs may suffer from extreme sensitivity to systematic physical entropy sources introduced by spurious numerical diffusion, Gibbs oscillations or inadequacies of sub-grid-scale parameterizations. Johnson estimated that a temperature bias of 10°C may be expected from only a 4% error in the modeled net heat flux linked to any number of physical entropy sources (including those arising from numerical problems with the transport and phase changes of water among vapor, liquid and ice, and from the spurious mixing of moist static energy). The analysis of Egger (1999) seems to support this result and calls for the evaluation of high-order statistical moments, such as entropies, to check the quality of numerical schemes in climate models.
A follow-on detailed numerical study by Johnson et al. (2000) sheds further light on how this critical cold-bias difficulty, associated with spurious positive-definite entropy production, contaminates the computation of hydrologic and chemical processes (by virtue of their strong inherent dependence on temperature); the error in saturation specific humidity is estimated to double for every 10°C increase in temperature. The coldness problem also extends to the stratosphere (Fig. 1b), where Pawson et al. (2000) have shown that the cold bias is more uniformly distributed. The cold bias in globally averaged temperatures is about 5 to 10°C in the troposphere and greater than 10°C in the stratosphere.
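The quoted doubling of the saturation-humidity error per 10°C is standard Clausius-Clapeyron behavior, as a quick sketch with the textbook Tetens-type approximation shows (our illustration; the formula is not taken from Johnson et al. 2000):

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Saturation vapor pressure (hPa) from a Tetens-type approximation."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

for t in (0.0, 10.0, 20.0):
    print(f"{t:4.1f} C   e_s = {saturation_vapor_pressure(t):6.2f} hPa")
# ~6.1, ~12.3 and ~23.4 hPa: saturation humidity (and any fractional error
# tied to a temperature bias) roughly doubles for every 10 C.
```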
Pawson et al. suggest that the particular coldness problem for the stratosphere is more likely associated with problems in physics such as the underestimation of radiative heating rates, because models have too little absorption of solar radiation by ozone in the near infrared. Alternatively, perhaps there is too much longwave emission in the middle atmosphere so that climate models overcool their stratospheres. Other unresolved problems concern the physical representation of gravity wave momentum deposition in the stratosphere and mesosphere, and the generation of gravity waves in the troposphere (McIntyre 1999).
Fig. 1. (a) Illustration of the cold-temperature bias problem in simulations produced by 14 different GCMs. (Note that some GCMs produced more than 1 simulation, so the total number of cases compiled for each of the 6 regions can be more than 14.) Indicated in each box are the model temperature biases relative to observations. (From Johnson 1997.) In Regions 1, 3 and 5, model results consistently show a cold bias. (b) Note that the cold bias problem (the fact that most GCM curves lie to the left of the observed temperature line labeled TOVS) extends into the stratosphere. (From Pawson et al. 2000)
Why discuss the stratosphere when our main concern is the lowest layer of the troposphere, where plants, animals and people live? There is documented evidence that inclusion of this important layer of the atmosphere can improve even weather prediction within the troposphere (Pawson et al. 2000). More important, it has only recently been appreciated that the dynamics of the stratospheric polar vortex, in close coupling with vertically propagating tropospheric planetary waves, is a key factor governing the variability of the troposphere-stratosphere winter circulation under different climate regimes on interdecadal time scales (Kodera et al. 1999, Perlwitz et al. 2000). Therefore, in order to address properly the climatic response to added atmospheric CO2 (or, for that matter, to any number of external forcings under consideration), a GCM that resolves the stratosphere appears to be another necessity.
What about surface temperatures? Notable here is the recent evaluation by Bell et al. (2000) of the interannual changes in surface temperature in the control (unforced) experiments from 16 different coupled ocean-atmosphere GCMs of the Coupled Model Intercomparison Project (CMIP) (Fig. 2). Bell et al. found that the majority of the GCMs significantly underestimate the observed, detrended world-wide averaged surface temperature variability over the oceans (Fig. 2b) while they overestimate such variability over land (Fig. 2c). This systematic difference is most clearly illustrated by the ratio of the over-land to over-ocean temperature variability in Fig. 2d. The authors discuss various factors, such as forcing agents (CO2, solar variability and volcanic eruptions) and the GCMs’ underestimation of El Niño-Southern Oscillation (ENSO) variability, that could be responsible for the systematic discrepancy between observed and GCM-predicted interannual temperature variability.
They eventually settled on nonphysical representations of land surfaces, which lead to lower soil moisture and larger land temperature variability than do more realistic land-surface schemes. Bell et al. also point out another problem in most GCMs: too much variability in the models’ surface temperatures over both land and sea at high latitudes, where excessive interannual variability in the GCMs’ predictions of snow and sea-ice coverage is noted. The findings of Bell et al. (2000) should not be surprising, as physical modeling of land processes is particularly difficult, laden as it is with many unknown factors and large uncertainties. For example, Pitman et al. (1999) determined that, for tropical forest, annually averaged simulations from 16 different GCMs varied by 79 W m–2 in sensible heat flux and by 80 W m–2 in latent heat flux. Over grassland, the respective ranges were 34 and 27 W m–2. The models’ simulated temperatures differed by 1.4 K for tropical forest and 2.2 K for grassland.
Another important concern arises from the tradeoff between realism and complexity. For example, new climate drifts appear in atmospheric GCMs given explicit treatment of land variables such as soil moisture or snow water mass; these drifts have been quantified in terms of systematic and incremental components (Dirmeyer 2001). Such a serious investment in model complexity is important for numerical weather prediction and may be needed for treating climate forcing by anthropogenic CO2, as discussed in Section 4.
Fig. 2. Comparisons of detrended 1959–1998 observed surface temperature variability with the unforced results from 16 different GCMs of the CMIP. (Temperature variability is calculated from the rms standard deviation of the annually averaged data.) The statistically significant difference between the observed and GCM ratios of the land/ocean variability (d) has been shown to be associated with an inadequate or incorrect parameterization of land surface processes. (From Bell et al. 2000)
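For readers who wish to reproduce the statistic of Fig. 2, a minimal sketch follows (our reconstruction from the caption, applied here to synthetic placeholder series rather than to the CMIP output):

```python
import numpy as np

def detrended_std(annual_means):
    """Standard deviation of an annual-mean series after removing a
    linear least-squares trend, as described in the Fig. 2 caption."""
    t = np.arange(len(annual_means))
    slope, intercept = np.polyfit(t, annual_means, 1)
    residuals = annual_means - (slope * t + intercept)
    return residuals.std(ddof=1)

rng = np.random.default_rng(0)
trend = 0.01 * np.arange(40)                       # 40 yr of synthetic data
land = trend + rng.normal(0.0, 0.25, 40)           # noisier over land
ocean = trend + rng.normal(0.0, 0.15, 40)          # smoother over ocean
print(detrended_std(land) / detrended_std(ocean))  # land/ocean ratio (Fig. 2d)
```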
2.2. Precipitation
Soden (2000) has documented a problem in the current generation of GCMs: some 30 different atmospheric GCMs of the Atmospheric Model Intercomparison Project (AMIP) fail to reproduce faithfully the interannual changes in precipitation over the tropics (30°N to 30°S). Fig. 3 depicts the good agreement between observations and the GCMs’ simulations of atmospheric water vapor content, tropospheric temperature at 200 mb, and outgoing longwave radiation (OLR), but it also reveals the poor agreement between observations and model simulations of precipitation and net downward longwave radiation at the surface. Considering especially the direct association of latent heat release from the precipitation of moist air with the warming and cooling of the atmosphere, Soden (2000) warned that the good agreement between the observed and modeled temperature at 200 mb (Fig. 3c) is surprising in light of the large differences in the simultaneous comparison of the precipitation field (Fig. 3a).
This comparison suggests that the temperature agreement at 200 mb could be fortuitous, since the atmospheric GCMs were forced with observed sea-surface temperatures, while the modeled interannual variabilities of the hydrologic cycle are seriously underestimated, by a factor of 3 to 4. Based on the models’ relatively constant values of downward longwave radiation reaching the surface (Fig. 3e), Soden (2000) points to possible systematic errors in current GCM representations of low-lying boundary layer clouds. However, the study cannot exclude the possibility of errors in the algorithms that retrieve precipitation data from satellite observations, which would emphasize the need for improved precipitation products.
Fig. 3. Comparison of the observed (thick solid line) tropical-mean interannual variations of (a) precipitation (⟨δP⟩), (b) total precipitable water vapor (⟨δW⟩), (c) temperature at 200 mb (⟨δT200⟩), (d) outgoing longwave radiation (OLR) at the top of the atmosphere (⟨δOLR⟩), and (e) the net downward longwave radiation at the surface (⟨δLWsfc⟩) with the ensemble mean of 30 AMIP GCM results (the thin solid curve, overlaid with vertical lines showing the range of 1 intermodel standard deviation of the ensemble mean). Contrast the good agreement for simulated water vapor, 200 mb temperature and OLR with the internally inconsistent results for precipitation and net surface longwave radiation. (All climate simulations were forced with observed SST.) (From Soden 2000)
2.3. Water vapor
Soden (2000) highlighted the ability of GCMs to simulate the correct sign and magnitude of the observed water vapor change in Fig. 3b. This conclusion agrees with the extensive review by Held & Soden (2000) on water vapor feedbacks in GCMs. Held & Soden called for a clearer recognition of GCMs’ proficiency in calculating the water vapor feedback (which diagnoses model ability to simulate the residual between evaporation and precipitation, rather than evaporation or precipitation per se) as distinct from GCMs’ representation of the more complicated physics of cloud forcing and feedback.
However, it is important to add that the latest analyses of the interannual correlation between the tropical-mean water vapor content of the atmosphere and its surface value continue to show significant differences between the vertical patterns derived from rawinsonde data and those from GCM outputs, including those of the newer AMIP2 study (Sun et al. 2001). Essentially, in comparison with rawinsonde data, GCMs exhibit too strong a coupling between mid-to-upper tropospheric water vapor and surface water vapor. Water vapor in GCMs has also been found to have a stronger dependence on atmospheric temperature than the empirical relation deduced from observations.
Finally, purely numerical problems exist as well; they are associated with physically impossible, negative specific humidity in the Northern Hemisphere (NH) extratropics, caused by problematic parameterization of steep topographical features (Rasch & Williamson 1990, Schneider et al. 1999).
2.4. Clouds
In Fig. 4, we show the sensitivity of the parameterized large-scale cloud cover used in one state-of-the-art model (Yang et al. 2000). As parameterized, cloud cover is extremely sensitive to the relative humidity, U, and to both Us, the saturated relative humidity within the cloud, and U00, the threshold relative humidity at which condensation begins. The creators of this GCM discuss how the formula is used to tune the formation of clouds (through large-scale condensation at high latitudes or near-polar regions) by 20 to 30% in order to match what is observed.
Fig. 4. The parameterized cloud cover is very sensitive (contrasted by cases A, B and C) to relative humidity, U, and to values of Us, the saturated relative humidity within the cloud, and U00, the threshold relative humidity at which condensation begins. (From Yang et al. 2000)
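The kind of formula at issue can be sketched with a generic Sundqvist-type diagnostic scheme (a common textbook form; we do not claim it is the exact expression of Yang et al. 2000):

```python
def cloud_cover(u, u00=0.85, us=1.0):
    """Diagnostic large-scale cloud fraction from grid-mean relative
    humidity u, threshold u00 and in-cloud saturation value us
    (Sundqvist-type form)."""
    if u <= u00:
        return 0.0
    if u >= us:
        return 1.0
    return 1.0 - ((us - u) / (us - u00)) ** 0.5

for u in (0.86, 0.90, 0.95):
    # Shifting the tuned threshold u00 by a few percent changes the
    # diagnosed cloud fraction drastically, as in Fig. 4.
    print(u, round(cloud_cover(u, u00=0.85), 2), round(cloud_cover(u, u00=0.80), 2))
```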
Other researchers, such as Grabowski (2000), emphasize the importance of the proper evaluation of the effects of cloud microphysics on tropical climate by using models that directly resolve mesoscale dynamics. Grabowski points out that the main effect of cloud microphysics is on the ocean surface rather than directly on atmospheric processes. Because of the great mismatch between the time scales of oceanic and atmospheric dynamics, Grabowski was pessimistic about quantifying the relation between cloud microphysics and tropical climate. Clearly, the parameterizations of cloud microphysics and cloud formation processes, as well as their interactions with other variables of the ocean and atmosphere, remain major challenges for climate modelers.
Given the range of uncertainties and the numerous unknowns associated with parameterizations of important climatic processes and variables, what should one expect from current GCMs for a scenario with an increased CO2 forcing? The most common difficulty facing the interpretation of many GCM results is the confusion arising from imposed natural and anthropogenic forcings that may or may not be internally consistent. This is why Bengtsson et al. (1999) and Covey (2000) have called for a more inclusive consideration of all climate forcings, accurately known or otherwise, rather than a piecemeal approach that yields oversimplifications.
Many qualitative outcomes of forcing by anthropogenic GHGs have been postulated, such as changes in standard ocean-atmosphere variables of wind, water vapor, rain, snow, land and sea ice, sea level, and the frequency and intensity of extreme events such as storms and hurricanes (Soon et al. 1999), as well as more exotic phenomena, including large cooling of the mesosphere and thermosphere (Akmaev & Fomichev 2000), increased presence or brightness of noctilucent clouds near the polar summer mesopause (Thomas 1996, but see Gadsden 1998), increases in atmospheric angular momentum and length of day (Abarca del Rio 1999, Huang et al. 2001), and shrinking of surfaces of constant density at operating satellite altitudes (Keating et al. 2000). In these calculations, the benchmark forcing scenario is usually an emission rate of 1% yr–1 chosen to represent roughly the CO2 equivalent of the burden of all anthropogenic GHGs.
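For orientation, simple compound-growth arithmetic (ours, not a result of the studies above) shows what such benchmark rates imply for the time to CO2 doubling:

```python
import math

def doubling_time(rate_per_year):
    """Years for a quantity growing at a fixed fractional rate to double."""
    return math.log(2.0) / math.log(1.0 + rate_per_year)

print(round(doubling_time(0.01), 1))   # 1% yr^-1 benchmark: ~69.7 yr
print(round(doubling_time(0.004), 1))  # ~0.4% yr^-1 observed rise: ~173.6 yr
```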
Although some of these studies claim an observational detection consistent with modeled CO2 effects, it is clear that even the theoretical claims, with their strong bias towards accounting for only the effects of GHGs, are neither robust nor internally consistent. A good example is the prediction for the change of the Arctic Oscillation (AO) pattern of atmospheric circulation by the year 2100. The AO is one of the key variability patterns of the wintertime atmospheric circulation over the NH, characterized broadly by a redistribution of air mass between polar regions and midlatitudes. Here, Zorita & González-Rouco (2000) found, using results from 2 different GCMs and a total of 6 simulations with different initial conditions, that both upward and downward tendencies in the intensity of the AO circulation pattern are likely under the same scenario of increasing atmospheric CO2. Apparently, internal model variability dominates those effects from the external forcing of CO2 and leads to an ambiguous expectation for a CO2-related signal in the modeled AO variability. This re-emphasis on unforced internal variability is consistent with the recent classification of the observed vertical structures of the AO into distinct perturbations originating in the troposphere versus stratosphere by Kodera & Kuroda (2000). Besides cautioning about the lack of robustness of previous claims for the AO owing to increased CO2 forcing, Zorita & González-Rouco highlighted the direct impact of that unknown on the calculation of the NH’s regional climate change in the extratropics.
Some theoretically predicted CO2 effects are not detectable unless a very high, or even extreme, level of CO2 loading is imposed. It is also predicted that a transient GCM experiment forced with a slower CO2 growth rate of 0.25% yr–1, as opposed to the present growth rate of about 0.4% yr–1, will ultimately lead to a relatively larger sea-level rise (based only on the thermal expansion of sea water; Stouffer & Manabe 1999). By the time the atmosphere’s carbon dioxide content has doubled, an additional 15 cm rise is expected (the calculated global sea-level rise for the 0.4% yr–1 case is roughly 27 cm), because the atmospheric heating anomaly of a world with a slower CO2 growth rate has more time to penetrate deep into the ocean, causing a relatively larger thermal expansion of seawater and hence a larger rise in sea level.
One example of a problem with estimating the effects of a high level of atmospheric CO2 loading concerns potential changes in ENSO characteristics, for which no statistically significant change is predicted until the anthropogenic forcing is 4 times the preindustrial value (Collins 2000a). On the other hand, Collins (2000b) subsequently reported a surprising result: no significant change in ENSO characteristics occurred in a similar 4 × CO2 numerical experiment based on an updated GCM with improved horizontal ocean resolution and no heat flux adjustment. Collins concluded that the calculated ENSO response to increasing GHG forcing can depend sensitively and nonlinearly on subtle changes in the model representation of sub-grid processes (rather than on gross model parameters such as ocean resolution and heat flux adjustment, which are the main differences between the new and old versions of the GCM he used). Thus, Collins concludes, exploration of the parameter space of coupled ocean-atmosphere GCMs is crucial for improved understanding. As for the statistics of recent ENSO variability, Timmermann (1999) has shown that the observed changes are not inconsistent with the null hypothesis of natural variability of a non-stationary climate. In addition, the careful case study by Landsea & Knaff (2000) confirmed that no current climate model provided both useful and skillful forecasts of the entire 1997–1998 El Niño event.
3.1. Expected changes in seasonal temperatures?
We will consider 3 responses under the typical equivalent CO2-forcing scenario of 1% yr–1, starting with the seasons. Is the CO2-forced change expected to alter the character of seasonal cycles? If so, how do predictions compare with what is observed, at least over the last few decades?
Jain et al. (1999) examined this question by considering 3 parameters for the NH surface temperature: the mean temperature’s amplitude and phase, the equator-to-pole surface temperature gradient (EPG), and the ocean-land surface temperature contrast (OLC).
A comparison of observed and modeled EPG and OLC climatologies is summarized in Table 1. The results show that the expected changes owing to CO2 forcing are often very small compared to the differences between unforced GCM and observed values of EPG and OLC. Hence, detecting CO2 effects in seasonal differences of EPG and OLC may not be feasible.

Table 1. Observations and predictions (both unforced GCM and CO2-forced GCM results) of seasonal and annual Northern Hemisphere (NH) equator-to-pole surface temperature gradients (in °C per 5° latitude; EPG) and ocean-land surface temperature contrasts (in °C; OLC). (From Jain et al. 1999)
In light of these difficulties, seasonal cycles are probably not good ‘fingerprints’ for identifying the impact of anthropogenic CO2. This conclusion seems consistent with the independent finding by Covey et al. (2000) that showed seasonal cycle amplitude to depend only weakly on equilibrium climate sensitivity (i.e., equivalent to a varying climate forcing in the present comparison), based on the range of results from 17 coupled ocean-atmosphere GCMs from the CMIP. If these results are correct, then it is odd that seasonality in forcing (from geometrical changes in solar insolation by changing tilt angle of the earth’s rotation axis and the earth’s orbital position around the sun) is believed to cause very large changes in mean climate, but significant changes in mean forcing, e.g., from atmospheric CO2, cause only insignificant changes in the seasonal climatology.
3.2. Expected changes in clouds?
Next, consider clouds. Given the complexity of representing their relevant processes, can one expect to find a CO2-forced imprint in clouds? First, as Yao & Del Genio (1999) have noted, it is misleading to assert that increased cloud cover is evidence of CO2-produced global warming (i.e., a warming climate with more evaporation and, hence, more clouds), because cloud cover depends more on relative humidity than on specific humidity. For example, in CO2-doubling experiments with different parameterization schemes, Yao & Del Genio (1999) predicted a decrease in global cloud cover, although there was an increase in mid- and high-latitude continental cloudiness. They also cautioned that because a ‘physical basis for parameterizing cloud cover does not yet exist,’ all predictions about cloud changes in response to rising atmospheric CO2 concentrations should be viewed carefully.
Others, such as Senior (1999), have emphasized the importance of including parameterizations of interactive cloud radiative properties in GCMs and called for a common diagnostic output such as the water path length within the cloud in control (unforced) experiments.
On another research front, Rotstayn (1999) implemented the detailed microphysical processes of a prognostic cloud scheme in a GCM and found a large difference in climate sensitivity between that experiment and one with a diagnostic treatment of clouds. A stronger water vapor feedback was noted in the run with the prognostic cloud scheme than in the run with the diagnostic scheme, and that stronger water vapor feedback caused a strong upward shift of the tropopause upon warming. Rotstayn found that an artificial restriction on the maximum heights of high clouds in the diagnostic scheme largely explained the differences in climatic response. At this stage of incremental learning, we conclude that no reliable predictions currently exist for the response of clouds to increased atmospheric CO2. So sensitive are certain cloud feedbacks to cloud microphysics that, for example, a reduction of low-level stratus cloud droplet radius from 10 to 8 µm would be sufficient to balance the warming from a doubling of the air’s CO2 concentration. Likewise, a 4% increase in the area of stratus clouds over the globe could also potentially compensate for the estimated warming of a doubled atmospheric CO2 concentration (Miles et al. 2000).
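The leverage of droplet size can be illustrated with 2 standard approximations (our illustration, using formulas from the general literature rather than from Miles et al. 2000): cloud optical depth tau = 3 LWP / (2 rho_w r_e), and a Lacis & Hansen (1974)-type fit for the albedo of a non-absorbing cloud, albedo = tau / (tau + 7.7):

```python
RHO_W = 1000.0  # density of liquid water, kg m^-3

def cloud_albedo(lwp_kg_m2, re_m):
    """Shortwave cloud albedo from liquid water path and droplet
    effective radius, via tau = 3*LWP/(2*rho_w*re) and the
    two-stream-type fit albedo = tau/(tau + 7.7)."""
    tau = 3.0 * lwp_kg_m2 / (2.0 * RHO_W * re_m)
    return tau / (tau + 7.7)

for re_um in (10.0, 8.0):
    print(re_um, round(cloud_albedo(0.1, re_um * 1e-6), 3))
# With the liquid water path fixed at 100 g m^-2, shrinking droplets from
# 10 to 8 um raises the cloud albedo by ~0.05, which over widespread
# stratus corresponds to several W m^-2 of extra reflected sunlight.
```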
3.3. Expected changes in the oceans?
Finally, consider the oceans. Under an increased atmospheric CO2 forcing, e.g., of 1% yr–1, one commonly predicted transient response is a weakening of the North Atlantic thermohaline circulation (THC), owing to an increase in freshwater influx (Dixon et al. 1999, Rahmstorf & Ganopolski 1999, Russell & Rind 1999, Wood et al. 1999, Mikolajewicz & Voss 2000: see Fig. 5a). However, with an improved representation of air-sea interactions in the tropics, the significant weakening (or even collapse under stronger and persistent forcing) of the THC predicted by earlier GCMs cannot be reproduced (Latif et al. 2000: see Fig. 5b). (While considering Latif et al.’s results in Fig. 5b, it is useful to note that the coarser version of the Max Planck Institut für Meteorologie at Hamburg (MPI) model did predict a weakening of the thermohaline circulation, just like the other models in Fig. 5a.)
In another GCM experiment, Russell & Rind (1999) observed that, despite a global warming of 1.4°C near the time of CO2 doubling, large regional cooling of up to 4°C occurred in both the North Atlantic Ocean (56–80° N, 35°W–45° E) and South Pacific (near the Ross Sea, 60–72° S, 165°E–115° W) because of reduced meridional poleward heat transfer over the North Atlantic and local convection over the South Pacific. However, Russell et al. (2000) later demonstrated that the predicted regional changes over the Southern Ocean were unreliable because of the model’s excessive sea ice variability. Another GCM’s high-latitude southern ocean suffered a large drift (Cai & Gordon 1999). For example, within 100 yr after coupling the atmosphere to the ocean, the Antarctic Circumpolar Current was noted to intensify by 30 Sv (from 157 to 187 Sv), despite the use of flux adjustments. Cai & Gordon identified the instability of convection patterns in the Southern Ocean to be the primary cause of this drift problem.
Mikolajewicz & Voss (2000) further caution that there is still significant confusion about what mechanisms are most responsible for the weakening of the THC in various models, since different GCMs give contrasting roles to individual atmospheric and oceanic fluxes of heat, moisture, salinity and momentum.
In addition, several oceanographers (Bryden 1999, Holloway & Saenko 1999) have expressed concern about the lack of both physical understanding and realistic representation of ocean circulation in global models. Criticisms were especially directed towards the highly schematic representation of the North Atlantic THC as a conveyor belt providing linkages to the world’s oceans.
Holloway & Saenko (1999) state that: ‘understanding what makes the conveyor work is deficient, drawing mainly on the role of buoyancy loss leading to sinking [is] somewhat like trying to push a string. The missing dynamics are that eddies in the presence of bottom topography tend to set up mean flows that carry major circuits of the conveyors, allowing sunken water masses to ‘go for the ride’. Climate models have difficulty in both these regards—to include (if at all!) [sic.] a plausible Arctic Ocean and to deal with eddies either explicitly or by parameterization.’
In spite of those problems, a complete breakdown of the North Atlantic THC is predicted under a sufficiently strong CO2 forcing (Broecker 1987, Schmittner & Stocker 1999, Rahmstorf 2000, see, e.g., Manabe & Stouffer 1993 for scenarios forced by a quadrupling of atmospheric CO2). However, as pointed out by Rahmstorf & Ganopolski (1999), Wood et al. (1999) and Mikolajewicz & Voss (2000), the predicted changes of the THC are very sensitive to parameterizations of various components of the hydrologic cycle, including precipitation, evaporation and river runoff. Hence, without a perpetually enhanced influx of freshwater (from any source) or extreme CO2 forcing, the transient decrease in THC overturning eventually recovers as time progresses in the model (Holland et al. 2000, Mikolajewicz & Voss 2000). In addition, by including a dynamic sea ice module in a coupled atmosphere-ocean model, Holland et al. (2000) report a reduction (rather than an enlargement) in the variance of the THC overturning flow rate, under the doubled CO2 condition, down to 0.25 Sv2 (or only 7%) from the high value of 3.6 Sv2 simulated under the present-day forcing level.
Furthermore, Latif et al. (2000) have just reported a new stabilization mechanism that seems to change previous expectations of a CO2-induced THC weakening (Fig. 5b, but see also Rahmstorf 2000). In Latif et al.’s case, the state-of-the-art coupled ocean-atmosphere GCM of the MPI resolves the tropical oceans at a meridional scale of 0.5°, rather than the more typical scale of 2 to 6°, and produces no weakening of the THC when forced by increasing CO2. Latif et al. showed that anomalously high salinities in the tropical Atlantic (produced by excess freshening in the equatorial Pacific) were advected poleward to the sinking region of the THC; and the effect was sufficient to compensate for the local increase in freshwater influx there.
Fig. 5. Predicted (a) large changes (20 to 50% reductions in overturning rate by 2100) in the thermohaline circulation (THC) for 6 different coupled climate models (from Rahmstorf 1999) versus (b) a relatively stable THC response in a state-of-the-art MPI GCM with improved spatial resolution of the tropical ocean (from Latif et al. 2000) under a similar CO2-forced scenario. The quantity shown is the maximum North Atlantic overturning flow rate in sverdrups (1 Sv = 1 million m3 s–1) at a depth of about 2000 m. Wood et al. (1999) noted, however, that this measure of THC strength cannot be estimated from observations. They proposed the Greenland-Iceland-Scotland ridge, the section south of Cape Farewell at the southern tip of Greenland, and the trans-Atlantic section at 24°N as 3 locations where more robust observations are available for comparison with GCM results
Hence, with the additional stabilizing degree of freedom from the tropical oceans, the THC remains stable in that CO2-forced experiment, leaving no reliable prediction for the change in North Atlantic ocean circulation under an added-CO2 climate. Latif et al. concluded that the response of the THC to enhanced greenhouse warming is still an open question. More recently, Delworth & Dixon (2000) added another mechanism that could serve to oppose THC weakening in numerical experiments with increasing CO2. These authors, using a relatively coarse-resolution GCM, found that an enhanced forcing owing to an increase in the westerly wind speed over the North Atlantic (as inferred from the observed pattern of the Arctic Oscillation over the last 30 yr) could delay the THC weakening trend of a greenhouse warming scenario by several decades. Apparently, the stronger winds over the North Atlantic extract more heat from the ocean, cooling the upper ocean and increasing its density sufficiently to counteract temporarily some of the effects of the net freshening over the North Atlantic under global warming. However, Delworth & Dixon noted that the excess freshening over the North Atlantic eventually produces a significant reduction of the THC.
Rahmstorf (2000) summarized all earlier numerical experiments that proposed a significant (20 to 50%) reduction in the THC overturning rate under global warming scenarios by 2100. We emphasize that our highlighting of the contrasting GCM results of Latif et al. and of Delworth & Dixon, noting the preferable higher spatial resolution of Latif et al.’s GCM, does not undermine all previous model results. The exercise conducted here is meant to document the inconsistency among GCMs in the predicted changes of the THC. We conclude that no robust or quantitative prediction of THC change is currently possible.
Many questions remain open concerning what can be deduced from the current generation of GCMs about potential CO2-induced modifications of Earth’s climate. The climatic impacts of increases in atmospheric CO2 are not known with a practical or measurable degree of certainty. Specific attempts to fingerprint CO2 forcing by comparing observed and modeled changes in vertical temperature profiles have yielded new insights into areas where model physics may be improved. One good example is the unrealistically coherent coupling between the lapse rate and tropospheric mean temperature in the tropics for variability over time scales of 3 to 10 yr (Gillett et al. 2000).
However, even the range of modeled global warming remains large and is not well constrained (Forest et al. 2000). For example, the aggregate of various GCMs gives a global climate sensitivity ranging from 1.5 to 4.5°C (IPCC 1996) for the equilibrium response to a doubling of the atmospheric CO2 concentration. Räisänen (1999) more optimistically suggested that many of the qualitative inter-model disagreements in CO2-forced climate responses (including differing signs of predicted response in some variables, e.g., sea-level pressure, precipitation and soil moisture) could be attributed largely to differences in internal variability among climate models. On the other hand, Räisänen cautioned that it may be dangerous to rely upon a single GCM for the study of climate change scenarios because ‘a good control climate might partly result from skillful tuning rather than from a proper representation of the feedbacks that are important for the simulation of climate change.’
Building partly on that idea, Forest et al. (2000) utilized the Massachusetts Institute of Technology (MIT) statistical-dynamical climate model to quantify the probability of expected outcomes by performing a large number of sensitivity runs, i.e., by varying the cloud feedback and the rate of heat uptake by the deep ocean. It turned out that the IPCC’s range of equilibrium climate sensitivity of 1.5 to 4.5°C corresponds roughly to only an 80% confidence interval of possible responses under a particular optimal value of the global-mean vertical thermal diffusivity below the ocean’s mixed layer. The 95% probability range for the climate sensitivity quantified by Forest et al. was 0.7 to 5.1°C; and, in the final analysis, Forest et al. determined the more relevant result for the transient response to a doubling of atmospheric CO2 to be a mean global warming of between 0.5 and 3.3°C at the 95% confidence level. Forest et al. concluded that ‘climate change projections based on current general circulation models do not span the range of possibilities consistent with the recent climate record.’
There are arguments that the possible range of climate sensitivity, and hence of climate responses, could be narrower. Specifically, both Yao & Del Genio (1999) and Del Genio & Wolf (2000) proposed to raise the minimum climate sensitivity for a doubling of CO2 from 1.5 to 2.0–2.5°C, because most GCMs may have incorrectly overemphasized the negative feedback from low clouds. Del Genio & Wolf found evidence that low clouds get thinner, instead of thicker, with warming (mainly because of the dominant ascent of the cloud base) in the subtropics and midlatitudes. Thinner low clouds, with a decreasing liquid water path, are less capable of reflecting sunlight, which ultimately lessens the low-cloud cooling feedback carried in most GCMs.
Another factor that apparently greatly affects climate response is the complex interaction of climate and the global carbon cycle. In an extreme case, Cox et al. (2000) proposed a strong positive feedback in which global warming causes a dramatic release of soil organic carbon to the atmosphere. Cox et al. found that the inclusion of such a strong biophysical feedback in a coupled atmosphere-ocean GCM (augmented with both a dynamic global vegetation model and a global carbon cycle model) increases the originally prescribed atmospheric CO2 for the year 2100 from 700 to 980 ppm. This transient numerical experiment predicted a global warming of 5.5 K by 2100, compared to 4 K without the carbon cycle feedback. The corresponding warming over land is 8 K, instead of 5.5 K without the added atmospheric CO2 from the strong biophysical feedback. But these authors acknowledged that their results depend critically on the model’s assumption of a long-term sensitivity of soil respiration to global warming, which may be contradicted by field and laboratory data (Giardina & Ryan 2000).
In contrast, semi-empirical estimates by Lindzen (1997) and Idso (1998) that included probable negative feedbacks in the climate system yielded a climate sensitivity of only about 0.3 to 0.5 K for a doubling of atmospheric CO2. Furthermore, Hu et al. (2000) noted the tendency for a climate model’s sensitivity to variation in atmospheric CO2 concentration to decrease considerably as the sophistication of its parameterization of atmospheric convection increases.
In Hu et al.’s study, the averaged tropical surface warming for a doubling of CO2 decreases from 3.3 to 1.6 K, primarily in association with a corresponding decrease, from 29 to 14%, in the calculated increase of total-column atmospheric water vapor. The main point that emerges is that the range of climate sensitivity remains large and is not yet well quantified by either empirical or theoretical means.
4.1. Causes of recent climatic change: aerosol forcing
Other recent efforts, such as that of Bengtsson et al. (1999), have highlighted the inconsistency between the differing observed surface and tropospheric temperature trends and the simulated GCM trends, even when the simulations include forcing factors such as combined anthropogenic GHGs, anthropogenic aerosols (both direct and indirect effects), stratospheric aerosols from the Mount Pinatubo eruption, and changes in the distribution of tropospheric and stratospheric ozone. In addition, Roeckner et al. (1999) have discussed how superposing other forcings, such as direct and indirect aerosol effects, on the GHG forcing has led to an unexpected weakening of the intensity of the global hydrologic cycle. We also wish to add that surface or tropospheric warming in combination with lower-stratospheric cooling does not uniquely signify a fingerprint of elevated CO2 concentration.
Such a change in temperature lapse rate is also the natural behavior of the atmosphere associated with potential vorticity anomalies in the upper air’s flow structure (Hoskins et al. 1985, Liu & Schuurmans 1990). This ambiguity precludes the detection of anthropogenic CO2 effects without additional, confirmatory information.
Not all researchers support the inclusion of aerosol forcing. For example, Russell et al. (2000) recently cautioned that ‘[o]ne danger of adding aerosols of unknown strength and location is that they can be tuned to give more accurate comparisons with current observations but cover up model deficiencies.’ This caveat gains urgency when one recalls that most current GCMs treat the effects of anthropogenic sulphate aerosols by merely rescaling the surface albedo according to a precalculated sulphur loading (Räisänen 1999, Roeckner et al. 1999, Covey 2000). Furthermore, at least in the sense of direct radiative forcing, naturally occurring sources such as sea salt and dimethyl sulphide from marine phytoplankton, rather than anthropogenic sources, dominate the variable and inhomogeneous forcing by aerosols (Haywood et al. 1999, Haywood & Boucher 2000, Jacobson 2001).
For example, Jacobson (2001) estimated, for all-sky conditions, that the global direct radiative forcing from combined natural and anthropogenic aerosols is about –1.4 W m–2, compared to an anthropogenic-only aerosol forcing (including the black carbon component) of –0.1 W m–2. Haywood & Boucher (2000) stressed that the indirect forcing from the modification of cloud albedo by aerosols could range from –0.3 to –1.8 W m–2, while the additional aerosol influences on cloud liquid water content (hence precipitation efficiency), cloud thickness and cloud lifetime are still highly uncertain and difficult to quantify. Therefore, the formulation of an internally consistent approach to determining the climatic effects of CO2 by including both natural and anthropogenic aerosols in the troposphere remains a critical area of research (Haywood & Boucher 2000, Rodhe et al. 2000, Jacobson 2001).
4.2. Nonlinear dynamical perspective on climate change
A somewhat different interpretation of recent climate change is also possible (Corti et al. 1999, Palmer 1999). In an analysis of NH 500 mb geopotential heights, these authors showed that the record since the 1950s can essentially be projected onto the modes of 4 naturally occurring, shorter-term atmospheric circulation regimes, identified in Corti et al. (1999) as the Cold-Ocean-Warm-Land (COWL), Pacific-North American, North Atlantic Oscillation and Arctic Oscillation patterns. Climate variability, viewed as vacillation among these quasi-stationary weather regimes, can then be quantified by changes in the probability density function associated with each regime. Palmer and colleagues thus proposed that the impact of anthropogenic CO2 forcing might be revealed as a projection onto the modes of these natural weather regimes. Of course, there is no guarantee that the underlying structure of the weather regimes would remain the same under the perturbation of a different or stronger forcing.
Next, Corti et al. (1999) showed that recent observed changes can be interpreted primarily as an increasing occurrence probability of the COWL regime (Wallace et al. 1995), perhaps consistent with the projection of anthropogenic CO2 forcing. With this idea in mind, the authors proposed to resolve the contentious discrepancy between the rising trend in surface air temperature and the relative constancy of the lower-tropospheric air temperature, as summarized in the NRC (2000) report, the rationale being that most of the recent hemispheric-mean temperature change is associated with the COWL pattern. Since the COWL pattern is primarily a surface phenomenon, one can expect to find a stronger anthropogenic CO2-forced temperature imprint at the surface than in the troposphere. Above the surface, the land-sea contrast weakens significantly, so that no imprint of anthropogenic thermal forcing anomalies persists there. But such a pattern of climatic change, emphasizing surface response over land, is also consistent with the heat-island effect of urbanization, leaving the interpretation of the vertical pattern of temperature trends unresolved.
It is, of course, a curious point that no GCM has yet simulated such a vertical pattern of climate change (Bengtsson et al. 1999). The strongest anthropogenic CO2 response in GCMs is still expected in the mid-to-high troposphere, simply because of the dominance of direct radiative effects. A further question left unanswered by Corti et al. (1999) is why increased CO2 should lead to an increase in the residence frequency of the COWL regime. Furthermore, any number of warming influences may contribute to the positive bias of COWL, since the main physical cause of the pattern is the heat capacity contrast between land and sea. In this respect, it is important to point out that the COWL pattern is a robust feature of unforced numerical climate experiments under various air-sea coupling schemes (Broccoli et al. 1998). But as emphasized by these authors, even though a direct comparison of observations with the model-derived unforced patterns and changes ‘has implications for the detection of climate change, [they] do not intend to attribute the recent warming of NH land to specific causes.’
Broccoli et al. (1998) concluded that separating forced and unforced changes in observational records is difficult. Hence, they focused strictly on pointing out the problem with the methodology introduced by Wallace et al. (1995), which applies COWL-pattern variability to climate change detection. In doing so, they utilized a GCM run forced with CO2 and tropospheric sulphate aerosols to make their points, but they did not elaborate on results with CO2 forcing alone. Their main conclusion is that the decomposition method of Wallace et al. is not suitable for climate change detection, because it yields ambiguous results when more than 1 radiative forcing pattern (such as CO2 plus tropospheric sulphate aerosols) is present.
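To see why such a decomposition can mislead, consider the following schematic sketch (ours, with entirely synthetic series) of the regression step at the heart of the Wallace et al. (1995) method: the hemispheric-mean temperature is split into a part linearly associated with a COWL index and a residual that is then read as the candidate forced signal:

```python
import numpy as np

# Split a synthetic hemispheric-mean temperature series into a
# COWL-related part and a residual by linear regression, in the spirit
# of Wallace et al. (1995). The data are fabricated for illustration.

rng = np.random.default_rng(1)
nyears = 50
years = np.arange(nyears)
cowl_index = rng.standard_normal(nyears)      # stand-in COWL amplitude
trend = 0.01 * years                          # imposed "forced" drift, K/yr
t_hem = trend + 0.3 * cowl_index + 0.05 * rng.standard_normal(nyears)

# Least-squares regression of mean temperature on the COWL index.
slope = np.polyfit(cowl_index, t_hem, 1)[0]
residual = t_hem - slope * cowl_index

print(f"regression coefficient on COWL: {slope:+.3f} K per unit index")
print(f"trend before removal: {np.polyfit(years, t_hem, 1)[0]:+.4f} K/yr")
print(f"trend after removal:  {np.polyfit(years, residual, 1)[0]:+.4f} K/yr")
# With more than one forcing pattern present, the COWL index itself can
# alias part of the forced response, which is Broccoli et al.'s objection.
```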
The recognition of climatic change as the response of a nonlinear dynamical system imposes the strong requirement that GCMs must accurately simulate natural circulation regimes and their associated variability down to regional and synoptic scales. This requirement is especially difficult to fulfill because the global radiative forcing of a few W m–2 expected from the anthropogenic CO2 perturbation is quite small compared to the uncertain energy budgets of various components of the climate system, as well as to the flux errors in model parameterizations of physical processes. For a perspective on the severity of this problem, consider the dynamic phenomenon of midlatitude atmospheric blocking. As part of the Atmospheric Model Intercomparison Project (AMIP), D’Andrea et al. (1998) recently confirmed the large differences in blocking behavior produced among the 15 to 16 GCMs, which span a wide range of modeling techniques and physical parameterizations. When compared with observed blocking statistics, all GCMs showed the systematic errors of underestimating both the frequency and the duration of blocking events (almost all models have problems producing long-lived blocking episodes over the midlatitude Euro-Atlantic and Pacific sectors). Worse still, there is no clear evidence that high-spatial-resolution models perform systematically better than low-resolution models. D’Andrea et al. (1998) have thus proposed only ad hoc numerical experiments to study the possible, previously hidden model deficiencies responsible for the large range of GCM performance in simulating atmospheric blocking. Significant challenges in numerical weather and climate modeling therefore remain.
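For readers unfamiliar with how blocking is diagnosed, the following sketch (ours) implements a criterion in the spirit of the widely used Tibaldi-Molteni index, a modified form of which underlies the D’Andrea et al. (1998) intercomparison; the gradient thresholds follow the published convention, but the input height field here is synthetic:

```python
import numpy as np

# Blocking diagnostic in the spirit of Tibaldi & Molteni: at each
# longitude, compare 500 mb geopotential height gradients south and
# north of ~60 N; a reversed southern gradient with strong northern
# westerlies marks a blocked longitude.

def is_blocked(z500, lats, deltas=(-4.0, 0.0, 4.0)):
    """Return a boolean array over longitude: True where blocked.

    z500 -- 2D array of 500 mb heights (lat x lon), in metres
    lats -- latitudes (degrees N) matching z500's first axis
    """
    blocked = np.zeros(z500.shape[1], dtype=bool)
    for d in deltas:
        phi_n, phi_0, phi_s = 80.0 + d, 60.0 + d, 40.0 + d
        i_n, i_0, i_s = (np.argmin(np.abs(lats - p)) for p in (phi_n, phi_0, phi_s))
        ghgs = (z500[i_0] - z500[i_s]) / (phi_0 - phi_s)  # m per deg lat
        ghgn = (z500[i_n] - z500[i_0]) / (phi_n - phi_0)
        blocked |= (ghgs > 0.0) & (ghgn < -10.0)          # easterlies south,
    return blocked                                        # westerlies north

lats = np.arange(30.0, 90.1, 2.0)
nlon = 72
z500 = (5500.0 - 8.0 * (lats[:, None] - 30.0)
        + 20.0 * np.random.default_rng(2).standard_normal((len(lats), nlon)))
print(f"{is_blocked(z500, lats).sum()} of {nlon} longitudes blocked")
```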
4.3. New observational scheme
Modeling is but one approach to understanding climate change. To place more confidence in computer climate modeling, observational capability must advance. Improved precision, accuracy and global coverage are all important requirements. For example, Schneider (1994) estimated that a globally averaged accuracy of at least 0.5 W m–2 in the net solar-IR radiative forcing is required to narrow the present, unacceptably large range in estimates of climate sensitivity. In this respect, Goody et al. (1998) have recently proposed a complementary scheme of interferometric measurements of spectrally resolved thermal radiance and radio occultation measurements of refractivity, aided by Global Positioning System (GPS) satellites, that can achieve global coverage with a spectral resolution of 1 cm–1 and an absolute accuracy of 0.1 K in thermal brightness temperature. The 0.1 K accuracy is needed to quantify the warming expected from increased GHGs within a decade, while the 1 cm–1 resolution is needed to distinguish among the possible spectral radiance fingerprints of several causes. Along with a promised vertical resolution of about 1 km, the complementary thermal radiance and GPS refractivity measurements should produce a better characterization of clouds, since thermal radiance is sensitive to clouds whereas the refraction of GPS radio signals, while sensitive to water vapor and air molecules, is not. These observational schemes thus offer hope for critical tests of climate model predictions and for the detection of anthropogenic CO2 forcing before it becomes too large.
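The stringency of the 0.1 K requirement can be appreciated by converting it into radiance units with the inverse of the Planck function. The sketch below (ours) does this at a single thermal wavenumber; the physical constants are standard, while the 700 cm–1 and 250 K values are illustrative assumptions of our own choosing:

```python
import math

# Translate a 0.1 K brightness-temperature increment into spectral
# radiance units at a sample thermal wavenumber, via the Planck function
# expressed per wavenumber.

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m s-1
K = 1.381e-23   # Boltzmann constant, J K-1

def planck_wn(wn_cm, t):
    """Planck radiance at wavenumber wn_cm (cm-1), W m-2 sr-1 (cm-1)-1."""
    nu = wn_cm * 100.0                 # convert cm-1 to m-1
    b = 2.0 * H * C**2 * nu**3 / math.expm1(H * C * nu / (K * t))
    return b * 100.0                   # per cm-1 instead of per m-1

wn, t = 700.0, 250.0                   # CO2 band region, cold atmosphere
dl = planck_wn(wn, t + 0.05) - planck_wn(wn, t - 0.05)
print(f"radiance change for 0.1 K at {wn:.0f} cm-1: "
      f"{dl:.2e} W m-2 sr-1 (cm-1)-1")
# Detecting a decadal GHG signal thus demands absolute radiometric
# stability at roughly this level, which motivates the benchmark
# design of Goody et al. (1998).
```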
5. Conclusions

Our current lack of understanding of the Earth’s climate system does not allow us to determine reliably the magnitude of climate change that will be caused by anthropogenic CO2 emissions, let alone whether this change will be for better or for worse. We raise the point concerning value judgment here because a value assignment is a prerequisite to evaluating the need for human mitigation of adverse consequences of climate change. If natural and largely uncontrollable factors that yield rapid climate change are common, are humans capable of actively modifying climate for the better? Such a question has been posed and cautiously answered in the negative, e.g., by Kellogg & Schneider (1974). Given current concerns about rapid climate change, several geoengineering proposals are being revived and debated in the literature (e.g., Schneider 1996, Betts 2000, Govindasamy & Caldeira 2000). We argue that even if climate were hypersensitive to small perturbations in radiative forcing, the task of understanding climate processes must still be accomplished before any effective action could be taken.
Our review of the literature has shown that GCMs are not sufficiently robust to provide the understanding of the potential effects of CO2 on climate that is necessary for informed public discussion. Views differ widely on the plausible theoretical expectations of anthropogenic CO2 effects, ranging from dominant radiative imprints in the upper and middle troposphere (based on GCM results) to nonlinear dynamical responses. Even if a probability could be assigned to a certain catastrophic aspect of CO2-induced climatic change, this measure could be objective only if all relevant facts, including those still in the future, were considered in the calculation. Therefore, at the current level of understanding, global environmental change resulting from increasing atmospheric CO2 is not quantifiable.
The systematic problems underlying our inability to simulate present-day climate change are worrisome. The perspective from nonlinear dynamics, which suggests that ‘confidence in a model used for climate simulation will be increased if the same model is successful when used in a forecasting mode’ (IPCC 1990, as quoted in Palmer 1999), also paints a dismal picture of the difficult task ahead.
This brief overview shows that we are not yet in a position to say what the future climate of the Earth will look like. The primary reason for this inability is that, even if we had perfect control over how much CO2 humans introduce into the air, the other variable components of the climate system, both internal and external, are not sufficiently well defined.
Accordingly, all future climate scenarios computed with various GCMs must strictly be considered numerical sensitivity experiments rather than meaningful climate change predictions (Räisänen 1999, Mikolajewicz & Voss 2000). Attempts to integrate the environmental impacts of anthropogenic CO2 should take note of the limitations of current GCMs and avoid circular logic (Rodhe et al. 2000).
In light of the above, we support a more inclusive and comprehensive treatment of the CO2 question, stated as an internally consistent scientific hypothesis, as demanded by the rules of science. Climate specialists should continue to urge caution in interpreting GCM results and to acknowledge the incomplete state of our current understanding of climate change. Progress will be made only by formulating and testing a falsifiable hypothesis.
The criticisms in this review are presented with the aim of improving climate model physics and the use of GCMs for climate science research. We recognize that there are alternative arguments and other interpretations of the current state of GCMs and climatic change (Grassl 2000). Furthermore, we are biased in favor of results deduced from observations. For an alternative view, we strongly recommend that the reader consult the IPCC reports (1990, 1995 and the upcoming 2001 report). These provide detailed documentation of the merits of GCMs, including the IPCC’s assessment of a discernible human influence on global climate. Our review points out the enormous scientific difficulties facing the calculation of climatic effects of added CO2 in a GCM, but it does not claim to disprove a significant anthropogenic influence on global climate.
Acknowledgements. This work was supported by the National Aeronautics and Space Administration (Grant NAG5-7635). E.S.P. acknowledges the support of the Long Island University Faculty Research Released Time program. We gratefully acknowledge the library help from the staff of the John G. Wolbach Library. We are also thankful for constructive suggestions from all 3 reviewers and the editor, Chris de Freitas.
LITERATURE CITED
Editorial responsibility: Chris de Freitas, Auckland, New Zealand
Submitted: September 19, 2000; Accepted: January 8, 2001
Proofs received from author(s): July 10, 2001