Saturday, July 30, 2005

Emissions Scenarios

One major source of uncertainty in trying to predict how the climate is likely to change in the future is uncertainty in the future emissions of greenhouse gases. The IPCC generated a set of emissions scenarios (the SRES), published in 2000 (apparently no update is planned to coincide with AR4 - I don't know the reasoning behind this decision). The scenarios cover a wide range of possible future trajectories for demographic and economic change over the next 100 years, based on 4 main storylines (the A1 storyline has 3 main variants, giving 6 scenario groups), each of which has about 10 variants. The main "marker" scenarios describe emissions ranging from 5 to 30 Gt of carbon per year in CO2 by 2100 (compare to current emissions of about 7 Gt per year).

In principle, we can simply convolve the uncertainty in scenarios with the uncertainty in climate response to generate a probabilistic forecast for future climate (and Wigley and Raper did exactly this back in 2001, in a paper in Science). However, I don't think it is as simple in practice as they indicated. There are in my opinion two major difficulties with trying to generate probabilistic forecasts using the scenarios.
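
Before getting to those difficulties, it may help to see the mechanics of such a convolution. Here is a toy Monte Carlo version in Python - the scenario forcings, the equal weights and the flat sensitivity range are all numbers I have invented for illustration (and it computes equilibrium rather than transient warming), so this is a sketch of the method, not a reconstruction of W&R's calculation:

```python
import random

# Hypothetical 2100 forcings (W/m2) standing in for a handful of marker
# scenarios - these are NOT the actual SRES values, just plausible-looking ones.
scenario_forcing = [3.0, 4.5, 6.0, 7.5, 8.5]
F2X = 3.7  # radiative forcing from doubled CO2 (W/m2)

def sample_warming():
    """Draw one (scenario, sensitivity) pair and return the implied warming."""
    forcing = random.choice(scenario_forcing)  # equal weights: the contentious step
    sensitivity = random.uniform(1.5, 4.5)     # crude stand-in for the IPCC range
    return sensitivity * forcing / F2X         # equilibrium warming scales with forcing

draws = sorted(sample_warming() for _ in range(100_000))
print(f"90% interval: {draws[5_000]:.1f}C to {draws[95_000]:.1f}C")
```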

The first problem is that the scenarios were all explicitly and deliberately predicated on no action being specifically taken to reduce GHG emissions (although some storylines include emission reduction as a side-effect of other environmental policies). Eg, from the Technical Summary:

As required by the Terms of Reference however, none of the scenarios in the set includes any future policies that explicitly address additional climate change initiatives, although GHG emissions are directly affected by non-climate change policies designed for a wide range of other purposes.

Now, although it may sometimes seem like not much is happening yet, in fact it seems clear to me that there is at least a modest groundswell of action in roughly the right direction. The Kyoto protocol is ratified, and even the USA is taking some steps towards mitigation (especially at the local level, if not the federal one). Ok, it is not much so far, but give it a decade or two and it seems likely to me that the IPCC scenarios will prove to be an overall pessimistic view of where we are heading. So, a "forecast" based on them is at best a forecast of where we might have been going if we had taken no action at all to reduce emissions, not a forecast of where we are actually heading as of today. Of course more action could be taken (some will always argue that more action is needed, whatever is actually done), but any assessment should surely be based on a realistic view of how much action has already been taken and what is in the pipeline. I don't know why the scenarios were designed to exclude any mitigation effects, and it makes the decision not to update them seem rather unfortunate, but perhaps someone will have a good explanation for this.

The second problem is perhaps a little more subtle, and it is that there is no obviously correct way to attach probabilities to the individual scenarios. The scenarios are essentially presented as possibilities, with no assessment of their probabilities. Obviously they are intended to cover a range of reasonable possibilities (they would have little use otherwise) but they are quite explicitly NOT assigned any sort of relative likelihoods:
Preferences for the scenarios presented here vary among users. No judgment is offered in this report as to the preference for any of the scenarios and they are not assigned probabilities of occurrence.
This is reinforced again in the summary, even more explicitly (with my bold emphasis):
Probabilities or likelihoods are not assigned to individual SRES scenarios. None of the SRES scenarios represents an estimate of a central tendency for all driving forces and emissions, such as the mean or median, and none should be interpreted as such. The statistics associated with the frequency distributions of SRES scenarios do not represent the likelihood of their occurrence. The writing team cautions against constructing a central, "best-estimate" scenario from the SRES scenarios; instead it recommends use of the SRES scenarios as they are.
W&R take this as a green light to assign equal probability to each scenario. They say: "We therefore assume all 35 emissions scenarios to be equally likely" (my emphasis). It is not entirely clear from the wording in their paper whether they believe that this is a reasonable deduction from the SRES giving no preference, or whether they are acknowledging that it is an entirely personal judgement on their part. If the former, they are clearly wrong; if the latter, they are certainly entitled to make this assumption if they believe it is appropriate, but it must be clearly flagged as their own opinion, and the resulting forecast should not be presented as if it were an objective one based on the IPCC TAR, as Karl and Trenberth did in referring to the W&R paper in Science, 2003:
In the absence of climate mitigation policies, the 90% probability interval for warming from 1990 to 2100 is 1.7C to 4.9C.
There is no such thing as "the 90% probability interval for warming". There is W&R's 90% probability interval, based on their beliefs about scenarios (and their beliefs about climate sensitivity, but IMO adopting the IPCC's "likely" range of 1.5-4.5C is relatively uncontroversial). Karl and Trenberth may also endorse this estimate if they agree with W&R's assumption. But it is not comparable to (say) the 90% confidence interval for the number of heads in 100 tosses of a fair coin.

As I've mentioned before, there is also a subjective element in the estimate of climate sensitivity - but at least this is based on a considerable amount of evidence and has been the subject of substantial debate amongst climate scientists. In contrast, the probabilistic distribution over future scenarios seems little more than a wild guess made purely on the grounds of convenience.

So, it's one thing to poke holes in research, but that leaves the question of what climate scientists should do instead. In my view, it seems unwise (and is certainly unnecessary) for them to try to make socioeconomic forecasts when economists are not prepared to do so. The obvious alternative, suggested in the SRES itself, is simply to use the different scenarios (perhaps just the marker scenarios) and present the results from each one separately. That means giving a number of probabilistic forecasts, each of which is conditional on an emissions scenario. This also makes it simple for climate scientists to make up their own scenarios which include mitigation, and demonstrate the effects that mitigation could have. This is, of course, exactly the sort of information that policy-makers should find useful. After all, future emissions are at least in part a controllable input, and what we all want to know is to what extent we should try to control them.

Anyone who wants to make a probabilistic estimate of climate change based on their estimates of emissions and climate response is welcome to do so, of course. However, even though one can reasonably use the IPCC's estimate of climate sensitivity as the basis for one input to the calculation, there is no such consensus interpretation of the scenarios, so the assignment of probabilistic weights is entirely the researcher's own responsibility. A deliberately ignorant "uniform prior" might be defensible from a Bayesian viewpoint, but the results will be highly dependent on this assumption and I for one have little confidence in them.
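
To illustrate just how strongly the answer depends on that assumption, here is the same toy calculation from above, run with two different scenario weightings - the "tilted" weights are as made-up as the uniform ones, representing a world where some mitigation occurs:

```python
import random

# Same toy setup as before, kept self-contained: identical hypothetical
# forcings, two different (both entirely invented) scenario weightings.
scenario_forcing = [3.0, 4.5, 6.0, 7.5, 8.5]
uniform_weights = [0.2] * 5
tilted_weights = [0.45, 0.30, 0.15, 0.07, 0.03]  # invented tilt towards low emissions

def interval(weights, n=100_000):
    draws = sorted(
        random.uniform(1.5, 4.5) * random.choices(scenario_forcing, weights)[0] / 3.7
        for _ in range(n)
    )
    return draws[n // 20], draws[19 * n // 20]  # empirical 5th and 95th percentiles

print("uniform prior: %.1fC to %.1fC" % interval(uniform_weights))
print("tilted prior:  %.1fC to %.1fC" % interval(tilted_weights))
```

The point is not the particular numbers, but that the headline "90% interval" moves substantially when the (unconstrained) weights do.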

Tuesday, July 26, 2005

Simulator 2

It's in the news now, so I think I can safely blog about this without fear of causing embarrassment:


Here is today's Japan Times article on plans for a new 10 PFlop supercomputer, Simulator 2. Yes, that's 10 petaflops, a whopping 250 times faster than the Earth Simulator and up to 73 times faster than the current fastest Blue Gene system. Plans are a bit vague, but with a planned switch-on date of 2010 it would probably leapfrog all the competition by quite a way, as the Earth Simulator did back in 2002.


You can find out more about it here and here, but only in Japanese!


I believe the research remit for this new machine might be slightly broader than the current Earth System Science focus (hence Simulator 2, not Earth Simulator 2). The plans still seem pretty vague, but my experience here suggests they don't usually publicise things that are likely to fall through. Whether or not I'll still be here to use it is very much less certain...

Monday, July 25, 2005

The Great Tokyo Earthquake


Well, the long-overdue Great Tokyo Earthquake hit on Saturday afternoon. I was up a mountain so didn't feel it (apparently, the greater crustal thickness reduces the impact, and anyway I was miles away).


Of course, it wasn't really the big earthquake that has been widely expected for some time now (they seem to occur about every 70 years and the last was 1923, when about 140,000 were killed). But it was the biggest for over a decade, enough to shut down the trains for a couple of hours. Here is a brief description of a simulation of that previous earthquake, using the Earth Simulator.


Our own earthquake detection system rated last Saturday's quake as a plastic-man-toppler, but the rubber ducks (and indeed loudspeaker) survived. And now there's a typhoon on the way too (but only a small one). I guess someone is telling me it's time for a post about natural disasters.

Monday, July 18, 2005

An overview of probabilistic climate prediction


In this post, I'm going to be exclusively discussing the problem of estimating climate sensitivity (equilibrium surface air temperature response of the atmosphere-ocean system to doubled CO2), which has been a widely-studied subject for many years. It's not quite as valuable to policy-makers as a direct estimate of how the climate will change over time, but climate sensitivity is a good target for study since it is a simple easy-to-understand concept that directly relates to more complex projections, and is also accessible to a wide range of models from simple to complex. By sidestepping the issue of the large uncertainty in future emissions, we can also focus on the geophysical rather than socio-economic uncertainties, the latter of which are at least partially controllable.


The IPCC says that climate sensitivity is likely to be in the range of 1.5-4.5C, an estimate which has not changed for many years. Originally this was based on very limited evidence, but subsequent research appears to confirm that this early estimate was a pretty good (if lucky) one. However, there is still a 3 degree difference between the high and low ends of this range (which are themselves not hard limits), and moreover the use of a "likely range" is a rather woolly description of the uncertainty. So it would be nice to have a better answer.


Firstly, we need to understand where the uncertainty comes from. The basic radiative response to increased CO2 is well understood (although not by me, in any great detail). A doubling of CO2 will raise the surface temperature by about 1C, other things being equal. But other things aren't equal. The warmer temperature will almost certainly result in changes to the amount and distribution of water vapour in the atmosphere - which is itself a greenhouse gas - and this will probably add substantially to the warming due to CO2 alone. I'm not going to go into the details here (partly because I am no expert on the subject) but the IPCC take on water vapour can be found here. Cloud distributions are also likely to change, and they can have both a cooling effect (due to increased albedo) and a warming one (insulation, especially at night). The overall balance depends on several details such as the height at which they form. There are numerous competing hypotheses, but limited direct evidence for how these things are likely to change (the IPCC summary is here).


One approach to estimating climate sensitivity does not attempt to evaluate these individual uncertainties. Instead, we can look at historical changes in the climate, and relate them to the changes in radiative forcing. In principle, this will give us a measure of the climate sensitivity directly, without having to determine the detailed interaction of every effect. Two early attempts using recent decades/century can be found here and here, and I've discussed a recent attempt here (see also here for more analysis of that work). Unfortunately, since we do not know the net forcing accurately enough (especially the cooling due to sulphate aerosols), this approach does not produce a useful constraint. Another approach involves looking at the observed cooling due to volcanic eruptions. This seems like a potentially more useful approach to me, as the forcing is large and fairly well observed, but the effect of natural interannual variability means that the volcanic cooling is somewhat obscured in the observed record. Also, determining the relationship between the (short-term) response to a volcano and the longer-term response to increased CO2 relies on at least a moderately sophisticated model, which introduces another layer of uncertainty. For example, the coupled atmosphere-ocean MIROC3.2 model appears to respond less strongly to volcanoes than the simpler MAGICC model of Wigley and Raper, given the same climate sensitivity (which can be tuned in MAGICC to emulate different GCMs). At still longer time scales, we can look to the paleoclimate record. The Last Glacial Maximum had a much colder climate, lower greenhouse gases and much larger ice sheets than at present. A direct calculation of the implications for climate sensitivity is presented here. Further back into history, we find evidence of a much warmer climate in the Cretaceous, with an estimated CO2 of perhaps 2000 ppm.
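
To give a flavour of the direct LGM arithmetic, here is the calculation with deliberately round numbers of my own choosing (the published estimates, and their sizeable error bars, are what actually matter):

```python
# Round illustrative numbers only - real estimates carry large uncertainties.
F2X = 3.7        # W/m2 forcing for doubled CO2
dT_lgm = -5.0    # C: assumed LGM cooling relative to pre-industrial
dF_lgm = -7.0    # W/m2: assumed net LGM forcing change (ice sheets, GHGs, dust)

sensitivity = dT_lgm / dF_lgm * F2X  # scale the temperature/forcing ratio to a doubling
print(f"implied sensitivity: {sensitivity:.1f}C per doubling")  # about 2.6C
```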


These estimates all have their limitations, due to uncertainties in both the forcing and the global temperature change. But they do present a broadly consistent picture in the context of the uncertainty of the IPCC estimate. It's important to realise in particular that the "sensitivity could be >10C" result that one gets by looking at the last few decades alone does not in any way contradict or invalidate other methods that suggest a substantially lower bound. It is simply using a particular subset of the available information and saying that this information alone does not limit climate sensitivity. All of the various estimates point to most likely values within, or at least close to, the standard IPCC range.


Another approach to estimating climate sensitivity is to use a more direct, model-based method to evaluate all those uncertain feedbacks I mentioned above. What this entails is building a model, based on our understanding of the basic physics of the climate system, and then cranking up the CO2 to see what happens. The IPCC collates results from all the major climate research centres, and last year Gerald Meehl stated that the most recent model results lie in the range of about 2-4.4C. There may be some differences with previous results, which some might consider important, but it is a small change, very much evolution rather than revolution.


Unfortunately, these models all contain a vast number of parameters which control the model behaviour, and which are not directly constrained by theory or observations. Examples might be such things as "the average speed at which ice crystals fall" or "droplet size in clouds". Some of them describe rather abstract concepts, and it is not clear how they could be measured even in principle, let alone in practice. The range of plausible values often extends through orders of magnitude, and changing the parameters can have a significant effect on the model behaviour. Until recently, parameters were generally adjusted by hand until the model behaviour looked "good" according to the rather subjective opinion of the model builders - a time-consuming and tedious process that leaves open the possibility that other "good" parameter values would result in a model that simulates our current climate reasonably well, but which has a substantially higher or lower climate sensitivity.


If the uncertainty in the basic physics can be quantified, then this can in principle provide a constraint on climate sensitivity. Although many of the parameters cannot be directly determined, it is clear that the overall interaction between the physical processes actually generates our current climate. So if, by appropriately sampling parameter uncertainty, we can generate an ensemble of models that simulates our climate reasonably, then this should give the answer. This is a technically complex task, due to the computational demands, but recently several groups (including me) have been working on methods for generating probabilistic estimates in this manner.
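
In caricature, the procedure looks like the following sketch. The "model", the parameter ranges and the acceptance threshold are all invented for illustration - a real application uses a GCM and a carefully chosen set of observational constraints:

```python
import random

def toy_model(ice_fall_speed, droplet_size):
    """Stand-in for a GCM run: returns (present-day error, climate sensitivity)."""
    error = abs(ice_fall_speed - 1.0) + abs(droplet_size - 10.0) / 10.0
    sensitivity = 1.0 + 2.0 * ice_fall_speed + 0.1 * droplet_size + random.gauss(0, 0.3)
    return error, sensitivity

accepted = []
for _ in range(100_000):
    p1 = random.uniform(0.1, 10.0)  # deliberately wide prior ranges...
    p2 = random.uniform(1.0, 50.0)  # ...spanning "orders of magnitude"
    error, sens = toy_model(p1, p2)
    if error < 1.0:                 # "simulates our climate reasonably" - the weasel threshold
        accepted.append(sens)

accepted.sort()
n = len(accepted)
print(f"{n} accepted; 5-95%: {accepted[n // 20]:.1f}C to {accepted[19 * n // 20]:.1f}C")
```

Note that both the prior ranges and the acceptance threshold are free choices, which is exactly where the trouble discussed below comes in.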


There is an important weasel word in the above paragraphs - the qualifier "reasonably" in "simulates our climate reasonably" is not easy to quantify. This is an issue which has only shot to prominence recently, so I will attempt to explain it in a bit more detail. No model can hope to simulate our climate perfectly, so it is natural to ask the question: how bad does a model have to get before we decide that it is no longer considered credible? Although I've stated that in simple enough terms, it turns out that this is a rather difficult question to answer, and it has not (to my knowledge) been addressed anywhere in the climate prediction literature to date. This does suggest that there could be significant limitations in much previous probabilistic climate estimation work which may therefore need re-evaluation. Our first steps towards directly accounting for model error in climate prediction are described in a couple of as-yet unpublished manuscripts here and here (I'm sure there are imperfections in this work, and don't claim it is anything more than a small step in roughly the right direction). Jonty Rougier is doing a lot of interesting work in this area, and his recent papers are well worth a read - this manuscript provides a nice summary of the theoretical foundations. Another difficult question is how we decide on a prior distribution for the parameters, especially since their meaning is not always clear and the expert's "prior" opinion is liable to be influenced by his knowledge of model output when run at different parameter values (ie it is not really a "prior" and there is a risk of double-counting evidence). Both of these factors can substantially affect the results.


One crucial take-home message is that there is no way to generate a truly objective estimate of climate sensitivity, and never will be. The strength of various strands of evidence needs to be assessed and weighed up by experts, and even an objective method necessarily relies on subjective decisions regarding the inputs. This is partly what motivated my recent ideas about "betting on climate change", since a prediction/betting market provides a transparent and open mechanism for aggregating subjective (albeit informed) opinions. It may be a little too far out in left field to catch on seriously, though.


In recent work using ensembles of GCM simulations, Murphy et al generated a distribution for climate sensitivity which was a bit higher than the IPCC TAR. Stainforth et al used a different approach and found that a substantial proportion of their ensemble had sensitivities of about 10C, but due to the uncertainties mentioned above, they did not attempt to give a direct probabilistic interpretation of their results (unfortunately, it seems that some people didn't quite grasp this point). One particular aspect of the Stainforth et al work, which I was very surprised by, was their decision to assess only the annually-averaged climate of their models rather than look at the seasonal climate (as Murphy et al did). The seasonal cycle does not provide a brilliant constraint on climate sensitivity, but I strongly suspect it would substantially narrow their extremely broad distribution. In my cynical moments, I wonder if they didn't look at seasonal variation precisely because it would have eliminated their "exciting" results :-) I understand a more careful analysis will soon be forthcoming. However, it will still inevitably involve some subjectivity over deciding how poor a seasonal cycle is considered acceptable, and it will be interesting to see how they address this issue.


My bottom line is that there are no data which suggest that climate sensitivity is greater than about 6C, and there are also no models with a sensitivity of 6C or more which have been shown to provide credible simulations of the present and past climate. Perhaps the best candidate for such a model is the "high sensitivity" version of the MIROC3.2 model. This has a sensitivity of 6.3C and gives a reasonable simulation of the present-day climate; however, it provides a very poor hindcast of recent temperature changes (particularly the post-Pinatubo cooling, see the manuscript referred to above). The "lower sensitivity" (4.0C) version of MIROC3.2 generates a much better hindcast. In contrast to this, the ultra-sensitive models of Stainforth et al have not been shown to simulate even the seasonal cycle adequately. I wouldn't be shocked if the IPCC estimate of the range of plausible climate sensitivity creeps up a little bit, but as far as I can see none of the strands of evidence points to a value significantly above 5C as being likely, even if such a value cannot be categorically ruled out.


Disclaimer: my recent (as yet unpublished) work points towards an upper limit of about 6C (and a most likely value rather lower than this). But I honestly did not go looking for such a result, and much higher values would certainly have been more readily publishable :-)


And what prospects are there for the future? This post is already far too long, so I'll save that for another time.

Friday, July 15, 2005

AAAS response to Barton letters

Seems a bit late, given that Barton demanded responses by the 11th, but the AAAS has at last issued a statement and letter objecting to the Barton letters:
we are concerned that establishing a practice of aggressive Congressional inquiry into the complete professional histories of scientists whose findings may bear on policy in ways that some find unpalatable could have a chilling effect on the willingness of scientists to conduct work on policy-relevant scientific questions.

Chris Mooney thinks that "Barton's little gambit has now officially backfired", but that doesn't seem so clear to me. Apart from the handwringing, there are as yet no signs that anyone is actually going to stop Barton from harassing scientists whose research he finds threatening, and if that has a "chilling effect", well, maybe that is his goal....


I look forward to finding out how the Barton 5 replied. Stoat promises revelations on Sunday!

Saturday, July 09, 2005

"As the science justifies"

The G8 agreement on climate change, clean energy and sustainable development is posted here

It says:

Climate change is a serious and long-term challenge that has the potential to affect every part of the globe. We know that increased need and use of energy from fossil fuels, and other human activities, contribute in large part to increases in greenhouse gases associated with the warming of our Earth's surface. While uncertainties remain in our understanding of climate science, we know enough to act now to put ourselves on a path to slow and, as the science justifies, stop and then reverse the growth of greenhouse gases.
So, what does as the science justifies mean here? Is it a parenthetical (as the science already amply justifies) or is it the qualifier when and if our scientific understanding justifies? There is more than a hint in a speech that Bush gave a few years ago:
My administration is committed to cutting our nation's greenhouse gas intensity -- how much we emit per unit of economic activity -- by 18 percent over the next 10 years. This will set America on a path to slow the growth of our greenhouse gas emissions and, as science justifies, to stop and then reverse the growth of emissions.
So it looks like business as usual then...

Update: just saw this in the Guardian:
But the environment secretary, Margaret Beckett, said it was "absolute rubbish" to claim the G8 summit had not signed up to anything new.
Sorry Margaret, you are clearly wrong on this. It's almost a word-perfect copy of Bush's previous speech. He's already doing this (in his interpretation of the meaning).

Friday, July 08, 2005

More on alarmism

I mentioned some time ago that I didn't really think there was much alarmism in the climate science debate (by which I mean a similar style to, but diametrically opposed viewpoint from, the sceptics who do not admit the reality of the IPCC-defined consensus on climate change). According to my viewpoint, on the one hand we have the rational, honest, fair scientists who put the evidence dispassionately, and on the other hand the sceptics who lie and cheat to advocate their politically-motivated agenda.

Of course I've always realised that this is somewhat naive and over-simplified (William Connolley has a nice analysis of some borderline scaremongering from FoE, for example, but they are not front-line scientists), but I thought it was basically correct. However, recently I've been rather surprised - and more than a little disappointed - at some statements from scientists who clearly should know better. I recently wrote about the Andreae, Jones and Cox paper here, complaining about how they place an unreasonably alarmist focus on the theoretical possibility of extremely high (~10C) climate sensitivity - a possibility that can be effectively ruled out by alternative analyses (and Gavin Schmidt at RealClimate draws a similar conclusion here).

Now it's the turn of the Climateprediction.net project, and in particular Tim Palmer from ECMWF. Tim Palmer gave a talk on global warming at the Royal Society back in April, and a video recording of it is available on the web here. He describes the climateprediction.net results thusly (about 1/3 of the way through the lengthy recording):
It's a sort of probability distribution of global warming....it disagrees with IPCC 2001 in a very key and important aspect...the collective probability of all these members in the tail [~10C warming] is 10% or so
This is wrong and will have seriously misled the audience. The CPDN team were quite careful in their description of their results. Read their paper carefully (you have to look in the Methods section for the crucial comment), and you will see that they actually do not claim that these results have any sort of probabilistic interpretation - the "exciting" aspect of their results is merely that they managed to find climate sensitivities of 11C in a GCM (to which I say big deal, I can get a runaway greenhouse with our model, but never mind). In fact, this experiment was essentially a (rather expensive) fishing trip in which they went looking for the most extreme model results they could possibly generate. People who are heavily involved in probabilistic climate prediction will spot this and realise that the paper is not claiming too much. But of course the media and general public see the distribution and draw the obvious conclusion. SCARY GLOBAL WARMING COULD KILL US ALL!

Either Tim Palmer was fooled by the spin himself, or he gave a deliberately misleading presentation. Frankly, I'm not sure which is worse. I don't think that simplifying things for the sake of the audience is an acceptable excuse, because the simplification is in fact a serious distortion of what their results mean.

As for where that leaves my search for an alarmist to bet against, it's not so clear. Palmer and Andreae were talking about climate sensitivity (steady-state response to doubled CO2) rather than temperatures directly. This is not directly measurable and different values will not lead to substantially different global temperatures over the next few decades. Moreover, they are only talking about a fairly small probability anyway (even though it should be much much smaller).

So their exaggeration is not quite at the same level as the sceptics who deny the immediate reality of global warming and make predictions which will be clearly proven wrong in a few decades at most. OTOH these scientists are supposed to be the "good guys" and should be competent and honest. The tactic seems to be that the stuff in the peer-reviewed literature is just about ok (potentially misleading, but defensible), but then it is spun out of all reasonable proportion in the press releases and public presentation. Colour me disappointed.

Thursday, July 07, 2005

A bet with Bashkirtsev and Mashnich?

Some time ago, I mentioned the reluctance of Mashnich to bet on his forecast of global cooling by 2021-2026.

Since that post, he got in touch with me, explaining that this time scale was a bit long for him (I don't know his age) but saying that he and co-author Dr Bashkirtsev would be willing to accept my $10,000 wager for a rise/fall in temperature over the shorter time interval of 2012-2017 compared to 1998-2003. Of course over a sufficiently short time scale, the probability of anomalous cooling increases (1999 was cooler than 1998, for example) but I still consider this bet to be clearly enough in my favour to accept it. Even with the extraordinary 1998 temperatures dropping out of the average, 2000-2005 looks set to be warmer than 1998-2003, and everyone with a clue expects continued warming in the absence of massive external perturbation such as a huge volcano or nuclear winter.
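
As a rough sanity check on the odds, here is a toy simulation of the bet. The trend and interannual noise level are just my own round-number guesses (and it ignores the fact that 1998 itself was an anomalously warm El Nino year), so treat the output as illustrative only:

```python
import random

def mean_anomaly(years, trend=0.018, noise_sd=0.1):
    """Mean temperature anomaly over a set of years: linear trend (C/yr) plus noise."""
    return sum(trend * (y - 2000) + random.gauss(0, noise_sd) for y in years) / len(years)

trials = 100_000
wins = sum(
    mean_anomaly(range(2012, 2018)) > mean_anomaly(range(1998, 2004))
    for _ in range(trials)
)
print(f"P(2012-2017 warmer than 1998-2003) ~ {wins / trials:.3f}")
```

With any plausible warming trend, the 14-year separation between the two windows dwarfs the noise in a 6-year mean, which is why I consider the bet clearly in my favour.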

So, if they are serious in their offer, the first firm bet may be placed shortly (there are some details to settle, such as which measurement of temperature they would like to use). However, it was back on the 21st of June that they emailed me. I replied within a day but have not heard from them since. So maybe they are getting cold feet.

Tuesday, July 05, 2005

10C? Not likely!

I've found myself pointing people in the direction of this RealClimate page a few times, so I thought I might as well post my own thoughts on the possibility of extremely high climate sensitivity (= equilibrium temperature change for doubled atmospheric CO2 concentration) that has been suggested in several papers, and most recently Andreae, Jones and Cox (Nature 2005, subsequently AJC).

The basic approach that AJC and several previous papers use is to look at the following globally-integrated heat balance equation

c dT/dt = DQ - L.DT ... (1)

which describes how the average temperature (T) changes through time in response to radiative forcing DQ, with c being the climate system's heat capacity and L describing the radiative response to temperature change ('d' indicates a derivative, 'D' is a finite perturbation from the pre-industrial equilibrium). L determines climate sensitivity via

DT2 = DQ2/L ... (2)

where DQ2 is the radiative forcing due to doubled CO2 (about 3.7 W/m2). Note that eqn 2 is just the steady state version of eqn 1, rearranged and evaluated at doubled CO2.

Eqn 1 rearranges to

L = (DQ - c dT/dt) / DT ... (3)

We have estimates for all of the terms on the RHS of eqn 3 over the past 40-100 years or more, so in principle we can just plug them in, and out drops L and thus climate sensitivity.

Unfortunately, the terms on the RHS of eqn 3 are not known precisely, but all have uncertainties associated with them. In particular, the net radiative forcing DQ is highly uncertain. The radiative forcing has two major components, anthropogenic CO2 and sulphate aerosols. Although the radiative forcing of CO2 is well-known to be substantial and positive (currently about 2.4 W/m2), the aerosol effect is highly uncertain, with estimates ranging from zero to a negative forcing with greater magnitude than the CO2 effect. When Gregory et al did this calculation back in 2001, they found that L could turn out to be zero or even negative, meaning an unbounded or (absurdly) negative climate sensitivity. We have two major unknowns - DQ and L - and therefore (with hindsight) it is hardly surprising that we cannot simultaneously constrain them with a single equation.
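
It is easy to see what goes wrong by pushing some numbers through eqn 3 in a simple Monte Carlo. The values below are roughly TAR-flavoured but the specific distributions are my own invention, so this is an illustration of the structural problem rather than a serious calculation:

```python
import random

F2X = 3.7  # W/m2, forcing from doubled CO2 (DQ2 in eqn 2)
N = 100_000
unbounded = 0
sensitivities = []
for _ in range(N):
    DQ = random.uniform(0.0, 2.4)         # net forcing: ~2.4 W/m2 of CO2 minus
                                          # a very uncertain aerosol offset
    heat_uptake = random.gauss(0.3, 0.1)  # c*dT/dt, ocean heat uptake (W/m2)
    DT = random.gauss(0.6, 0.1)           # observed warming (C)
    L = (DQ - heat_uptake) / DT           # eqn 3
    if L <= 0:
        unbounded += 1                    # zero/negative L: no upper bound on sensitivity
    else:
        sensitivities.append(F2X / L)     # eqn 2, rearranged

sensitivities.sort()
n = len(sensitivities)
print(f"{unbounded} of {N} draws give L <= 0 (sensitivity unbounded)")
print(f"rest: median {sensitivities[n // 2]:.1f}C, 95th pct {sensitivities[19 * n // 20]:.1f}C")
```

A non-trivial fraction of the draws give L at or below zero, and the rest have a long upper tail - which is the "sensitivity could be huge" result in a nutshell, arising purely from the uncertainty in DQ.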

The AJC paper is already being publicised as "new research" showing that "global warming looks set to be much worse than previously forecast" (eg in this NewScientist article). It is, of course, no such thing. It is a "Progress" article, in other words a review paper which by design does not present substantially new research. The authors do include a simple carbon cycle feedback, which makes the warming slightly worse, but other than that they basically repeat an old approach which is well known not to provide a useful constraint on climate sensitivity.

Now, I certainly don't mean to be critical of those who originally tried this approach and found it unhelpful. But having got the result, and understood the reasons behind it, it would seem more productive to look for other ways of estimating climate sensitivity, rather than repeating a method that cannot work and throwing one's hands in the air in despair about how dreadfully scary everything is and how WE'RE ALL GOING TO DIE UNLESS WE STOP BURNING FOSSIL FUELS.

This heat balance approach in no way indicates that the more moderate range of values for climate sensitivity previously presented by the IPCC (1.5-4.5C) is any less likely than was previously thought - it only shows that this one new approach does not provide a useful constraint. Fortunately, there are many other lines of argument that can be used to constrain climate sensitivity (including evidence from paleoclimate, post-volcanic cooling, and complex GCM modelling), and although these approaches also have potential weaknesses and limitations, they all basically agree that the most plausible value for climate sensitivity is around the 3-4C mark, and some of them also seem to clearly rule out anything approaching 10C. In this context, the claim in the accompanying News article seems more than a little exaggerated:
Andreae acknowledges that there are many uncertainties about his study. But he points out that it is the best estimate we have so far. "This forces us to accept that pessimistic climate scenarios are much more plausible than had been thought," he says.
Sorry, but I do not accept this at all. There are plenty of much better estimates. There may be a case for raising the IPCC range marginally (say to 2-5C, or 1.5-6C if they want a firmer confidence interval than the previous "likely") but I would be very very surprised to see it go any higher than that.

Update: Gavin Schmidt gives a more detailed analysis of the AJC paper on RealClimate here, and draws much the same conclusion.

Monday, July 04, 2005

"If I did get such a letter, I would become desperate"

The Chronicle: Daily news: 07/01/2005 -- 01


Those who think that the Barton letters are just reasonable "oversight" should have a read of what Professor Hans von Storch says, towards the bottom of this news article. von Storch has previously been outspoken in his criticism of Mann, calling his work "rubbish."


I don't know of any scientist who could answer such questions. It's just not the way our work is done - there is far too much pressure for rapid and new results for us to maintain full "audit trails" and answer an unlimited number of questions from any troublemaker with too much time on their hands. By the time 5 years have passed, our work is either irrelevant and forgotten, or else superseded, either because it really was wrong, or because someone else improved on it. In either case, complete replication of the original is a waste of time.

Sunday, July 03, 2005

To-no-take

Blogger claims we can upload pictures now, so here's a test. Ah, it seems to work with Safari, but not Netscape 7.2.


We spent the day in the Tanzawa-Oyama Quasi-National Park, climbing To-no-take. We were surprised to get very close to some deer - there is always evidence of them, but they are usually shy. These were obviously accustomed to the crowds.

It being rainy season, the views were spectacular. Here is jules on the summit, with Fuji-san in the background. It was very different last time we were here, in the middle of February a few years ago.

Saturday, July 02, 2005

Rep. Waxman Questions Treatment of Climate Scientists

Committee on Government Reform Minority Office

Waxman has written an interesting letter to Barton concerning his harassment of several climate scientists (also blogged here, here and here).

Waxman's letter refers to Barton's statement that
the Committee [on Energy and Commerce] must have full and accurate information when considering matters relating to climate change policy
and makes the important point:
If the Committee indeed has a genuine interest in the science of global warming, you should withdraw these letters and instead schedule a long-overdue Committee hearing on climate change.
There's little chance of that happening, of course. But it does look like this fight is heading straight into the political arena, which is where it belongs.

Friday, July 01, 2005

Now you see it - now you don't

Stoat: First look at S C's MSU vn5.2?

WMC beat me to the punch on posting a graph of the corrected S&C satellite data (he was keen enough to do the spatial averaging himself, but I was waiting for S&C to do the sums).

But now the data has been taken down again (the data directory, sorted by time stamp, is here - the 5.2 version is not there at the moment). It's all very mysterious.

I guess that means the race is back on...