Sunday, September 24, 2017

Running hot...or not?

The question has been asked (repeatedly): are the CMIP models “running hot”? By which is meant not whether the models are too warm - they have a wide range of temperature biases, which are normally subtracted off by the use of anomalies (a separate debate) - but whether they are warming up too much relative to observations.

But I don't care about that, because I've been running too! It's been a bit warm in Hamburg and humid too, so I was a bit apprehensive about this morning's half marathon up and down a bit of river bank at the north edge of the city. 




However I didn't need to worry about that, it was grey and chilly this morning. What I should have been more concerned about is the lack of recent training and surfeit of pastries (not to mention currywurst).

It's a funny affair with another identical half marathon going off 20 mins ahead of us, that being the “Cup” event (part of a series of three races). (Fortunately I didn't find that web page until just now or I might have had to enter all of them.) But the cup runners are not all that quick, so I spent most of the race overtaking them. This wasn't really a problem as the small field of 500 runners was fairly well strung out by the time I caught them. The course was a riverside path, just hard-trodden earth which was mostly dry but a little slippery in parts.



It wasn't all as flat and smooth as this!

Plenty of sharp turns and short rises too. Despite being about 500m too short, it was still a personal worst, slower even than my very first half marathon when I'd never run that far before! 9th finisher in my race in 1:29:14, 2nd MV45 and also well beaten by one woman who was 2nd overall.

Saturday, September 23, 2017

Currywurst

I had curry for lunch on Thursday.

It was the wurst!

(Actually it was rather good, however we forgot to take a pic so you'll have to make do with this less appealing version from wikipedia.)



By massive coincidence I saw this tweet from Gavin on the same day:


which quotes from this NY Times article.

Google tells me Curry's been all over this "fundamentally dumb" idea like a rash. It must have seemed like a good wheeze to earmark some funding and publicity for those who can't raise it on the merits of their research. But now that she's obviously been tapped up for membership of the “team”, it's finally dawned on her that she'd have to work with a bunch of crazies and losers who have no idea what the hell they are talking about.

What hasn't dawned on her yet, is that that's where she belongs.

Seriously, who is she trying to kid? This is the very same Judith Curry who infamously puffed some brain-meltingly abysmal drivel by Murray Salby, doesn't know what the word “most” means, and wrapped herself in flags of convenience but couldn't explain what they meant. To name just three episodes early in her blogging career before I gave up even bothering to check what she was saying.

Apropos of not very much, she sent me her CV a couple of days ago.



Wonder why she thought I might be interested in it?

This “red team” stuff is hardly new. Who can forget the “Not the IPCC” report that never saw the light of day? Or the various attempts to set up sceptical journals or scientific societies that are invariably still-born (or more often, never-born). You'd think they'd work it out eventually. Same shit, different day, as they say in Georgia.

Thursday, September 21, 2017

Beyond equilibrium climate sensitivity

New(ish, but I'm just getting round to writing about it) review article by Knutti et al on climate sensitivity. The detailed review of published estimates is impressive; a lot of work must have gone into that. It has been spotted that the Callendar estimate is wrong: the value in the paper is about 1.8C for a doubling of CO2, which is rather lower than the value plotted in the figure. (This calculation ignores changes in clouds, so it's impressively close to what we would estimate today for the same processes.)

Probably the most important aspect of the update, however, is summarised in the figure of how radiative imbalance changes with temperature as a model warms up (after an abrupt quadrupling of CO2). Simple linear first-order modelling of the energy balance would suggest that the points should lie on a straight line, with the intercepts on the y and x axes being the initial forcing and the equilibrium temperature change respectively (and these values can be halved to get those pertaining to a doubling of CO2). A handy consequence of this is that the equilibrium response could be estimated in a climate model, without the need to run the model to equilibrium. Based on this idea (often referred to as the “Gregory method”), the equilibrium sensitivities of the CMIP models are typically estimated on the basis of a 150 year simulation following a quadrupling of CO2.
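For anyone who wants to see the arithmetic, here's a minimal sketch of that regression, with invented numbers standing in for a model's abrupt4xCO2 output (not taken from any actual model):

```python
# Toy illustration of the "Gregory method"; the data are made up for illustration.
import numpy as np

def gregory_fit(dT, N):
    """Fit N = intercept + slope*dT; the intercept estimates the forcing F,
    and the crossing of the dT axis estimates the equilibrium warming."""
    slope, intercept = np.polyfit(dT, N, 1)
    return intercept, -intercept / slope

# 150 fake "years" of annual-mean warming dT (K) and TOA imbalance N (W/m2),
# constructed so the true 4xCO2 forcing is 7.4 W/m2 and equilibrium warming 7 K.
rng = np.random.default_rng(1)
years = np.arange(150)
dT = 7.0 * (1 - np.exp(-years / 15)) + rng.normal(0, 0.1, years.size)
N = 7.4 * (1 - dT / 7.0) + rng.normal(0, 0.3, years.size)

F4x, dTeq4x = gregory_fit(dT, N)
print(f"4xCO2: F ~ {F4x:.1f} W/m2, dT_eq ~ {dTeq4x:.1f} K")
print(f"2xCO2 equivalents: {F4x/2:.1f} W/m2, {dTeq4x/2:.1f} K")  # halve for a doubling
```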

However models - and quite probably, the real world - don't behave like that. Instead, the points appear to cluster around a curve which implies the true equilibrium change is greater than that which would be estimated from analysis of an initial segment of the run.
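To illustrate the consequence (again with purely invented numbers, and an invented functional form for the curve): if the points bend rather than lying on a line, the straight-line extrapolation from the first 150 years lands short of the true equilibrium.

```python
# Toy illustration of the underestimate, not any model's actual behaviour.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(150)
F, dT_eq_true = 7.4, 9.0                        # invented 4xCO2 forcing (W/m2) and true equilibrium (K)
dT = dT_eq_true * (1 - np.exp(-years / 200.0))  # slow warming: only ~4.8 K reached by year 150
N = F * (1 - dT / dT_eq_true) ** 2              # curved relationship: N=F at dT=0, N=0 at equilibrium
N += rng.normal(0, 0.2, years.size)             # a little interannual noise

slope, intercept = np.polyfit(dT, N, 1)
print(f"true equilibrium warming:     {dT_eq_true:.1f} K")
print(f"150-year regression estimate: {-intercept / slope:.1f} K")  # noticeably smaller
```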




I can't help wondering how rapidly and widely this method would have been accepted if it had been proposed by someone less eminent. I suspect there would have been more of a “nice idea, but it doesn't really work that well” reaction. Incidentally, the behaviour is nothing to do with quadrupling per se; you get similar results for greater and lesser forcing changes. I believe quadrupling was just chosen (rather than the more conventional doubling) to get a greater signal/noise ratio in the changes.

Tuesday, September 19, 2017

Make our Planet Great Again

Our kind host has pointed us towards the German call for applications for 4-year fellowships under the joint France-Germany “Make our Planet Great Again” program. This was originally Macron's brainchild, which attracted a certain amount of media attention possibly disproportionate to its scientific importance. Now the Germans have jumped on board with an essentially parallel (albeit smaller) scheme which offers awards of up to €1.5m over 4 years to attract overseas scientists to set up groups in Germany, again focussing on climate and sustainable energy sciences. It may not be a huge initiative but it will surely be very attractive to a lot of people, including perhaps those in the UK who are uncertain what Brexit will bring. If we were remotely interested in going abroad and setting up a new research group we'd probably be applying. But we aren't.

Monday, September 18, 2017

More D&A and FvB.

By chance I happened to notice another paper with an interesting title appearing in Climatic Change on the very same day as the recent Mann et al paper: Is the choice of statistical paradigm critical in extreme event attribution studies? While my noticing it was fortuitous, the publication date was no coincidence, as it was clearly intended as a "comment on" in all but name. I am not particularly impressed by such shenanigans. I know that Nature often publishes a simultaneous commentary along with the article itself, but these are generally along the lines of a sycophantic laudation extolling the virtues of the new work. The Climatic Change version seems to be designed to invite a critical comment which does not provide the authors under attack any right to reply. Jules and I were previously supposed to be victims of this behaviour when we published this paper. However the commentary never got written, so in the end we suffered nothing more than a lengthy delay to final publication.

Anyway, back to the commentary. Is the choice of statistical paradigm critical? I can't really be bothered discussing it in any detail. The arguments have been hashed out before (including on this blog, e.g. here). The authors offer a rather waffly defence of frequentist approaches without really providing any credible support IMO, based on little more than some rather silly straw men. Of course a badly-chosen prior can give poor results, but so can an error in your computer program or a typo in your manuscript, and no-one argues that it's better to just take a guess instead of doing a calculation and writing down the answer. Well, almost no-one.


Wednesday, September 13, 2017

Blueskiesresearch.org.uk: Winton betting market

There could be something paywalled in the FT about this; the global warming policy foundation forum farce have copied a bit of it and managed to post a quote from somewhere saying
"A leading global warming expert believes that the latest UN warning on man-made climate change is a "big gamble" as temperatures have not increased since 1997"
Not really much of an expert then.

I heard about the Winton thing at the AGU, where Mark Roulston had a poster in the betting and finance session. Good to hear it's progressing.

Thanks to Victor Venema I've seen the full article which seems entirely reasonable and doesn't contain the GWPF quote anywhere, so I guess they just randomly stuck it on to make themselves look ridiculous.

Edit: the actual site seems to be here, though not functional as yet.

BlueSkiesResearch.org.uk: Blue Skies Research Ltd!

After thinking about it for a few years we have finally bitten the bullet and incorporated Blue Skies Research Ltd as a private company. Since there are two of us, we couldn't very well set up as a sole trader, and a partnership didn't seem particularly attractive either. For us, the tax situation seems fairly similar in all cases: income tax and NI contributions on the one hand, versus corporation and dividend taxes on the other.

What precipitated the action is still under discussion, but will probably (hopefully!) be blogged about some time in the future. The overarching aim is to enable us to collaborate officially in research and funding applications with other partners: if anyone out there has a bit of spare end-of-project budget burning a hole in their pocket, or is considering a funding application that could benefit from our expertise, then we would certainly be interested in hearing about it, but we aren't really planning a huge push for funding and world domination. Well, not quite yet anyway 🙂

Saturday, September 09, 2017

Encyclopedia of Geosciences

As per title. A work in progress (well, hardly started) so I wouldn't want to be too critical. A prize for the first person to find out where “climate change” is located.

That's all for now :-)

Thursday, September 07, 2017

More on Bayesian approaches to detection and attribution

Timely given events all over the place, this new paper by Mann et al has just appeared.  It's a well-aimed jab at the detection and attribution industry which could perhaps be held substantially responsible for the sterile “debate” over the extent to which AGW has influenced extreme events (and/or will do so in the future). I've argued against D&A several times in the past (such as here, here, here and here) and don't intend to rehash the same arguments over and over again. Suffice to say that it doesn't usefully address the questions that matter, and cannot do so by design.

Mann et al argue that the standard frequentist approach to D&A is inappropriate both from a simple example which shows it to generate poor results, and from the ethical argument that “do no harm” is a better starting point than “assume harmless”. The precautionary versus proactionary principles can be argued indefinitely, and neither really works when reduced ad absurdum, so I'm not really convinced that the latter is a strong argument. A clearer demonstration could perhaps have been provided by a rational cost-benefit analysis in which costs of action versus inaction (and the payoffs) could have been explicitly calculated. This would have still supported their argument of course, as the frequentist approach is not a rational basis for decisions. I suppose that's where I tend to part company with the philosophers (check the co-author list) in preferring a more quantitative approach. I'm not saying they are wrong, it's perhaps a matter of taste.
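As a toy version of what I mean (with all the numbers plucked out of thin air purely for illustration), the decision-relevant quantity is the expected loss, not whether the evidence of harm clears some significance threshold:

```python
# Back-of-envelope expected-loss comparison; every number here is invented.
p_harm = 0.30                # assumed probability that the harmful effect is real
cost_of_acting = 1.0         # cost of taking precautionary action (arbitrary units)
loss_if_harm_ignored = 10.0  # loss incurred if we do nothing and the harm is real

expected_loss_act = cost_of_acting
expected_loss_ignore = p_harm * loss_if_harm_ignored

print(f"act:    expected loss = {expected_loss_act:.1f}")
print(f"ignore: expected loss = {expected_loss_ignore:.1f}")
# Acting is the better bet here, even though a 30% probability of harm would be
# nowhere near "statistically significant" evidence that the harm exists.
```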

[I find to my surprise I have not written about the precautionary vs proactionary principle before]

Other points that could have been made (and had I been a reviewer, I'd probably have encouraged the authors to include them) are that when data are limited and the statistical power of the analysis is weak, it is not only inevitable that any frequentist-based estimate that achieves statistical significance will be a large overestimate of the true magnitude of the effect, but there's even a substantial chance it will have the wrong sign! A Bayesian prior solves (or at least greatly ameliorates) these problems. Another benefit of the Bayesian approach is the ability to integrate different sources of information. My favourite example of the weakness of traditional D&A here is the way that we can (at least this was the case a few years ago) barely “attribute” any warming of the world's oceans under this methodology. The reason for this is that the internal variability of the oceans is large (and uncertain) enough that we cannot be entirely confident that an unforced ocean would not have warmed up by itself. On the other hand, it is absurd to believe the null hypothesis that we haven't warmed it, as it has been in contact with the atmosphere that we have certainly warmed, and the energy imbalance due to GHGs is significant, and we've even observed a warming very closely in line with what our models predict should have happened. But D&A can't assimilate this information. In the context of Mann et al, we might consider information about warming sea surface temperatures as relevant to diagnosing and predicting hurricanes, for example, rather than relying entirely on storm counts.
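A quick simulation makes the point about overestimation (this is my own toy example, not anything from the paper): when the true effect is small relative to the noise, the estimates that happen to reach significance are grossly inflated, and a fair fraction of them point the wrong way.

```python
# Toy demonstration of the "significance filter"; numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
true_effect, sigma, n_studies = 0.3, 2.0, 100_000

estimates = rng.normal(true_effect, sigma, n_studies)  # one noisy estimate per hypothetical study
z = estimates / sigma                                  # z-test with known sigma
significant = np.abs(z) > 1.96                         # two-sided p < 0.05

sig = estimates[significant]
print(f"fraction reaching significance:          {significant.mean():.3f}")
print(f"mean |estimate| among significant ones:  {np.abs(sig).mean():.2f} (true effect: {true_effect})")
print(f"significant results with the wrong sign: {(sig < 0).mean():.2%}")
```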

Wednesday, September 06, 2017

Well-tossed word salad with French dressing

Bjorn put on a little conference last week in honour of our visit (or if you prefer alternative facts, we made sure our dates included the meeting). Mostly, it consisted of an up-to-date compendium of what people are currently doing in climate model development which was very interesting to us now we aren't based in a lab. Of course this work tends to be fairly incremental stuff so having a bit of a hiatus doesn't matter too much for those of us not actually working at the cutting edge of this stuff!

Interspersed between the science sessions were occasional “Cross-cutting invited presentations on the history, philosophy and sociology of Earth system science”. A fair chunk of this served only to reinforce my jaundiced view of the social sciences, but some was genuinely interesting and thought-provoking. In that respect, Wendy Parker presented an interesting discussion of the role of models and how we use them, arguing that rather than considering a model as a hypothesis to test (and which is inevitably false when examined in detail, as all models contain simplifications and approximations) we should instead form hypotheses about the adequacy of the model for a specific task, and aim to test these. I don't think this is revolutionary - surely many modellers already do think in these terms to some extent - but seeing it laid out explicitly in some detail was useful, I think.

There was some stuff about “consensus” and the role of the IPCC in contributing to the public understanding of science. I don't think this discussion really achieved a great deal. Thomas Stocker (who was present) gave a robust defence of the IPCC process, but whatever your viewpoint, this is all largely irrelevant to the process of scientific research that most attendees focus on in their day jobs. I do think the IPCC is a bit of a dead weight sometimes; as an outsider it sometimes appears that the authors consider that their main role when responding to comments is to defend their first (public) draft against all criticism, and of course there's no independent editorial control over this (as there would be in most peer-reviewed publication). But on the other hand, even this supertanker can be observed to have slowly changed its direction when we look back over a decade or so. Persistence, when combined with being right, generally wins out in the end. Thomas Stocker also presented some results from this analysis of the IPCC text from a readability point of view. He was a bit apologetic about the fairly low scores of 10-30 achieved by the IPCC SPMs, though he did point out that not only was WG1 at the high end of this, but also that the summary headline statements were generally significantly easier reading than the texts. For calibration, scientific papers are around 40, quality papers 40 and tabloids 50 on the Flesch Reading Ease score that was used.

The slide that motivated the blog title was a dense screed of text from one of the social scientists; when I analysed its final sentence with the Flesch Reading Ease formula, it achieved the notable score of -21.5. I think this probably means that it can be understood by no-one, not even the author. However she did partially redeem herself by referring to the silly 1.5C limit stuff as fake science, relying as it does on fantasy technology that doesn't yet exist (at any meaningful scale). I'm still rather disappointed by the alacrity with which the IPCC has jumped at the opportunity to write a report on this, despite the utter futility of the exercise.
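For anyone who wants to play along at home, the Flesch formula itself is simple enough; the fiddly bit is counting syllables, which the sketch below does only crudely (so treat any scores it produces as approximate):

```python
# Rough-and-ready Flesch Reading Ease calculator; the syllable count is a crude
# vowel-group heuristic, so the results are only approximate.
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Try it on your favourite summary-for-policymakers sentence.
print(flesch_reading_ease(
    "Human influence on the climate system is clear, and recent "
    "anthropogenic emissions of greenhouse gases are the highest in history."))
```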

I see I haven't really commented on the science. Um, science was being done, by lots of people, in a number of different directions. A decade, in the context of “decadal prediction”, still means 2-5 years, this being about the limit of any sort of measurable (let alone useful) prediction skill. My decadal prediction is that they'll stop calling it decadal prediction in about another decade :-) There is lots of carbon cycle modelling which I have never really got that excited by. I know it's important for determining how the climate will change (as a function of emissions) but it still seems a bit ad-hoc and empirical to me. Paleoclimate research is increasingly valuable for testing models, though I'm not sure how it will all fit into the new IPCC chapter structure that is on the verge of being approved. But as above, that's not about the science per se but merely about how it's summarised. Should I say something more? Well, if anyone has a specific interest piqued by the program, ask away. I think presentations may also appear on the website at some time. By the way, having made a rather late decision to come, Jules and I were merely attendees with no presentations of our own.

Tuesday, September 05, 2017

Practice and philosophy of climate model tuning across six US modeling centers

Paper with the above title just appeared in GMD. Despite being a European English-language journal we welcome Americans and even Americanisms, so I'll quote the title as written rather than as it should be :-) In this paper, Gavin has nicely summarised (or perhaps I may say, summarized) how approaches to model tuning vary throughout the US climate science community.

It's a slightly unusual manuscript type for GMD in that it doesn't present any technical advances (such as parameter estimation techniques, examples of which have been published in GMD previously) but instead describes the rather more ad-hoc hand tuning that model developers currently do. As such it generated some behind-the-scenes discussion as to how best to handle the manuscript within the GMD framework. We at GMD have always seen our role as enabling rather than constraining the publication of modelling science, and were already considering the concept of “review” paper types which survey a field rather than notably advancing it, so this was an opportunity rather than a problem for us. The reviewers also made constructive comments which made my job as editor fairly straightforward.

A major point of interest in the paper (and in model tuning generally) is to what extent the models have or have not been tuned to represent the 20th century warming. This has significant implications for how we would interpret their performance and potentially use the observational data to preferentially distinguish between models. Gavin has always been quite insistent that he doesn't use these data:



and I certainly have no reason to doubt his claim. On the other hand, this Isaac Held post on tuning is also worth reading. In that post, he argues that the warming is probably baked-in to some extent in the way that the models are built and evaluated during their construction. On balance I think I prefer Isaac's way of putting it to Gavin's, but it's a nuanced point. Certainly there is no question that modellers do not repeatedly re-run 20C simulations, tweaking parameter values each time until they get a good fit to the observed record. So if this is what people are envisaging when they discuss the topic of “model tuning”, then Gavin is certainly correct: this simply doesn't happen. And I'm happy to believe that some modelling teams don't run the 20C simulation at all until the very end of the model development phase, and simply send their very first set of simulation results to the CMIP database. But I've seen for myself that some groups have sometimes done these simulations at an earlier stage, and on seeing a poor result, have gone back and redesigned some aspects of the model to fix the problems that have arisen (these are likely to be more specific than just “the wrong trend”). And even beyond this, it's a bit of an open question to what extent the tuning that is done to individual model components is truly independent of our knowledge about the recent warming which constrains our estimates of various aspects of model behaviour. But given the limited nature of any such tuning (and indeed the limited agreement between models and data!) perhaps it's a close enough approximation to the truth to just call them untuned.

Saturday, September 02, 2017

Hamburg revisited

Yes we're in Hamburg again, which at least partially explains the ferry. We are back at the Max-Planck-Institut für Meteorologie for another visit courtesy of Bjorn Stevens and Thorsten Mauritsen. Who is not Mauritian despite the attempts of my spellchecker to make him so! Going by ferry enabled us to bring bicycles (in the new stealth camper van)



and rent an apartment a little way out of the centre without bothering too much with public transport (which is of course actually very good, but never as good as a bike), and also tied in with a brief visit to jules' brother and family who are conveniently located along the way in the Netherlands.


(Token photo of Dutch family cycling)

Wrong side driving in a right hand drive van with limited visibility wasn't anywhere near as fraught as I'd feared it might be, and we got to Hamburg in one piece thanks in no small part to Google's navigation on our phones. Damn the evil interfering EU and their abolition of roaming charges! Pavement cycling on all the crazy lanes in Hamburg is considerably more challenging; I have adopted the habit of tucking in behind someone who seems to know what they are doing and just following their lead. Fortunately the hamburgers seem much more tolerant of each other than the British would be in similar circumstances.

So we are back in science mode and more blogging on random topics should be forthcoming. Hooray, I hear you shout. Well actually I didn't. But I'll do it anyway.