Tuesday, December 23, 2014

Climate capers at Cato

NOTE: The code and data used to produce all of the figures in this post can be found here.

Having forsworn blogging activity for several months in favour of actual dissertation work, I thought I'd mark a return to Stickman's Corral in time for the holidays. Our topic for discussion today is a poster (study?) by Cato Institute researchers, Patrick Michaels and "Chip" Knappenberger.

Michaels & Knappenberger (M&K) argue that climate models predicted more warming than we have observed in the global temperature data. This is not a particularly new claim and I'll have more to say about it generally in a future post. However, M&K go further in trying to quantify the mismatch in a regression framework. In so doing, they argue that it is incumbent upon the scientific community to reject current climate models in favour of less "alarmist" ones. (Shots fired!) Let's take a closer look at their analysis, shall we?

In essence, M&K have implemented a simple linear regression of temperature on a time trend,
\begin{equation}
Temp_t = \alpha_0 + \beta_1 Trend_t + \epsilon_t.
\end{equation}
This is done recursively, starting from 2014 and extending the sample backwards one year at a time until it reaches back to the middle of the 20th century. The key figure in their study, reproduced below, compares the estimated trend coefficient, $\hat{\beta}_1$, from a suite of climate models (the CMIP5 ensemble) with that obtained from observed climate data (global temperatures as measured by the Hadley Centre's HadCRUT4 series).
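In code, the procedure amounts to something like the following. (This is a minimal sketch based on my reading of the poster, not M&K's actual code; `temps` stands in for an assumed pandas Series of annual temperature anomalies indexed by year, and the exact sample bounds are my guesses.)

```python
# A minimal sketch of the recursive procedure as I read it (not M&K's actual
# code). `temps` is an assumed pandas Series of annual global temperature
# anomalies indexed by year.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def recursive_trends(temps: pd.Series, end_year: int = 2014) -> pd.Series:
    """Trend estimates (deg C per decade) with the sample end fixed at end_year."""
    out = {}
    # Start years run from a minimum 10-year trend back to the mid-20th century.
    for start_year in range(end_year - 9, 1950, -1):
        window = temps.loc[start_year:end_year]
        X = sm.add_constant(np.arange(len(window)))  # intercept + linear time trend
        fit = sm.OLS(window.values, X).fit()
        out[start_year] = fit.params[1] * 10         # per-year slope -> per decade
    return pd.Series(out).sort_index()
```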

[Figure from M&K's poster: recursive trend estimates, $\hat{\beta}_1$, from the CMIP5 ensemble versus those from the observed HadCRUT4 series.]
Since the observed warming trend consistently falls below that predicted by the suite of climate models, M&K conclude:  "[A]t the global scale, this suite of climate models has failed. Treating them as mathematical hypotheses, which they are, means that it is the duty of scientists to reject their predictions in lieu of those with a lower climate sensitivity."

Bold words. However, not so bold on substance. M&K's analysis is incomplete and their claims begin to unravel under further scrutiny. I discuss some of these shortcomings below the fold.

For starters, where are the confidence intervals? I don't mean the pseudo-intervals generated by the CMIP5 ensemble (those are really just the spread of the individual trend means across all of the climate models). I mean the confidence intervals attached to each trend estimate, $\hat{\beta}_1$. Any first-year statistics or econometrics student knows that regression coefficients come with standard errors and implied confidence intervals. Indeed, accounting for these uncertainties is largely what hypothesis testing is about: can we confidently rule out that a parameter of interest takes on some hypothesised value (or falls within some range)? Absent these uncertainty measures, one cannot talk meaningfully about rejecting a hypothesis. I have therefore reproduced the above figure from M&K's poster, but now with 95% error bars attached to each of the "observed" HadCRUT4 trend estimates.
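Before getting to that figure, here is roughly how such an interval can be computed. (Again, an illustrative sketch rather than the exact code behind my figures; the use of Newey-West standard errors and the `maxlags` value are assumptions on my part, made because annual temperatures are serially correlated.)

```python
# One way to attach a 95% confidence interval to each trend estimate.
# HAC (Newey-West) standard errors guard against serial correlation in the
# residuals; plain OLS standard errors would tend to be too narrow.
import numpy as np
import statsmodels.api as sm

def trend_with_ci(y: np.ndarray, maxlags: int = 2):
    """Return (trend, lower, upper), all in deg C per decade."""
    X = sm.add_constant(np.arange(len(y)))
    fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": maxlags})
    lo, hi = fit.conf_int(alpha=0.05)[1]  # row 1 = the trend coefficient
    return fit.params[1] * 10, lo * 10, hi * 10
```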

[Figure: reproduction of the M&K comparison, now with 95% error bars attached to each observed HadCRUT4 trend estimate.]
As we can see, there is complete overlap between these error bars and the ensemble range. M&K's bold assertion that we should reject climate models as failed mathematical hypotheses does not hold water. At no point can we say that the observed temperature trend is statistically different from the trend predicted by the climate models. Note further that this new figure is actually conservative. It only depicts error bars attached to the trend in observed temperatures, not those from the climate models. The trends generated from regressing on each of these models will come with their own error bars. Accounting for them will widen the ensemble range and further underscore the degree of overlap between the models and observations. (Again, with the computer models we have to account for both the spread between the models' mean coefficient estimates and their associated individual standard errors. This is one reason why this type of regression exercise is fairly limited -- it doubles up on uncertainty.)
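To make that last point concrete, here is a hedged sketch of how the widened envelope might be constructed. (`model_runs` is a hypothetical dict mapping each CMIP5 model to its simulated annual temperature series; nothing here is taken from M&K.)

```python
# Envelope combining (i) the spread of per-model trend estimates with
# (ii) each model's own 95% confidence band: it spans the lowest lower
# bound to the highest upper bound across models.
import numpy as np
import statsmodels.api as sm

def ensemble_envelope(model_runs: dict) -> tuple:
    lows, highs = [], []
    for y in model_runs.values():
        X = sm.add_constant(np.arange(len(y)))
        fit = sm.OLS(np.asarray(y), X).fit()
        lo, hi = fit.conf_int(alpha=0.05)[1]  # 95% CI for the trend coefficient
        lows.append(lo * 10)                  # per-year slope -> per decade
        highs.append(hi * 10)
    return min(lows), max(highs)
```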

The second major issue I have with this study concerns M&K's choice of recursive regression model and fixed starting point. Every one of these regressions is anchored on the present year, and 2014 just so happens to be a year in which observed temperatures fall below the model estimates. In other words, M&K are anchoring their results in a way that distorts the relative trends along the remainder of the recursive series. Now, you may argue that it makes sense to use the most recent year as your starting point. However, the principle remains: privileging observations from any particular year is going to give you misleading results in climate research. Rather than a recursive regression, I would therefore argue that a rolling regression offers a much better way of investigating the performance of climate models. (I sketch this alternative in code below.) Another alternative is to stick with recursive regressions, but vary the starting date. This is what I have done in the figures further below, beginning with 2005, then 2000 and then 1995. (For the sake of comparison, I keep the maximum trend length the same, so each of these goes a little further back in time.) The effect on the relative trend slopes -- and therefore the agreement between climate models and observations -- is clear to see.
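For the curious, here is a minimal sketch of the rolling alternative. (Illustrative only; the 15-year window width is an arbitrary choice of mine.)

```python
# Rolling trends: the trend is re-estimated over a fixed-width window that
# slides through the sample, so no single year anchors every regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def rolling_trends(temps: pd.Series, width: int = 15) -> pd.Series:
    out = {}
    for i in range(len(temps) - width + 1):
        window = temps.iloc[i:i + width]
        X = sm.add_constant(np.arange(width))  # intercept + linear time trend
        fit = sm.OLS(window.values, X).fit()
        out[temps.index[i + width - 1]] = fit.params[1] * 10  # label by end year
    return pd.Series(out)
```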

[Figures: recursive trend comparisons with fixed starting years of 2005, 2000 and 1995.]
1 comment:

  1. Good stuff and fairly clear that their result depends very strongly on their choice of end date (2014). So, we can expect Patrick Michaels and Chip Knappenberger to go "wow, good thing you pointed this out, thanks. We'll have to extend our analysis"...oh, wait, hold on....
