I have been reading up on the "hockey stick" debate, which refers to the graph of estimated global temperature featured not in the most recent IPCC report but in the previous one, the Third Assessment Report (TAR).
As the graph above makes clear, the hockey stick was a decisive point in favor of the theory of anthropogenic global warming. (I.e. Al Gore is right.) The team responsible for the above graph was Mann et al., who published it in a 1999 paper.
However, the latest version of the IPCC report (AR4) doesn't feature the above graph. This is (I gather) largely due to the scathing critiques by the team of economist Ross McKitrick and Steve McIntyre, a sharp guy who (I think) is a veteran of the mining industry.
Anyway, if you want to get more into the climate debate but you're not sure how to sift the evidence, and you don't want to waste your time with a bunch of blowhards one way or the other, then I suggest McKitrick's 18-page essay, "What is the Hockey Stick Debate About?" (pdf). At times McKitrick's analysis gets a bit technical, but always in short bursts; you can suck it up, read it, and get the gist.
(In contrast, McIntyre's award-winning site ClimateAudit is a bit too detail-oriented for me. You could spend three hours there and not really come away with anything for the larger policy debate.)
Anyway, in the remainder of this post I want to (1) very briefly recapitulate some of McKitrick's best points and then (2) highlight a very interesting part of an official response to M&M at the "consensus" site RealClimate.
M&M couldn't reproduce Mann's hockey stick graph. They finally got their hands on his computer code and realized why he was getting such striking results when they couldn't.
The problem was in how Mann et al. handled the different series of temperature records. They had estimated temperatures for each year going way back, but they were drawing on multiple data sets. E.g., tree rings from one area cover a certain span of years, while more recent periods have ocean buoys and maybe even satellite data in the mix.
So the problem is how to take all this different information and come up with a single graph of temperature going back in time (maybe with confidence bands, too). My understanding (which is surely a little simplistic) is that the following occurred:
When people in this field aggregate series that are in different units, they want to put the different types of data on the same footing. Typically they go through each separate series, subtract its mean, and divide by its standard deviation, transforming it into a series with a mean of zero and a variance of one.
Yet when Mann et al. did this transformation, for some reason they didn't subtract the series mean and divide by the series standard deviation. Instead, they subtracted (from the whole series!) the mean of just the 20th-century portion of the series, and divided by the standard deviation of just the 20th-century data points.
Now for those series with no unusual trend in the 20th century, this difference in transformation didn't matter much. The 20th century mean was close to the mean of the whole series, etc., so the final product looked pretty similar to what the standard procedure would have yielded.
However, some of the series (for whatever reason) had sharp spikes in the 20th century. For these series the 20th-century mean sits well above the mean of the whole series, so Mann et al.'s unconventional transformation decentered them: most of each series ends up offset from zero, and by the measure used in the next step, that offset shows up as extra variance relative to the series with no spike.
(NOTE: I believe I have faithfully reproduced McKitrick's explanation, though the variance claim takes some unpacking. You might think the proxies with 20th-century spikes ought to come out with lower variances, since you are dividing through by the larger standard deviation of the 20th-century sample. But the relevant "variance" downstream is deviation from zero, not from each series' own mean, and for a decentered series the offset from zero over most of its length swamps that division.)
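To make the two transformations concrete, here is a toy numerical sketch (mine, not Mann's or McKitrick's actual data or code; the 580-year length, the 80-year "20th century" window, and the size of the spike are all made up). It standardizes a flat noise series and a spiked series both ways, then measures "variance" the way the downstream step effectively does, as mean squared deviation from zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, m = 580, 80          # pretend 1400-1980, with an 80-year "20th century"

flat = rng.normal(0.0, 1.0, n_years)        # proxy with no modern trend
spiked = rng.normal(0.0, 1.0, n_years)
spiked[-m:] += np.linspace(0.0, 3.0, m)     # proxy with a 20th-century ramp

def standard(x):
    """Conventional standardization: center on the full-series mean."""
    return (x - x.mean()) / x.std()

def short_centered(x, m=m):
    """Mann-style: center the WHOLE series on the 20th-century mean/std."""
    return (x - x[-m:].mean()) / x[-m:].std()

def msq(x):
    """Mean squared deviation from zero: the 'variance' the aggregation
    step effectively sees."""
    return float(np.mean(x ** 2))

print(msq(standard(flat)), msq(standard(spiked)))              # both exactly 1
print(msq(short_centered(flat)), msq(short_centered(spiked)))  # spiked inflated
```

Under the conventional transform both series come out with unit variance by construction. Under the short-centered transform, the spiked series sits offset from zero over its first five centuries, so its mean square comes out well above the flat series', even though it was divided by a larger standard deviation.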
Now we're finally ready for the punchline: when putting all the different series ("proxies") into one master series, the researchers have to decide how much weight to give each individual proxy. Since the objective was to get a final, single series that replicated as much of the proxies' variance as possible, the rule gave more weight to the (transformed) proxies with higher variance.
Ah, but this meant that the series with (for whatever reason) spikes in the 20th century were given far more weight in Mann's finished graph than the proxies with no unusual 20th-century spike. It is not surprising, then, that Mann's finished graph looked the way it did.
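Here is a caricature of that weighting step (again my own sketch with made-up numbers; the real procedure is a principal-components calculation, not a simple weighted average). One spiked proxy among nine trendless ones grabs the largest weight, and the combined series comes out with a depressed shaft and a modern blade:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, m = 580, 80

# Nine trendless proxies and one with a 20th-century ramp (illustrative only).
proxies = [rng.normal(0.0, 1.0, n_years) for _ in range(10)]
proxies[0][-m:] += np.linspace(0.0, 3.0, m)

def short_centered(x):
    """Mann-style transform: center on the 20th-century mean/std."""
    return (x - x[-m:].mean()) / x[-m:].std()

transformed = [short_centered(p) for p in proxies]

# Caricature of the rule: weight each proxy by the "variance"
# (mean square about zero) of its transformed series.
weights = np.array([np.mean(t ** 2) for t in transformed])
weights /= weights.sum()

combined = sum(w * t for w, t in zip(weights, transformed))

print(weights.round(3))   # the spiked proxy (index 0) gets the largest weight
# The decentered spiked proxy drags the pre-20th-century level down,
# manufacturing a hockey-stick shape in the combined series:
print(combined[:-m].mean(), combined[-m:].mean())
```

Note the mechanism: every short-centered series has a 20th-century mean of exactly zero, so the heavily weighted spiked proxy shows up in the combined series as a low early "shaft" rising to the modern level.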
In fact, M&M showed that if you fed randomly generated "red noise" series (trendless, but with realistic persistence) through Mann's procedure, it would produce a "pronounced" hockey stick 99% of the time!
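A stripped-down version of that experiment can be sketched as follows (again mine, not M&M's code: simple AR(1) red noise instead of their tree-ring-calibrated persistence model, made-up dimensions, and a crude "hockey-stick index"; expect the flavor of the result, not their exact 99% figure). On the same random data, it compares the leading principal component under conventional centering versus Mann-style short centering:

```python
import numpy as np

rng = np.random.default_rng(42)
T, N, m, trials = 400, 30, 80, 50   # years, proxies, "20th century", trials

def ar1(T, phi=0.95):
    """Trendless but persistent red noise (AR(1), a stand-in for the
    more elaborate noise model M&M actually used)."""
    x = np.empty(T)
    x[0] = rng.normal()
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def pc1(X):
    """Leading principal-component time series of the data matrix X
    (time x proxies), centered however the caller chose."""
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, 0] * S[0]

def hsi(series, m=m):
    """Hockey-stick index: how far the closing-segment mean sits from
    the full-series mean, in standard-deviation units."""
    return abs(series[-m:].mean() - series.mean()) / series.std()

hsi_centered, hsi_short = [], []
for _ in range(trials):
    X = np.column_stack([ar1(T) for _ in range(N)])
    hsi_centered.append(hsi(pc1(X - X.mean(axis=0))))       # conventional
    hsi_short.append(hsi(pc1(X - X[-m:].mean(axis=0))))     # Mann-style

print(np.mean(hsi_centered), np.mean(hsi_short))
```

The point is not the exact numbers but the systematic gap: fed the very same trendless noise, the short-centered version scores a larger hockey-stick index on average than the conventionally centered one.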
I think that's enough for now. In a subsequent post I will explain the response from RealClimate, and why one portion of it is very ironic/revealing.
In the meantime, those of you who are mathematically inclined, please tell me if I'm hand-waving on anything in the above. When Crash Landing's readership exceeds 30 unique visitors per week, maybe I will be more anal and fully vet everything before posting. But for now, we're all friends here...