Tuesday, June 3, 2014

Review of Chapter 3 (part 2), The Hockey Stick Illusion by A. W. Montford


In an attempt to see both sides of the debate over the question of human-caused climate change, I am starting to read this book: The Hockey Stick Illusion: Climategate and the Corruption of Science (Independent Minds) by Andrew Montford

As I read it, I will give a chapter-by-chapter review comprising quotes and commentary. So, if this is an area of interest, keep watch.

All material quoted from the book is in italics. My comments are in plain text in brackets.

With his long experience of the mining industry, McIntyre was well equipped to get to the underlying truth of a compelling graphic like the Hockey Stick. In a posting on Climate Skeptics, he pondered some similarities between the work of a mining analyst and a climate auditor.

[A]n individual time-series has much the same function as a drill-hole. Where there is an ore-body (i.e. a significant ‘signal’), the information in the individual drill-holes is not subtle. Any analyst recommending a mining stock has to look at the drill holes – not just the compilations. The application of valid statistical methods to invalid data can result in fiascos like Bre-X [a famous mining scandal in which drill-hole results had been ‘improved’]. ‘Adjustments’ are always something to be suspected.

[This is a rather unfair comparison, at best, since the Bre-X scandal represented intentional fraud, while Mann’s climate reconstructions are being discussed as a case of poor data and methodology. Of course, such a comparison does reveal one of the underlying assumptions of McIntyre and Montford, i.e. that Mann is part of a conspiracy of climate researchers trying to dupe the world into believing anthropogenic global warming is real so they can keep getting grant money to study it. This, being a central assumption of climate contrarians in general, deserves a little more discussion.

For a conspiracy of this sort to be plausible, there must be at least a reasonable motive. The stated motive by climate contrarians is that climate researchers want to keep getting expensive grants. This is an odd motive if one understands how scientific grants are distributed. Competition for grants is not based on the particular outcome of past research, but rather on the quality of past research and its relevance to furthering scientific knowledge and understanding. Whether the results from climate research show anthropogenic warming or not will not affect the potential to get funding, as long as the published results stand up to scientific scrutiny.

Of course, the argument goes something like this: since essentially all the peer-reviewed papers on climate change are in agreement, this must be because there is a conspiracy. This contention is further buttressed by the claim that any paper that finds contrary evidence gets rejected by the peer-reviewed journals. The problem with using this as evidence of a conspiracy is that it can just as easily be interpreted the other way: most of the published papers support anthropogenic warming because the evidence actually supports that interpretation, and the papers that are rejected are rejected on technical grounds, i.e. they are poorly done or are based on insufficient data.

In order to distinguish between these two possibilities it might be useful to follow the money. Sure, climate researchers stand to get grants if they successfully carry out and publish their research, but continuing to get grants does not depend on the specific outcome of the research, for or against anthropogenic warming, as long as the research is well done and passes peer review. Most of the contrarian papers have been written by scientists outside of the climate science profession, many of whom have received funding from fossil-fuel-linked sources. These kinds of funding sources do have a definite stake in the conclusions of the research they fund. The fossil fuel industry stands to lose trillions of dollars if anthropogenic warming is taken seriously. So, of the two sides in this controversy, which would be more likely to be part of a conspiracy? Who stands to gain the most from a particular outcome in the climate controversy? If there is a conspiracy, it would seem that the fossil fuel industry stands to lose the most, not climate scientists. Conspiracy or not, though, the case should be decided by the scientific evidence.]

As he got his hands on more and more proxy data, McIntyre became frustrated by the fact that most of the proxies stopped at around 1980. This meant that the dramatic warming of the 1980s and 1990s, which should have vastly inflated the ring widths, couldn’t be seen. As he tartly observed:

If the IPCC were a feasibility study for a mere $1 billion investment in a factory or a mine, you can be sure that the engineers would bring all this type of data up to date. The casualness of the IPCC process in respect to not bringing the data up to date (but relying on it for sales presentations) is really quite awe-inspiring.

[I am not even sure how to evaluate these statements. Why would proxies be used when actual measurements of temperature are so freely available? The very point of proxies is that they enable estimation of historical temperatures when and where there were no adequate direct measurements. The closer we come to the present, the more the record is covered by direct measurements. In fact, direct measurements are used to calibrate the proxy estimates. So, far from being a fault, the lack of proxies from the '80s and '90s simply reflects the availability of more accurate temperature data.

Recent proxies are needed for calibration, but there was really no need for proxies more recent than 1980, considering that very good temperature data and overlapping proxies were available for much of the 20th century, more than adequate for calibrating the older proxies.]
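[To make the calibration idea concrete, here is a minimal sketch of my own, with invented numbers and not Mann's actual procedure: fit a proxy against instrumental temperatures over the period where both exist, then invert that fit to estimate temperatures where only the proxy exists:]

import numpy as np

# A minimal sketch of proxy calibration; all series and numbers are invented.
rng = np.random.default_rng(0)

# Hypothetical instrumental temperature anomalies for 1900-1980 (degrees C)
instr_temp = 0.01 * np.arange(81) + rng.normal(0, 0.1, 81)

# Hypothetical ring widths (mm) that respond to temperature, plus noise
ring_width = 1.0 + 0.15 * instr_temp + rng.normal(0, 0.02, 81)

# Calibration: least-squares fit of temperature on ring width over the overlap period
slope, intercept = np.polyfit(ring_width, instr_temp, 1)

# Reconstruction: apply the fitted relation to older ring widths,
# for which no instrumental record exists
old_ring_widths = np.array([0.95, 1.02, 1.10])
estimated_temp = slope * old_ring_widths + intercept
print(estimated_temp)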

After a couple of weeks, and following a few gentle reminders, an email from Scott Rutherford popped into McIntyre’s inbox, indicating that the proxy data was now available on Rutherford’s FTP site at the University of Virginia.

[Much of the remaining portion of this chapter is devoted to thoroughly trashing Mann’s data. Unfortunately, as noted in the review of the first half of the chapter, McIntyre appears to have misunderstood the format of the data. Reading the extended description of McIntyre’s findings makes me almost wonder whether it goes beyond a mere misunderstanding. Some of the criticisms seem over the top. To actually believe even a part of his criticisms, one needs to make some rather astounding assumptions. First, one must assume that Mann is an abysmally sloppy researcher, given the utter mess described in McIntyre’s analysis of the data. If the data were as much of a mess as McIntyre claims, then other climate researchers, presumably more careful and skilled than Mann, would have found results contradicting Mann’s. Instead, since the publication of MBH98 and MBH99, at least a dozen other climate papers have reconfirmed the conclusions found in Mann’s papers. Given that these other studies in some cases used entirely, or substantially, different data sets, the only way to dismiss these additional studies would be to either assume that all climate researchers play fast and loose with their data, or that they are all co-conspirators willing to lie outright to support Mann’s hockey stick. I realize this is what some climate contrarians try to insist, but this is an unsubstantiated conspiracy theory, at best.

Second, one must assume that McIntyre, in contrast, is an extremely careful and skilled statistician and that he has a thorough grasp of the calculations required to perform temperature reconstructions. Considering that McIntyre has no formal training whatsoever in climate science, this latter assumption seems unwarranted. He can claim what he likes about himself and his skills, but the fact that he lacks both the relevant training and the publication history suggests he is likely a less reliable source than Mann.]

Now that the proxy database had been cleaned up, all the mistakes corrected, and up-to-date data collected, putting together a recalculation of the reconstructed temperature should have been easy, but with Mann’s description of his methods being so vague, it was still a hard task to work out exactly what he’d done.

[Given that others who have accessed the same data have found none of the grave errors identified by McIntyre, it is difficult to believe that his reconstitution of the data can be trusted. Even more outrageous is his accusation here that Mann’s description of methods is vague. This is a claim only an untrained and inexperienced climate researcher could make, as Mann’s methods are about as standard as they come in climate circles. When I first looked at Mann’s methods, I too was a bit baffled, not being a climate researcher myself. After noting Mann’s citations concerning his methods, and checking many of those sources, it appears to me that his methods are pretty clearly outlined. That McIntyre finds Mann so vague simply betrays his own ignorance of the relevant methodology, which is no fault of Mann’s. I will agree, though, that it did take a bit of digging to figure out how some of the statistics were computed, but this, again, was due to my own ignorance concerning the nuts and bolts of temperature reconstruction.]

With a good approximation of Mann’s methodology at hand, McIntyre now reached the moment of truth. How would correcting all the errors in the database affect the results? McIntyre pointed the program at the corrected data and set the calculations in train. In a minute he had the answer: when he saw the results, it was clear that the hunches he’d had when looking through the graphs of the proxies were entirely borne out. With the database corrected, the handle of the Hockey Stick was warped – that is to say, there was a pronounced Medieval Warm Period. In fact the temperatures of the reconstructed fifteenth century were even higher than those reached in the twentieth century.

[The best I can say in response to this closing salvo is to quote from The Hockey Stick and the Climate Wars:

“The paper’s dramatically different result from ours—purporting an extended warm period during the fifteenth century that rivaled late-twentieth-century warmth—was instead an artifact of the authors’ having inexplicably removed from our network two-thirds of the proxy data we had used for the critical fifteenth–sixteenth-century period.”

So, it appears that the very thing Mann is accused of, leaving out data and cherry-picking, is done by McIntyre. Even if McIntyre’s work had some credence, it does not explain why every other major climate study by other researchers came to the same general conclusions as Mann. Additionally, even if McIntyre’s reconstruction of temperatures were able to show that the MWP was as warm as, or maybe warmer than, the recent warming, it would not negate the significance of the recent warming. Since the MWP was caused by factors other than rising CO2, what does it matter that it was as warm as recent times? In some sense that might even be a scarier result. If the same factors that caused the MWP were to occur, with CO2 also rising, our predicament would be even worse, and the need to reduce greenhouse gas emissions even greater. At best, McIntyre’s conclusions, even if correct, are little more than a distraction from the documented warming of the past several decades.

Just as for the first half of Chapter 3, here are the grammatically incorrect uses of the word "data" in the last half of the chapter:]

So, according to Rutherford, and somewhat contrary to what Mann had said, the data wasn’t even in one place.

After a couple of weeks, and following a few gentle reminders, an email from Scott Rutherford popped into McIntyre’s inbox, indicating that the proxy data was now available on Rutherford’s FTP site at the University of Virginia.

[To be fair, here is one case where Montford used the word data correctly:]

At the topmost rows of the file were the data from the oldest proxies, the first starting in the year 1400.

Friday, May 30, 2014

Review of Chapter 3 (part 1), The Hockey Stick Illusion by A. W. Montford


In an attempt to see both sides of the debate over the question of human-caused climate change, I am starting to read this book: The Hockey Stick Illusion: Climategate and the Corruption of Science (Independent Minds) by Andrew Montford

As I read it, I will give a chapter-by-chapter review comprising quotes and commentary. So, if this is an area of interest, keep watch.

All material quoted from the book is in italics. My comments are in plain text in brackets.


There had been a great deal of excitement on the forum in recent months. A new study by two Harvard astrophysicists, Willie Soon and Sallie Baliunas, had just been published and its appearance had caused a huge furore in the world of paleoclimate [Soon W, Baliunas S. Proxy climatic and environmental changes of the past 1000 years. Climate Research. 2003; 23: 89–110]. Soon and Baliunas had reviewed a large dataset of paleoclimate proxies to see how many showed the Medieval Warm Period, the Little Ice Age and the modern warming. They had concluded that the Medieval Warm Period was in fact a real, significant feature of climate history. The paper had been extremely controversial, contradicting the mainstream consensus that the Medieval Warm Period was probably only a regional phenomenon. Climatologists from around the world had fallen over themselves to attack the Soon and Baliunas paper, mainly on the grounds that many of the proxies used in the study were precipitation proxies rather than temperature proxies. So great was the uproar, in fact, that several scientists resigned from the editorial board of Climate Research, the journal which had published the paper in the first place. In the face of all this opposition, the paper had gained little traction in terms of changing mainstream scientific opinion on the existence of the Medieval Warm Period. It had been a huge disappointment for the sceptic community.

[There are reasons the Soon-Baliunas paper was controversial, but they have nothing to do with the Medieval Warm Period (MWP) itself, which all climate researchers accept as having occurred; rather, the problems were the poor quality of the analysis and the claim that the MWP saw higher mean temperatures than the recent warming trend. Mann characterizes the problems well in The Hockey Stick and the Climate Wars: Dispatches from the Front Lines:

“The Soon and Baliunas study claimed to contradict previous work—including our own—that suggested that the average warmth of the Northern Hemisphere in recent decades was unprecedented over a time frame of at least the past millennium. It claimed to do so not by performing any quantitative analysis itself, but through what the authors referred to as a “meta-analysis”—that is to say, a review and characterization of other past published work.

“A fundamental problem with the paper was that its authors’ definition of a climatic event was so loose as to be meaningless. As Richard Monastersky summarized it in his article, “under their method, warmth in China in A.D. 850, drought in Africa in A.D. 1000, and wet conditions in England in A.D. 1200 all would qualify as part of the Medieval Warm Period, even though they happened centuries apart.” In other words, their characterization didn’t take into account whether climate trends in different regions were synchronous. The authors therefore hadn’t accounted for likely offsetting fluctuations—the typical sort of seesaw patterns one often encounters with the climate, where certain regions warm while others cool.

“An additional problem with the study is readily evident from Monastersky’s characterization above. Rather than assessing whether there was overall evidence for widespread warmth, the authors were asking a completely different, practically tautological question: Was there evidence that a given region was either unusually warm, or wet, or dry? The addition of these two latter criteria undermined the credibility of the authors’ claim of assessing the relative unusualness of warmth during the medieval period. These two criteria—were there regions that were either wet or dry—could just as easily be satisfied during a global cool period!

“A third problem is that the authors used an inadequate definition of modern conditions. It is only for the past couple of decades that the hockey stick and other reconstructions showed warmth to be clearly anomalous. Many of the records included in the Soon and Baliunas meta-analysis either end in the mid-twentieth century or had such poor temporal resolution that they could not capture the trends over the key interval of the past few decades, and hence cannot, at least nominally, be used to compare past and present.

“There was yet a fourth serious problem with the Soon and Baliunas study. The authors in many cases had mischaracterized or misrepresented the past studies they claimed to be assessing in their meta-analysis, according to Monastersky. Paleoclimatologist Peter de Menocal of Columbia University/LDEO, for example, who had developed a proxy record of ocean surface temperature from sediments off the coast of Africa, indicated that “Mr. Soon and his colleagues could not justify their conclusions that the African record showed the 20th century as being unexceptional,” and told Monastersky, “My record has no business being used to address that question.” To cite another instance, David Black of the University of Akron, a paleoclimatologist who had developed a proxy record of wind strength from sediments off the coast of Venezuela, indicated that “Mr. Soon’s group did not use his data properly”; he told Monastersky pointedly: “I think they stretched the data to fit what they wanted to see.””

and from the same source:

“John Holdren, the Heinz professor of environmental policy who went on to become president of the American Association for the Advancement of Science (AAAS) and presidential science adviser in the Obama administration, voiced the opinion that “The critics are right. It’s unfortunate that so much attention is paid to a flawed analysis, but that’s what happens when something happens to support the political climate in Washington.””

Another critique of the controversy is in Wikipedia: http://en.wikipedia.org/wiki/Soon_and_Baliunas_controversy]

McIntyre was one of the mainstays of the Climate Skeptics site, posting comments on a wide array of subjects. In recent weeks he’d spent a great deal of time discussing radiative physics, trying to understand how the IPCC came up with an expected temperature rise of 2.5°C every time atmospheric carbon dioxide doubled. He’d not really got anywhere with it so far (and in fact it remains a mystery to this day), but he was far from giving up hope. If there was an explanation to be had, he fully expected to find it.

[Is it possible that the retired geologist (McIntyre) and Montford just don’t accept the explanations commonly found in the literature? For example, consider this, from Global Temperature Change, Hansen, et al. (2006):

“In assessing the level of global warming that constitutes DAI [dangerous anthropogenic interference], we must bear in mind that estimated climate sensitivity of 3 ± 1°C for doubled CO2, based mainly on paleoclimate data but consistent with models, refers to a case in which sea ice, snow, water vapor, and clouds are included as feedbacks, but ice sheet area, vegetation cover, and non-H2O GHGs are treated as forcings or fixed boundary conditions. On long time scales, and as the present global warming increases, these latter quantities can change and thus they need to be included as feedbacks. Indeed, climate becomes very sensitive on the ice-age time scale, as feedbacks, specifically ice sheet area and GHGs, account for practically the entire global temperature change.”

The above quote is by no means the only place where such explanations are to be found. Notice also that it is not accurate to give a specific value, such as 2.5°C, without also incorporating the degree of uncertainty. According to Hansen, et al. (2006) the range is 2–4°C, and if you read the paper in more detail the authors also admit that, although it is much less likely, the sensitivity could be lower or higher than this range. Part of how Montford mischaracterizes the debate is to always lowball the numbers, seeming unwilling to accept much, if any, change in temperature due to increasing CO2. If he would at least engage in an honest debate it would be better.]
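[For readers who want to see where numbers in this range come from, here is a rough back-of-the-envelope sketch. It uses the standard simplified expression for CO2 radiative forcing from Myhre et al. (1998), ΔF = 5.35 × ln(C/C0) W/m²; the sensitivity parameters below are illustrative values of my own choosing, not figures taken from Hansen et al.:]

import math

# Simplified CO2 radiative forcing (Myhre et al. 1998): dF = 5.35 * ln(C / C0), in W/m^2
def co2_forcing(c_new_ppm, c_old_ppm):
    return 5.35 * math.log(c_new_ppm / c_old_ppm)

forcing_doubling = co2_forcing(560.0, 280.0)   # roughly 3.7 W/m^2 for a doubling of CO2

# Equilibrium warming = sensitivity parameter (K per W/m^2) times forcing.
# ~0.3 is roughly the no-feedback (Planck) response; ~0.8 corresponds to the
# commonly cited ~3 C per doubling once feedbacks are included. Illustrative values only.
for sensitivity in (0.3, 0.8):
    print(sensitivity, round(sensitivity * forcing_doubling, 1))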

While McIntyre’s readings in climatology broadened, he also began discussing the IPCC’s claims of unprecedented warmth with friends and acquaintances. His contacts in the mining industry were particularly interesting on the subject. Familiar as they were with the long-term history of the Earth, many of the geologists McIntyre spoke to had strong opinions on claims that recent temperatures were unprecedented and most were highly sceptical of the idea. When it came to the Hockey Stick itself, mining people – geologists, lawyers and accountants – were openly contemptuous. Hockey sticks were a well known phenomenon in the business world, and McIntyre’s contacts had seen far too many dubious mining promotions and dotcom revenue projections to take such a thing seriously. The contrasting reactions to the Hockey Stick of politicians and business people – on the one hand doom-laden predictions of catastrophe and on the other open ridicule – acted as a spur to McIntyre, who flung himself headlong into the world of climatology.

[Two things here: First, geologists and geochemists as a group tend to be automatically skeptical of climate science, both because they are not typically trained in the disciplines needed to fully understand climate modeling and because they are typically employed by an industry that sees signs of environmental problems as a threat to their profession. This is especially true of geologists employed by the fossil fuel industry, which stands to lose a lot of revenue if politicians decide we must respond to climate change by reducing carbon emissions. Secondly, to dismiss the hockey stick just because it resembles something you have seen in a boardroom presentation seems like a poor reason to question the validity of the data supporting its construction. Of course, this is only what led McIntyre to approach Mann’s work with skepticism, not his stated reason for rejecting Mann’s work later, but it does suggest an interesting, non-scientific bias.]

Within a matter of days of his announcement, McIntyre was posting findings to the Climate Skeptics forum. He had now worked through Mann’s explanation of his methodology and he had soldiered his way through the matrix algebra. It was still very strange. The use of PC analysis was new in the realm of paleoclimate and Mann had made no attempt to prove the validity of the technique in the field, instead relying on a bold assertion that it was better than the alternatives. In view of this and given the surprising results – with no Medieval Warm Period or Little Ice Age visible in the reconstruction – one might have expected that experts in the field would have questioned whether Mann’s novel procedures might have been a factor in his anomalous results. But despite a thorough search of the literature, there was no sign that anyone else had seen fit to probe the issue further. Nor had any other researchers adopted Mann’s methodology in the five years since his paper had been published. Given how often the Hockey Stick had been cited in the scientific literature, these were very surprising observations, which seemed to suggest that paleoclimatologists liked Mann’s results rather more than they liked his methodology.

[Is Montford just being dishonest, or did he not even bother to check the MBH99 paper (Mann, et al., Northern hemisphere temperatures during the past millennium: Inferences, uncertainties, and limitations)? All one has to do is read that paper to realize two things. First, Mann only estimated temperatures back to 1400 with any real confidence, which is just after the MWP, so of course he left it out of the hockey stick figure. He even explains in the paper why temperatures before 1400 are treated more cautiously, i.e. because the data were inadequate. He also mentions the MWP, but because he suggests that the peak temperatures during that period only “approached” mean 20th century levels, Montford (and McIntyre) interpret that to mean that Mann has done away with the MWP. He has not done away with it; he is just saying that its peak temperatures were not as high as Montford would like them to be, i.e. higher than any time in the recent past or the present.

Secondly, Mann does include the Little Ice Age (LIA) in the hockey stick figure and even mentions it, just not by name, which Montford apparently interprets as meaning it is absent. If you don’t agree with my two statements here, read the paper for yourself; here is what the authors say in their conclusions:

“Although NH reconstructions prior to about AD 1400 exhibit expanded uncertainties, several important conclusions are still possible. While warmth early in the millennium approaches mean 20th century levels, the late 20th century still appears anomalous: the 1990s are likely the warmest decade, and 1998 the warmest year, in at least a millennium. More widespread high-resolution data which can resolve millennial-scale variability are needed before more confident conclusions can be reached with regard to the spatial and temporal details of climate change in the past millennium and beyond.”

To top it off, note the tentative nature of the statements in the conclusion, in contrast to Montford, who is just plain certain that Mann is wrong.

Montford also claims that no other researchers were using the methods of Mann in the five years following MBH99. Apparently Montford either did not check the literature very thoroughly, or his definition of researchers having “adopted Mann’s methodology” is so narrow that unless they did exactly what Mann did, Montford would assume they had not “adopted Mann’s methodology.” The odd thing about this is that the techniques used by Mann, for the most part, were standard, accepted methods used by most climate researchers. In just a brief search of the literature from 1999-2004 I found several papers that appear to have used the general approach used by Mann. Besides, there are also studies using alternative methods that found largely the same results as Mann during this same time period.]

Another issue was also attracting McIntyre’s attention. During his calibration exercise, Mann had assessed how well the temperature data matched up against the proxies by calculating various statistical measures – in other words, numbers that acted as a score of how good the match was. The main way he did this was using a measure that he called the beta (β), which he described as being ‘a quite rigorous measure of the similarity between two variables’.

This was a somewhat surprising choice since the beta statistic was virtually unheard of outside climatology circles. (It also goes by the names of the ‘resolved variance statistic’ or the ‘reduction of error (RE) statistic’ – the latter being the term we will use to refer to it henceforward.) With his experience in statistics, McIntyre was aware that there was great danger in using novel measures like these, whose mathematical behaviour hadn’t been thoroughly researched and documented by statisticians. The statistical literature was littered with examples where particular statistical measures gave results which misled in certain circumstances. Mann had left no clue as to why he had preferred the RE rather than the more normal measures of correlation, such as the correlation (r), the correlation squared (R2) or the CE statistic. The behaviour of all of these measures under a wide range of scenarios was well documented, so McIntyre was surprised not to see an explanation.

[There is a mixture of truth and deception in these two paragraphs. It is true that RE is used within climatology circles, and it has in fact been in use since the 1950s; although I cannot confirm Montford’s point that RE is ONLY used in climatology circles and not elsewhere, I am not sure why this would make any difference anyway. This statistical method was developed by Lorenz (1956, Empirical orthogonal functions and statistical weather prediction) and is referred to in standard climatology textbooks and scholarly books. For example, Methods of Dendrochronology: Applications in the Environmental Sciences by E.R. Cook and L.A. Kairiukstis (1990) says this:

4.3.4. Reduction of error


“The reduction of error (RE) statistic provides a highly sensitive measure of reliability. It has useful diagnostic capabilities (Gordon, 1980) and is similar, but not equivalent, to the explained variance statistic obtained with the calibration of the dependent data (Lorenz, 1956; 1977). Therefore, RE should assume a central role in the verification procedure. The equation used to calculate the RE can be expressed in terms of the ŷi estimates and the yi predictions that are expressed as departures from the dependent period mean value:

RE = 1 − [ Σ (yi − ŷi)² / Σ yi² ]     (4.39)
  “The term on the right of (4.39) is the ratio of the total squared error obtained with the regression estimates and the total squared error obtained using the dependent period mean as the only estimate (Lorenz, 1958, 1977; Kutzbach and Guetter, 1980). This average estimation becomes a standard against which the regression estimation is compared. If the reconstruction does a better job at estimating the independent data than the average of the dependent period, then the total error of the regression estimates would be less, the ratio would be less than one, and the RE statistic would be positive.”


“Two verification statistics are presented here that were common to all of the reconstructions: the product-moment correlation coefficient and the reduction of error statistic. Each statistic is commonly used in dendroclimatic reconstructions. The product-moment correlation coefficient (r) is a parametric measure of association between two samples. Its use in testing for hypothesized relationships between variables is described in virtually all basic statistics texts and in Fritts (1976). The reduction of error (RE) statistic is less well known. It was developed in meteorology by Lorenz (1956) for the purpose of assessing the predictive skill of meteorological forecasts. The RE has no formal significance test, but an RE > 0 is an indicator of forecast skill that exceeds that of climatology (i.e. extrapolating the climatic mean as the forecast or prediction). See Fritts (1976), Gordon and LeDuc (1981), and Fritts and Guiot (1990) for full descriptions of this statistic, its small sample properties, and other verification tests as well.”

So, why is Montford surprised that Mann would use a statistic that is widely used by climate scientists? Additionally, Montford faults him for using RE and not r or R2, which are much more widely used. In most papers by Mann, and follow-ups to MBH99, he includes not just RE, but also one or both of the latter. When both RE and r2 are reported side by side as they are in Zhang, Mann and Cook (2003, Alternative methods of proxy-based climate field reconstruction: application to summer drought over the conterminous United States back to AD 1700 from tree-ring data), RE is the more conservative statistic, causing the rejection of more data than r2. This runs counter to the apparent concerns of Montford (although his exact concerns are not clear) that Mann is including bad data in his analyses. Since RE is more conservative, Mann is more likely to have left out some potentially good data, and most certainly to have excluded any bad data that would also have been excluded by r2.]
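[To see concretely how the two scores behave, here is a small illustration of my own; it is not a calculation from MBH98 or from McIntyre. RE compares a reconstruction’s squared error with the error made by simply using the calibration-period mean everywhere, while r² measures correlation only and so ignores errors of level and scale:]

import numpy as np

def reduction_of_error(obs, est, calib_mean):
    # RE = 1 - SSE(estimates) / SSE(using the calibration-period mean as the only estimate)
    sse_est = np.sum((obs - est) ** 2)
    sse_mean = np.sum((obs - calib_mean) ** 2)
    return 1.0 - sse_est / sse_mean

def r_squared(obs, est):
    return np.corrcoef(obs, est)[0, 1] ** 2

rng = np.random.default_rng(3)
obs = rng.normal(0.0, 1.0, 50)              # "observed" values in a verification period
good_est = obs + rng.normal(0.0, 0.3, 50)   # tracks the observations closely
biased_est = obs + 2.0                      # right shape, but the wrong level

calib_mean = 0.0   # assumed mean of the calibration period

print(reduction_of_error(obs, good_est, calib_mean), r_squared(obs, good_est))
print(reduction_of_error(obs, biased_est, calib_mean), r_squared(obs, biased_est))
# The biased series keeps a perfect r^2 (it is just a shifted copy), but its RE
# collapses to a large negative value, one reason RE is often described as the
# stricter of the two tests.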

Mann indicated in the paper that the r and R2 had also been calculated, which might have provided some reassurance to McIntyre but for the fact that the results of these calculations were not presented for the calibration step anywhere in the paper or in the online supplementary information. However, by now McIntyre had got hold of the data for the second Hockey Stick paper, MBH99 – the extension back to the year 1000 – so he was able to start to make some significant progress in answering some of these questions. Because the number of proxies used in MBH99 was so small (there being very few proxies that extended so far into the past) it was a relatively straightforward task for McIntyre to recreate Mann’s calibration and to calculate some of the correlation statistics for himself. The results were eye-opening, to say the least. As he reported to the climate sceptics:

The R2 . . . ranges from –0.006 to 0.454; on this basis, only 2 of 13 proxies have R2 adjusted over 0.25, and 7 of 13 have values under 0.1 . . .

To put this in perspective, R2 will normally vary between 0 and 1. A score of 0 indicates that there is no correlation at all, and 1 indicates perfect correlation. So what McIntyre was seeing was that the proxies and the temperature PCs didn’t really match up very well, according to a standard measure of correlation. The best among them were not even halfway good, and some simply showed no correlation at all. Could this explain why Mann was so enthusiastic about the RE statistic, the climatologists’ own measure of correlation?

[This would be pretty damning stuff, and pretty surprising to find in data used in a peer-reviewed scientific paper. How could this happen? Well, what is not done in this book is any update to McIntyre’s work in light of later findings. First, why should we trust McIntyre, who does not do these kinds of statistics routinely the way most climate scientists do, to produce more accurate results than Mann? If there is a discrepancy between Mann’s and McIntyre’s results, shouldn’t we suspect that McIntyre is the one making the mistakes?

In his book The Hockey Stick and the Climate Wars, Mann says this in reference to the above criticisms leveled by McIntyre on Montford’s climate blog:

“To be specific, they claimed that the hockey stick was an artifact of four supposed “categories of errors”: “collation errors,” “unjustified truncation and extrapolation,” “obsolete data,” and “calculation mistakes.” As we noted in a reply to a McIntyre and McKitrick comment on MBH98 that had been submitted to and rejected by Nature (because their comment was rejected anyway, our reply would not appear there either), those claims were false, resulting from their misunderstanding of the format of a spreadsheet version of the dataset they had specifically requested from my associate, Scott Rutherford. None of the problems they cited were present in the raw, publicly available version of our dataset, which was available at that time at ftp://holocene.evsc.virginia.edu/pub/MBH98/.”

What Montford also overlooks is that if you actually take Mann’s correct data, along with the RE and r2 values found in his papers that report both statistics, then for the vast majority of his data both measures agree on what constitutes good and bad data. Where data are rejected by one measure and not the other, the rejections are due to the RE statistic. Considering that this is a revised edition of the book, republished in 2011, it is dishonest of Montford not to have corrected these problems, or at least addressed them, rather than perpetuating false and incorrect critiques of MBH98 and MBH99.]

[Further Examples of Grammatically Incorrect Use of the Word “Data”:]

He could see that Mann had used a network of 112 proxy series, and in fact behind the scenes there was even more data than this.

The data that Mann used was the CRU’s best stab at what the actual temperatures had been for the previous 150-odd years, and as we’ve noted, CRU’s data was reckoned to be the best.

He also tried regressing nineteenth century proxy data against twentieth century temperatures and found no great difference in the R2 score to those achieved when the correct proxy data was used.

On an even simpler level, there was a great deal about the data used in the MBH99 reconstruction that was peculiar.

[It looks as if Montford’s grammatically incorrect use of the word “data” is consistent, given that four more cases occur in the first half of chapter 3. He needs more than just a better editor; he needs someone versed in scientific writing to help him.]

Thursday, May 29, 2014

Review of Chapter 2, The Hockey Stick Illusion by A. W. Montford


In an attempt to see both sides of the debate over the question of human-caused climate change, I am starting to read this book: The Hockey Stick Illusion: Climategate and the Corruption of Science (Independent Minds) by Andrew Montford


As I read it, I will give a chapter-by-chapter review comprising quotes and commentary. So, if this is an area of interest, keep watch.


All material quoted from the book is in italics. My comments are in plain text in brackets.


One problem that occurs in calibration is that any relationship we might have been able to calculate could have arisen purely by chance. In other words, just because tree ring width multiplied by ten happened to be equal to temperature in the twentieth century, that doesn’t mean that it was always that way. For example, twentieth century ring widths might have been lower than normal, say because of insect infestation in that particular set of trees, or maybe even in trees in general. Another possibility is that the tree isn’t actually responding to temperature at all, but to something else. If any of these issues were really affecting tree growth, it might be that the normal relationship is actually an extra 0.15 mm of growth from a rise in temperature of 1°C.

[This is nothing new, and climate researchers have already spent considerable effort addressing the problem; Montford is just grasping at straws here. As just one example of the ongoing research in this area, see Dendroclimatology: Progress and Prospects, edited by M. K. Hughes, Thomas W. Swetnam, and Henry F. Diaz (2011).]
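[To illustrate the standard safeguard against the chance-correlation problem Montford raises, here is a sketch of my own showing split-period calibration and verification; all of the series are invented for the purpose of illustration:]

import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1880, 1981)
temp = 0.008 * (years - 1880) + rng.normal(0, 0.1, years.size)     # invented temperature record

real_proxy = 1.0 + 0.15 * temp + rng.normal(0, 0.03, years.size)   # genuinely temperature-driven
junk_proxy = rng.normal(1.0, 0.05, years.size)                      # unrelated to temperature

def verification_re(proxy, temp, split=50):
    # Calibrate on the early half, then score on the withheld late half using RE
    slope, intercept = np.polyfit(proxy[:split], temp[:split], 1)
    est = slope * proxy[split:] + intercept
    obs = temp[split:]
    sse_est = np.sum((obs - est) ** 2)
    sse_mean = np.sum((obs - temp[:split].mean()) ** 2)
    return 1.0 - sse_est / sse_mean

print("real proxy RE:", round(verification_re(real_proxy, temp), 2))   # positive: genuine skill
print("junk proxy RE:", round(verification_re(junk_proxy, temp), 2))   # near or below zero: no skill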

If you have a large number of proxies, there are two main ways in which you can go about the calibration. The first of these has been described as ‘the Schweingruber method’, or composite-plus-scale (CPS), and involves taking proxies that are expected to be temperature sensitive, calibrating them against local temperatures and essentially taking an average. The other way, the ‘Fritts method’, or climate field reconstruction, involves taking lots of proxy series, which are sometimes not even responding to their local temperatures, and seeing if some sort of correlation can be found with temperature measurements somewhere in the wider vicinity. What emerges from this latter method is essentially a weighted average of the full proxy set, with the temperature sensitive proxies having a much higher weight than the non-temperature-sensitive ones. It’s this Fritts method that is the relevant one for our story. The Fritts method involves a certain leap of faith to trust that trees that are not responding to their own local temperature can nevertheless detect a signal in a wider temperature index. You have to believe in the existence of something called ‘teleconnections’, whereby temperatures in a possibly distant part of the world affect the climate in the locale of the tree in such a way as to affect its growth, and in a consistent manner. If this sounds implausible to you, then you are not alone. However, the reality of the mechanism is accepted by the paleoclimate community and for the purposes of our story that’s what you need to know.

[Firstly, the work of Fritts (Tree Rings and Climate, 1976) has stood the test of time, and his methods have been tested and retested by literally hundreds of scientists. It seems rather arrogant for an armchair climate “expert” to sweep such work aside so cavalierly. His innuendo concerning the silliness of “teleconnections” is beyond the pale. Teleconnections are not some hocus-pocus. There is more than a half century of empirical evidence for them, and there are some good hypotheses about the underlying atmospheric processes. As an example, here is a quote from The Global Climate System: Patterns, Processes, and Teleconnections by Howard A. Bridgman and John E. Oliver:

“Teleconnection is a term used to describe the tendency for atmospheric circulation patterns to be related, either directly or indirectly, over large and spatially non-contiguous areas. The AMS Glossary of Weather and Climate (Geer 1996) defined it as a linkage between weather changes occurring in widely separated regions of the globe. Both definitions emphasize a relationship of distant processes. However, the word "teleconnection" was not used in a climate context until it appeared in the mid 1930s (Angstrom 1935), and even until the 1980s was not a commonly used term in the climatic literature.

“As stressed throughout this book, teleconnections are often associated with atmospheric oscillations. Any phenomenon that tends to vary above or below a mean value in some sort of periodic way is properly designated as an oscillation. If the oscillation has a recognizable periodicity, then it may be called a cycle, but few atmospheric oscillations are considered true cycles. This is illustrated by the early problems in predicting the best-publicized oscillation, the Southern Oscillation and El Niño (Chapter 2). Were this totally predictable then many of its far-reaching impacts could be forecast.”]

The PCs are often described as being like the shadow cast by a three-dimensional object. Imagine you are holding an object, say a comb, up to the sunlight, and it is casting a shadow on the table in front of you. There are lots of ways you could hold the comb, each of which would cast a different shadow onto the table, but the one which tells you the most about the object is when you expose the face of the comb to the light. When you do this, the sun passes between the teeth and you can see all the individual points. You can tell from the shadow that what is being held up is a comb. This shadow is analogous to the first PC. Now rotate the comb through a right angle, so that you are pointing the long edge of the comb to the sun. If you do this, the shadow cast is just a long thin line. You can see from the shadow that you are holding a long thin object, but it could be just about anything. This would be the second PC. It tells us something about the object, but not as much as the first PC. You can rotate through a right angle again and let the sunlight fall on the short edge of the comb. Here the shadow is almost meaningless. You can tell that something is being held up, but it’s impossible to draw any meaningful conclusions from it. This then, is the third PC.

[Montford spends considerable time in this chapter explaining PCA (Principal Components Analysis) in lay terms. This is a worthy undertaking, but some of his description is a bit of an oversimplification. The “shadow” metaphor is a crude way to explain what PCA results mean. At least he makes some mention of variance, considering that it is variance that is being partitioned, i.e. what proportion of the variance can be explained by each component. It also is a bit annoying for him to abbreviate PCA as just PC, when PCA is the standard abbreviation used in most publications.]
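[For anyone who wants to see the variance partitioning directly, here is a tiny sketch of my own using a synthetic cloud of points shaped roughly like the comb in Montford’s metaphor: long in one direction, narrower in a second, and very thin in the third:]

import numpy as np

rng = np.random.default_rng(1)
# A comb-like cloud: most spread along one axis, some along a second, almost none along the third
points = rng.normal(0.0, [10.0, 2.0, 0.2], size=(1000, 3))

centred = points - points.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centred, full_matrices=False)

explained = singular_values ** 2 / np.sum(singular_values ** 2)
print(explained)   # roughly [0.96, 0.04, 0.0004]: the first PC accounts for nearly all the variance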

And that’s it: that’s all you need to know. Throughout the months and years of bitter argument over Mann’s Hockey Stick, this simple step was the only part of the PC analysis that was in dispute. For the purposes of the story it is not really necessary to understand anything else about how the subsequent calculations work. You can think of PC analysis as a big black box which takes the centred data and churns out as many patterns as are felt necessary. It is, however, useful to understand just a little of the detail of what happens to the centred data, as it will help explain just why centring is so important.

[This comes after Montford describes what centring is, a data transformation method used in Mann’s PC analysis. What he forgets to point out is that, although a lot of attention was paid to this detail of Mann’s analysis, the results do not even depend on that choice. More than a dozen independent analyses since Mann’s first hockey stick, using a variety of different approaches and data sets, have produced largely the same results. This seems to be just another example of how climate change contrarians pick some minor detail that they then treat as a fatal defect in the interpretation of the results.]
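[As a small, self-contained illustration of what centring does, and why it is considered integral to PC analysis, here is a sketch of my own; it is not Mann’s code and it does not reproduce the disputed ‘short-centring’ step. With synthetic proxies built from a shared signal plus arbitrary baselines, the leading loadings track each proxy’s sensitivity to the signal when the data are centred, and mostly track the arbitrary baselines when they are not:]

import numpy as np

rng = np.random.default_rng(42)
n_years, n_proxies = 200, 20

signal = np.cumsum(rng.normal(0.0, 0.1, n_years))         # a shared, slowly wandering "climate" signal
sensitivities = rng.uniform(0.5, 2.0, n_proxies)            # how strongly each proxy carries that signal
baselines = rng.uniform(5.0, 15.0, n_proxies)                # arbitrary offsets, as real series have
proxies = baselines + np.outer(signal, sensitivities) + rng.normal(0.0, 0.3, (n_years, n_proxies))

def first_loading(x):
    # Loading pattern of the first principal component, obtained via singular value decomposition
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[0]

loading_centred = first_loading(proxies - proxies.mean(axis=0))   # each proxy centred on its own mean
loading_raw = first_loading(proxies)                               # centring step skipped

# With centring the leading loadings line up with the sensitivities;
# without centring they line up with the arbitrary baselines instead.
print(abs(np.corrcoef(loading_centred, sensitivities)[0, 1]))
print(abs(np.corrcoef(loading_raw, baselines)[0, 1]))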

P. S.

[I found a second case of the incorrect use of the word "data" in this chapter (and more in Chapter 3, which I am now reading), which leads me to believe that Montford really is not a scientist. I know, I know, some will accuse me of being nitpicky, but careful analysis and correct writing go hand in hand in science, and not learning that "data" is a plural word, and then repeatedly using it as a singular when discussing scientific research, shows a degree of sloppiness and a lack of scientific professionalism. Here is the quote, at any rate:]


"It is important to realise, however, that this result is only achieved if the data is first centred. Because of this, centring is considered an integral part of PC analysis."

Review of Chapter 1, The Hockey Stick Illusion by A. W. Montford

In an attempt to see both sides of the debate over the question of human-caused climate change, I am starting to read this book: The Hockey Stick Illusion: Climategate and the Corruption of Science (Independent Minds) by Andrew Montford

As I read it, I will give a chapter-by-chapter review comprising quotes and commentary. So, if this is an area of interest, keep watch.



My primary purpose in reading and reviewing this book is to critically assess the arguments against the theory that humans are the primary cause of climate change over the past 100 years. Since this book is often cited as the primary and best case against the status quo among climate researchers, I chose to read it.

That this is the best case against human-caused climate change seems somewhat surprising, in that the author is not a trained climate researcher. In fact, he has only an undergraduate degree in chemistry and is a chartered accountant in the UK. His entire claim to credibility is that he has followed the controversy and written on it extensively. Since he has apparently not published anything in the peer-reviewed scientific literature, he does not seem to be very well qualified to enter the debate. Still, since his work is so often cited by climate change critics in their defense, I will try to keep an open mind.

All material quoted from the book is in italics. My comments are in plain text in brackets.

The conference was instructed to review the state of knowledge of climatic change and variability, due both to natural and anthropogenic causes, and also to assess what this meant for humankind. In the way that bureaucracies sometimes do, however, the scientists actually did something slightly but tellingly different to what they had been asked to do. Rather than simply assess the state of scientific knowledge and consider what might happen in the future, they set out the steps they thought policy makers should take in a ‘Call to Nations’ that was issued at the end of the conference. This statement called for full advantage to be taken of man’s knowledge of climate, for steps to be taken to improve that knowledge, and for potential manmade changes to climate to be foreseen and prevented. This then was not merely a call for more research, but also a demand for a particular policy outcome – prevention rather than adaptation. One can almost detect the germ of an idea forming in the minds of the scientists and bureaucrats assembled in Geneva: here, potentially, was a source of funding and influence without end. Where might it lead?

[Right at the outset Montford is already assuming that the motive of the climate research community is greed. Not a good sign of objectivity. Climate researchers do not stand to gain more by finding that human activity is a major contributor to climate change. In fact, if they really wanted to maximize the money they could get, they should keep calling for more study because their results are so uncertain. Nudging society to assess the sociological and economic consequences of climate change would hardly seem like a way to get more funds. In fact, it would more likely dilute what funds they were getting, because policy studies would require funds too, and climate scientists themselves would not be the ones to reap those “benefits.”]

The exact origins of the chart presented by the IPCC were, at the time, obscure; rather strangely, the report did not contain a citation or other indication of its authorship. Although it appeared to be a schematic or cartoon rather than a proper graph, it must have had some basis in scientific research, but quite what this basis was was not discovered until many years later, when it was shown to be derived from the work of a British climatologist called Hubert Lamb. Lamb, while an important scientist, was born in 1913 and the chart turns out to have been based on work he did in the 1960s. The relative antiquity of this climate history might explain the reluctance of the IPCC to explain its provenance. What was still more surprising was that Lamb’s work turned out to be largely based on the Central England Temperature Record, a long series of instrumental readings, which dated back to the mid-seventeenth century. In other words, the understanding of world climate history propagated to the public by the IPCC was based, not on any understanding of global climate, but on the records for just one part of England: an odd situation to say the least.

[This seems a really odd admission, when the very page of the IPCC report that carries the figure in question contains a reference to Lamb (1988) in its mention of the Medieval Warm Period. If Montford had bothered to simply look up that reference he would have found several figures by Lamb that are clearly the basis for the figure. He could also have simply contacted one of the authors of the chapter in question. One wonders why he, or some other climate contrarian, didn’t just ask.]

A few months after Deming’s revelations about the fate of Huang’s paper, the second IPCC report picked up on the changing attitudes towards the Medieval Warm Period. The report’s authors noted that:

Based on the incomplete observations and paleoclimatic evidence available, it seems unlikely that global mean temperatures have increased by 1°C or more in a century at any time during the last 10,000 years.

and went on,

The limited available evidence from proxy climate indicators suggests that the 20th century global mean temperature is at least as warm as any other century since at least 1400 AD. Data prior to 1400 are too sparse to allow the reliable estimation of global mean temperature.

This represented a significant change in emphasis by the IPCC. The story in the FAR, of a pronounced Medieval Warm Period with temperatures exceeding modern ones, had been replaced by a new narrative, in which it was said that modern warmth was probably unprecedented – or at least as high as anything seen in the last six hundred years. And if anyone were to question how all the historical records of warm temperatures in the medieval period could be wrong, it was explained that these were a regional phenomenon and that overall, the globe appeared to have been no warmer back then than it was at present.

[Uh, not sure where to start here. The IPCC are being very candid and transparent here by saying that current temperatures (late 20th century) are “at least as warm” as any other time after 1400. They could easily have been less equivocal, as the data were pretty strong that current temperatures were higher, but as scientists, they were typically being conservative in their conclusions. It also needs to be noted that they are not even commenting on how current temperatures relate to the Medieval Warm Period, as they felt the data for that period were too unreliable. Montford claims this is because the work of Huang (1997) had originally been rejected by Nature. Having a paper rejected by Nature is no great distinction; it happens all the time. Thus, it is rather unfair to fault the IPCC 2nd Report, released in 1995, for not taking into account data from a paper that was not actually published until 1997.]

What then of the findings? The abstract of the paper explained that Mann and his team had been able to reconstruct temperatures since the year 1400 and that recent temperatures were warmer than any other year since the start of their records. In the remainder of the paper, they went on to assess possible reasons for the dramatic change in temperatures by testing how the graph of their reconstruction correlated against possible causes (‘forcings’ in the jargon), such as atmospheric dust, solar irradiance and carbon dioxide. It will be no surprise to anyone that their conclusion was that the only potential culprit was carbon dioxide. The implications were once again clear: mankind was warming the globe. Here then was the beginning of the end of the process of getting rid of the Medieval Warm Period. All that was lacking was a degree of publicity, something that was to be dealt with in fairly short order, as we will see.

[Montford is confounding the argument. The IPCC made no judgments about the Medieval Warm Period, so how can they be blamed for getting rid of it? This is a dishonest way of telling the history of the hockey stick and the IPCC. I am pretty well acquainted with the reports over the years, and the Medieval Warm Period has never been “done away with.” In fact, very little about it has changed, except that more accurate data than Lamb’s have been applied to the phenomenon. All that aside, warming now is clearly unprecedented compared to the Medieval Warm Period. Also, a lot more study has gone into the causes of that period, since it does stand out as a unique phenomenon from the “recent” past. What Montford forgets to say is that the factors now assumed to have caused the Medieval Warm Period are not even present during the warming over the last 30 years. It seems that Montford hopes that merely casting baseless doubt will somehow undermine the hockey stick itself.]

It was a startling change and it was this that made the Hockey Stick such an effective promotional tool, although to watching scientists, the remarkable thing about the Hockey Stick was not what was happening in the twentieth century portion – that temperatures were rising was clear from the instrumental record – but the long flat handle. The Medieval Warm Period had completely vanished. Even the previously acknowledged ‘regional effect’ now left no trace in the record. The conclusions were stark: current temperatures were unprecedented.

[Wow, way to mischaracterize the initial paper. The results remain as true now as they were then, and they have been corroborated independently more than a dozen times. Also, as already noted, this paper did not do away with the Medieval Warm Period; it didn’t even cover the part of the record that included it.]

Interestingly, beneath the headline, much of the article was actually taken up with discussing doubts about the reliability of the study. One scientist quoted in the New York Times article wondered if it would ever be possible to get a temperature reconstruction that was reliable enough to tell if the current warming was unprecedented or not. Even Mann himself was quoted as saying that there was quite a bit of work to be done in reducing the uncertainties.

[Exactly: the scientists themselves were very conservative in their assessment of the meaning of their results, so why is Montford faulting them? He seems to be upset that the results were taken too seriously, and he blames the scientists for a conspiracy. You can’t have it both ways. If they were part of a climate change conspiracy, why would they be so careful not to state their case too strongly? The media may have overstated the case, but you can hardly blame Mann and colleagues.]

Again, we can only stand back in admiration that someone who had published his PhD a matter of a year or so earlier could be invited to head the team writing one of the most critical chapters in one of the most important scientific reports written for decades. Mann had certainly made an impact in the climate world.

There was one major problem with the case for the Medieval Warm Period having been an insignificant regional phenomenon though. This was the paucity of hard data to support the case – the ‘limited available evidence’ referred to above. It was simple for critics to point out that any conclusions drawn from this data would have to be highly speculative at best. Climate science wanted big funding and big political action and that was going to require definitive evidence. In order to strengthen the arguments for the current warming being unprecedented, there was going to have to be a major study, presenting unimpeachable evidence that the Medieval Warm Period was a chimera.


[This paragraph betrays Montford’s lack of scientific literacy in two ways. Firstly, and most blatantly, he shows a complete lack of understanding of the way academia works. Although Mann’s rise in prestige and accomplishment was rapid, it was by no means unheard of. His rise is not due to some conspiracy that wanted to elevate him in exchange for loyalty, as Montford seems to insinuate; he was just a very driven researcher who happened to be working on the right stuff at the right time. Mann is also a good writer, a definite asset for someone who rises quickly through the ranks. The second thing that betrays Montford’s lack of scientific literacy may seem a bit nitpicky, but no scientist worth his salt would use the word “data” in a grammatically incorrect way. “Data” is a plural noun. Montford has treated the word in this paragraph as singular, i.e. “this data.” Maybe he just has a poor editor. If this were the only place where he does this I might be inclined to accept that, but I will be keeping my eyes open to see if he uses “data” correctly elsewhere in the book.]