Please explain Climategate.

I have been reading about this lately but I haven’t been having a lot of success cutting through the religion on both sides.
Please explain what the implications are for the Global Warming debate.

Pithy dismissals where you get to reiterate your party line’s stance are the opposite of what this thread is asking for.

Both sides have charged the other side with conspiracy. One of the best defenses against any charges of conspiracy is the lack of evidence for a conspiracy: Historically, no large conspiracy has been able to continue for a particularly significant amount of time without evidence leaking. Hence, if there was one, you would expect such evidence to be out there.

The leaked emails may or may not constitute that evidence.

Of course, on the other side, there has already been such evidence for over a decade. And the leaked emails, from what I can tell, are hardly a slam dunk. They seem mostly to chronicle scientists’ glee in being able to act mean-spirited towards anti-AGW morons. Which, while unethical, is hardly unexpected.

In terms of actual effect, though, I imagine that this leak will worsen the situation for public belief in AGW. Most people will hear the accusations and assume that they are a slam dunk or they wouldn’t be in the newspaper.

In essence, some stolen e-mails have shown that certain prominent scientists have fudged data, tried to blackball their opponents, and have tried to hush up recent data that suggests global warming HASN’T been proceeding according to their predicted timetables.

What are the implications? Well, it goes without saying that:

  1. The yahoos of the Right will proclaim, "See? We TOLD you there was nothing to worry about, and the whole thing was manufactured by a bunch of lying, left-wing scientists."

  2. Liberals will insist that this changes nothing, that global warming is an established fact that only a few ignorant bozos deny, and that “Climategate” is a non-issue (the “real” issue is finding and prosecuting whoever stole those e-mails).
Me? I think the scientists in question are a lot like Mark Fuhrman. They have a good case and genuine evidence, but not enough for an open-and-shut case. So they’ve resorted to trickery and bullying to bolster a case they truly believe in.

Scientists at the CRU probably did, in some cases, change the data in order to produce statistics and graphs that were more favorable towards interpretations of man-made global warming. For example:

They also discussed ways to prevent articles that contradicted their viewpoint from being published, based not on the validity of the articles but rather for political reasons.

It should be noted that there’s no evidence of any of this affecting more than a small amount of the data from CRU, which is in itself one of literally thousands of research groups around the world that have confirmed the existence of man-made global warming. A rational response would not require any serious re-evaluation of the scientific consensus that man-made global warming is occurring.

My take on it is that it’s neither going to completely discredit AGW (as the ‘deniers’ claim) nor is it going to prove nothing (as the GW ‘faithful’ are saying), but settle out somewhere in the middle. What I HOPE it does is allow for more rational discussion, and less automatic dismissal of people who aren’t in lock step with the AGW theories currently being touted. Even if it turns out that they are wrong, the attempt to completely shut down the debate is unscientific and makes my teeth itch. Science is all about debate, and I don’t think that ANY scientific question should ever be shut down, or any line of valid research shunned, no matter how un-promising it seems at the time. Some of the greatest discoveries have come about from scientists who went against what ‘everyone’ ‘knew’, and ended up taking things in entirely new directions. Look at Darwin and Einstein, for instance.


Yes and no. Yes, there are other lines of research which show roughly similar effects. But there is a lot of correlation going on here. The way science works is that other papers build on the conclusions of previously peer-reviewed works. It’s certainly not clear to me how much of this material has ‘contaminated’ other findings. How many other papers based their original research on the background of data that came out of CRU? How many papers have been rejected in the peer-review process because their results differed from those derived by CRU’s published work? How many other models have been ‘tweaked’ to bring them into correspondence with CRU’s output?

I would agree that this doesn’t invalidate AGW. But how much does it add to the uncertainty around the numbers?

I think it’s reasonable for a non-partisan scientific review board to conduct a meta-survey of the literature looking for contamination caused by any bad data that has come out of CRU. That board should also re-examine papers that were rejected by journals that CRU and others were pressuring, perhaps setting up a new peer review process for those papers which appear to have some merit.

That review may find that the CRU shenanigans are isolated and have little effect on the consensus, or it may find that certain findings have been unfairly excluded from the literature, or that many of the other foundational works in the AGW field based assumptions on data that is now in question. I really don’t know. It seems worthwhile to find out, given the stakes.

Here’s the actual email:

To me it looks like people discussing data, how to work around inconsistencies in the data, and how to use what they know to resolve those inconsistencies, rather than cooking the data to match their expectations. I wouldn’t be surprised to see just such a thing in any area of research; doing so is in fact part of the job description. I would recommend reading the actual emails rather than the editorial commentary about them.

Heh, generally I remain agnostic on the Global Warming debate because I don’t feel at all qualified to have an opinion either way, and also because it just feels premature to me to think we know everything about the intricacies of climate, particularly since our accurate data doesn’t even go back a century.

But…get a juicy conspiracy, and I’m there dude. :wink:

Thanks for all of the answers, I am getting a better sense of it. I think I avoided the debate for too long and left myself ignorant on the particulars. I’ve always felt like the Global Warming debate distracted from the practical concerns of pollution control on a localized level.

A good middle-ground commentary has been written by Judith Curry, who is certainly a climate scientist and far from an AGW skeptic. An extract:

“While the blogosphere has identified many emails that allegedly indicate malfeasance, clarifications especially from Gavin Schmidt have been very helpful in providing explanations and the appropriate context for these emails. However, even if the hacked emails from HADCRU end up to be much ado about nothing in the context of any actual misfeasance that impacts the climate data records, the damage to the public credibility of climate research is likely to be significant. In my opinion, there are two broader issues raised by these emails that are impeding the public credibility of climate research: lack of transparency in climate data, and “tribalism” in some segments of the climate research community that is impeding peer review and the assessment process.”

Full comment here:

Regarding journal pubs:

I know that some journals will allow you to blackball certain reviewers. A friend of mine is in a theoretical fight with a researcher at another university. My friend has to request that this other researcher, and anyone who has trained under that researcher, be excluded when she submits articles. Before she did this, she would find that “reviewer 3” had nasty shit to say about her article, all focused on one of the underlying theories. It was obvious who was doing it, and why. It was also getting in the way of science.

Now - if you are an AGW skeptic researcher, how the hell can you ensure that your journal submissions will be read by someone with an open mind to your research, given the nature of these emails? And if you build too large of a blackball list, sooner or later the journals won’t bother with your submissions I would assume.

But how would they arrive at such a position of power, to enforce their views on the community at large? Might that not be a function of the consensus that has formed around the viability of the AGW science? For an orthodoxy to shut out criticism, it has to first be an orthodoxy, no?

True enough, “scientific” articles that criticize elementary Darwinism are likely to get short shrift. Is that a conspiracy on the part of anti-religious agents, or does it reflect the solid effect of an arrived consensus? A consensus of persons, I remind, trained to use and revere skepticism, with mad math skills to boot. So who fooled them? ACORN?

The editorial from Nature regarding this subject:

Well, one could look for evidence of actual bias or unjustified actions to stop contrarian publications, but there is very little evidence for that; one anecdote does not make this bias a reality:

That was the best showing of bias before the hacked-emails case, and even now I see that they have failed to find any evidence that the biases mentioned in the emails were acted upon. So far the few examples found point to just wishful thinking. Reprehensible, but wishful nevertheless.

The average person who has not picked a side will be looking long and hard at the folks who pushed Kyoto and are pushing Copenhagen. The best thing the scientists at HadCRU could have done was simply admit incompetence with data handling and storage, and admit that their people are not socially adept.

The fight they lost was letting the general public doubt their core competency, which was proving climate change, while generally sounding a lot like politicians who have been caught with their hands in the cookie jar.

As to the implications, I can’t say. I’d probably wait until the next round of funding comes up to see if anything has really changed.


For me there is an underlying problem with the work at CRU, one I find deeply troubling. In addition to the leaked emails, the source code and a long personal log by one of the software people have become available. This log makes very interesting, and disturbing, reading. A number of people have begun to read through and dissect the source code, and a lot of discussion of the personal log is also ongoing. This log is the Harry_readme log.

I spent some time reading the log, and read through some of the source code. I don’t share a lot of the more vitriolic feeling about the code or the state of play that some others do. I have been there and done that, and recognise a lot of the problems that beset the poor guy who was doing the work. But nonetheless, there is a very real and significant problem, one that seriously calls into question pretty much all the work that came out of CRU.

There was absolutely no trace of even minimal scientific safeguards on the work done. The computer programs were not subject to even the most minimal software engineering controls, and the science was not subject to any sort of checks. The story of the log is of a programmer, over a period of three years, trying to patch together a poorly documented, buggy and incomplete mess so that he could recreate the entire climate history that CRU had generated and, in an unexplained accident, lost. Lost. This was done to a deadline that had to be met to allow the next version of the data to be generated.

Nowhere in any of the story is there any suggestion of validation of the data models used. Ad-hoc numeric models are used to interpolate data, and often poor, ill-understood tools are used to perform other data processing steps. Much data is synthetically derived, and the whole system is of dubious stability. Yet there is no trace of any analysis being done to validate any of the processing. In a number of places in the log, processing steps are done that I would regard as very dubious, the only justification for some steps, or the parameters used, being that they seem to create good-looking data.
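For illustration only (this is not CRU’s code, and every number here is invented): one minimal validation that gap-filling by interpolation would normally get is a hold-out test, where you knock out points whose true values you know and measure how far the interpolated values drift from the truth. A sketch in Python:

```python
# Toy hold-out validation of an interpolation step. The "station record"
# below is synthetic: a slow trend plus a smooth cycle plus noise.
import math
import random

random.seed(42)

years = list(range(1900, 2000))
truth = [0.01 * (y - 1900) + 0.2 * math.sin(y / 5.0) for y in years]
observed = [t + random.gauss(0, 0.05) for t in truth]

# Knock out 20 interior points to simulate missing data.
missing = set(random.sample(range(1, len(years) - 1), 20))

def linear_fill(values, gaps):
    """Fill each gap by linear interpolation between the nearest
    surviving neighbours on either side."""
    filled = list(values)
    for i in sorted(gaps):
        lo = max(j for j in range(i) if j not in gaps)
        hi = min(j for j in range(i + 1, len(values)) if j not in gaps)
        w = (i - lo) / (hi - lo)
        filled[i] = values[lo] * (1 - w) + values[hi] * w
    return filled

filled = linear_fill(observed, missing)

# The validation step: RMS error of interpolated points against the
# known truth. If this is large relative to the signal, the filling
# method is adding noise rather than information.
errs = [filled[i] - truth[i] for i in missing]
rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
print(f"interpolation RMSE at gap points: {rmse:.3f}")
```

The point isn’t this particular filling method; it’s that the check costs a dozen lines, and the log shows no trace of anything like it.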

I have worked with similar code, with similar provenance data, and feel a great deal of empathy for the poor guy in question. I have also seen how very easy it is to have things go wrong in subtle ways, yet ways that can totally ruin the validity of the results. Hard won experience tells me that this work has not been done well enough.

From the programming point of view, there is the stuff we would expect in any professional programming project: requirements, code reviews, regression testing, white-box testing, black-box testing, acceptance testing, source code control. The reality? None of these.

Worse, from the science point of view: sensitivity analysis, validation with synthetic data sets, peer review of methodology. All missing. There remains the sneaking suspicion that the whole thing is sufficiently poorly executed, and ill-conditioned, that with the right choice of fiddle factors any desired result could have been created.

In many ways I feel betrayed. Indeed I felt quite angry working my way through a lot of this. As a scientist, and someone who has had to teach people some of these issues for a living, I find the whole thing leaving a bad taste. These guys are supposed to be better than this. We trusted them to be better than this. But they weren’t.

This isn’t going to go away, or at least it shouldn’t be allowed to. All the carping about petty spats in the emails is missing the point. This was poor-quality science. It was B-grade academic messing about masquerading as A-grade, international-quality work whose integrity the world could rely upon.

What it will do is call into question the quality of work from the remainder of the climate scientists involved. This is probably a healthy thing. The stir this has caused should be directed at ensuring a totally open scientific research effort. Hiding behind confidentiality agreements isn’t acceptable. The issue is one of global concern. Possibly the most important scientific issue of global concern we have. This has not been well managed, indeed it clearly has never been managed. And this isn’t acceptable.

Just being aware of the languages used in the science world, I never had any illusion that most or any of the coding done was going to be particularly good. Probably you could easily boost the output of most climate simulators by a few thousand times simply by rebuilding it all with a real programmer behind it.

But overall it doesn’t worry me, because that’s still the same standard by which all of our science is being done today. We’re sending ships to Mars based on it. And more importantly, there’s the output of several different climate simulators all coming to more or less similar results.

The basic math for radiative heating of elements and the ocean, absorption of CO2 by forests, and wind/ocean currents being pushed along are all fairly straightforward concepts. The code is big and ugly more because it’s poorly written code that’s been modified and added to successively for 20 years than because it’s theoretically complex. But because it isn’t theoretically complex, you should still be able to tell when things are pretty wacky.

I’ve seen plenty of ugly, old, mutilated code, further munged by intern hackers, that in spite of all odds miraculously does exactly what it is supposed to do. I refuse to take on the task of working with such code at a coding level, and I fully expect it to crash and die if blown on too hard, but I’m still willing to admit that it works correctly just so long as you don’t blow on it too hard.

The current state of all coding is scary. The fact that the world continues to turn every single day eases that fear to basic annoyance, and I live with it and don’t think about it. I just have to trust to the observation that against all odds, in the end, even horrible coders working on massively poor code just damned keep making stuff that gives the right answer.

I would have to disagree. I have seen exactly the opposite. Well-conditioned problems can be quite robust to bad treatment, but other problems can easily produce plausible rubbish. It isn’t just the poor coding practices either. The underpinning science is a worry. That is why I talk of sensitivity analysis. Nowadays the vast majority of programmers have never heard of numerical analysis as a subject area. Most would not know the meaning of machine epsilon. The trouble is, such programmers are not competent to code any numerical problem. But they don’t know enough to realise this.
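For readers who haven’t met machine epsilon: here are two standard textbook illustrations of the pitfalls being described (nothing to do with the CRU code specifically). The first finds the epsilon of IEEE 754 doubles; the second shows catastrophic cancellation, where two algebraically identical formulas give wildly different answers.

```python
import math

# 1. Machine epsilon: halve a value until adding it to 1.0 no longer
#    changes the result. Anything smaller is silently absorbed.
eps = 1.0
while 1.0 + eps / 2 != 1.0:
    eps /= 2
print(f"machine epsilon ~ {eps}")  # about 2.22e-16 for IEEE 754 doubles

# 2. Catastrophic cancellation: for tiny x, 1 - cos(x) subtracts two
#    nearly equal numbers and wipes out the significant digits. The
#    half-angle rewrite avoids the subtraction entirely.
x = 1e-8
naive = (1 - math.cos(x)) / x**2            # cancels to 0.0 in doubles
stable = 2 * math.sin(x / 2) ** 2 / x**2    # ~0.5, the correct limit
print(f"naive  = {naive}")   # 0.0, completely wrong
print(f"stable = {stable}")  # ~0.5
```

A programmer who has never seen this effect will happily write the naive form, and nothing in the output flags that it is garbage, which is precisely the worry about untested numerical processing chains.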

The guys that write the software for space missions run to a very very different standard. Their coding regimes make earthbound safety critical systems look sloppy. And even they get it wrong from time to time.

Minor nitpick, totally off topic: Darwin didn’t go against what everyone ‘knew’: by the time Darwin got in the game, a lot of people, including a fair number of scientists of the day, already believed in evolution. But nobody before Darwin had a good theory explaining by what process or mechanism evolution took place.

That’s part of the problem. With ‘regular’ code, you know what the results should be, and use them to verify that the code is correct. You can’t do that with a climate model, because you don’t know what the results should be.

But the climate scientists fell into a circular-logic trap. They expected warming, wrote a bunch of code to conform to their expectations, and then tried to use that code as evidence that their expectations were right.

But the fact that the code matched their expectations doesn’t mean it’s correct, unless they can verify their expectations via other means.

You are falling for the say-so of the ones benefiting from the theft of the emails.

You are forgetting that informal communications among scientists and researchers would be the last place I would look for validation. When the context is missing, you allow contrarians to control the discussion unfairly, and in the end you are supporting their unethical efforts.