How will global warming play out?

60 Canadian scientists recently wrote an open letter to the Prime Minister of Canada saying “If, back in the mid-1990s, we knew what we know today about climate, Kyoto would almost certainly not exist, because we would have concluded it was not necessary.” Do you think they believe in the climate models?

Dr. Claude Allegre, a leading French scientist who is a member of both the U.S. and French National Academies of Sciences, recently announced that he no longer believed in AGW, and now says the cause of global warming is “unknown.” Do you think he believes in the climate models?

Dr. Chris Landsea recently resigned from the IPCC, saying “After some prolonged deliberation, I have decided to withdraw from participating in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). I am withdrawing because I have come to view the part of the IPCC to which my expertise is relevant as having become politicized. In addition, when I have raised my concerns to the IPCC leadership, their response was simply to dismiss my concerns.”

The field of climate science is in total disarray. A lot of people have both their livelihood and their professional reputation tied up in supporting and sustaining the AGW agenda. If there is no significant AGW, they’re out of a job. As Dr. Landsea noted, much of what passes for climate science is simply politics.

I don’t see scientists whose jobs depend on the existence of AGW saying their findings are “uncertain”, as you claim. Nor do I see them admitting the “incompleteness of the existing climate models”. A more typical reaction is that of Santer et al. When their study found that the model results differed from the UAH lower troposphere temperature data, and the RSS lower troposphere temperature data, and the radiosonde lower troposphere temperature data, their conclusion was that either “models fail to capture such behavior, or (more plausibly) that residual errors in several observational datasets used here affect their representation of long-term trends.”

Right … the models disagree with three different sets of observational data, so the most likely explanation is the data is wrong …

Here’s another example. We all know that the models are adjusted to duplicate the historical record by “tuning” various parameters. This is acknowledged by the modelers as part of the standard practice.

Now, once you have a model with certain specific forcings (CO2, volcanoes, whatever) tuned to match the historical record as best it can, it is obvious that removing one of those forcings will make the match less accurate. After all, it was tuned with that forcing included, so removing it will make the model forecasts not fit the historical reality.

Does this prove anything about the particular forcings used in the model? Absolutely nothing at all, because it is quite possible that 1) not all relevant forcings are included to start with, 2) the model may not be correctly replicating the effect of any or all of the forcings in the real world, 3) the model (which has not been tested by V&V and SQA) may be getting realistic looking results simply by error, and 4) the model has not been re-tuned with the reduced forcings to match the output to the historical record.

Despite this, it is quite common to see graphs that claim to show that CO2 is needed to explain the historical record using this exact method.

I hardly think that shows that “scientists are aware of the incompleteness of existing climate models”. In fact, it shows blind faith in the models, and a complete lack of understanding of the implications of the fact that the models are not based on “first principles”, but are tuned to match the historical record.

You seem to be suffering from a mistaken belief that the majority of scientists can’t be wrong. History provides ample examples of this being the case. Here are a few:

S. Chandrasekhar - Black Holes.
Christian Doppler - The Doppler Effect.
Luigi Galvani - Bioelectricity.
William Harvey - Blood Circulation.
Galileo - Copernicanism.
Copernicus - Heliocentrism.
Robert Goddard - Rockets.
Barry Marshall - Ulcers caused by bacteria.
Barbara McClintock - Transposons.
John Newlands - pre-Mendeleev Periodic Table.
Georg Ohm - Electrical Resistance.
Louis Pasteur - Germ theory of disease.
Stanley Prusiner - Prions.
Alfred Wegener - Continental Drift.

In each of those cases, you could have made the same comments that I find you and others making, that the science was settled, that everyone agreed, that there was a “consensus”, that they all believed in the “climate models” or whatever the subject was, that the scientists couldn’t be wrong … but science does not depend on consensus, or on how many scientists think that climate models are accurate enough to use.

I’ve said it before, and I’ll repeat it here. You wouldn’t fly in an airplane if the software hadn’t been tested. The testing, while complex, is not particularly arduous, expensive, or time-consuming. That’s why it is routinely done on every mission-critical piece of software that we use – in airplanes, submarines, moon shots, subways, missiles, train control systems, air traffic control, and all the rest of the situations where either human lives or large amounts of money depend on the software being correct.

So why are people so reluctant to test climate models that are dealing with what you seem to think is one of the most critical questions of our time? jshore says oh, it’s not usually done in science, which is true. But a mistake in a scientific computer program modeling, say, the dispersal rates of termites in Angola is scarcely mission critical. According to you, the climate question is most definitely mission-critical, and the models should therefore be tested. Until they are, anyone who trusts them has a whole lot more faith in computers than I have … and I have been programming computers since 1963, and have written computer programs for a living.

I know from bitter experience that it is quite possible to get a reasonable looking answer simply because there is a mistake in the program … and that the mistake often doesn’t become visible until you try the program on some new data.

I also know that non-linear chaotic systems, systems containing turbulence, are the most difficult systems to model, and that the climate system is the most complex non-linear system humans have ever tried to model. You can believe in the models if you want, but me, I’m a realist … I look at the range of forecasts provided by models, ranging from 1°C of warming in the coming century to 8°C of warming, and my response is “not ready for prime time”. The range of results alone shows that we don’t understand how to model the climate or forecast the future. It is a grade-school response to this situation to say “let’s average them, and take that as the answer”. An average of bad data is still bad data.

w.

I think you are mixing in two different issues here. One is that there are various emission scenarios that are created to represent the fact that we don’t know what future course our society will take in terms of its growth of emissions. This depends on choices we make and is thus hard to constrain probabilistically. In other words, it is largely in our own hands. Second is the uncertainty in the climate sensitivity, often expressed as being somewhere around 3 deg C for a doubling of CO2, with plus-or-minus 1 or 1.5 deg C. While it may be true that the IPCC did not attempt to assess the probability of these different sensitivities, there have been some papers that have attempted to do so. For example, James Annan is one person who has been interested in this issue.

I am not sure exactly what you are speaking of here. The models have been, for example, run with perturbations to the initial conditions. And, the climateprediction.net experiment, in addition to doing this, has also been varying parameters in the models over various ranges. These are just two examples that I happen to know about. I imagine someone more up in the field could give other examples.

Well, I admit that this is a more complicated problem than simple fire insurance. However, by constraining our emissions now…and putting the incentives on the market to develop lower emissions technologies (be they energy efficiency, renewable energy technologies, or carbon sequestration)…we are insuring against more catastrophic outcomes later, and against having to spend much more serious amounts of money on draconian measures to prevent them. See for example this article: To Hedge or Not to Hedge Against an Uncertain Climate Future, Yohe et al, Science 306, 416 (2004).

I noticed that you didn’t say “Canadian climate scientists”, which means that your statement is untrue only in that many of the signers were not Canadian. In fact, they had to cast their net far and wide to dredge up 60 names…and even then they had one renounce the statement later. And, 90 Canadian scientists who were actually Canadian wrote a counter-letter.

And, here are links to lists of scientists who support creationism. [I am not saying that things in climate science are nailed down exactly as well as evolutionary theory…but the basic point is that you ain’t ever going to get unanimity.]

First of all, Santer et al. clearly state in the abstract that there is one data set that does agree with the models. And, they are arguing that this is more likely to be the most correct one. As you know, in fact, the history of the lower-troposphere temperature analysis is that those satellites were not made to follow long-term temperature trends, that there are various subtle corrections that have to be made, and that the data has thus repeatedly turned out to be wrong! The UAH group of Spencer and Christy has had to make correction after correction to the data. And, you fail to note that about a year after the Santer paper was published, the U.S. Climate Change Research Program (formed by the Bush Administration) released their first report which basically noted that the data and models were now reconciled on a global scale and, while there remains some disagreement (depending on the data set) for the tropics, the committee deemed it more likely a problem with the data than with the models. And, Christy was even on that committee.

Your description here is really way too simplistic. For one thing, there are many, many more degrees of freedom than there are parameters…and many of the parameters are basically tightly constrained. For another, even when they have tried to play various games to try to make the natural forcings produce the late 20th century warming, they’ve found that it can’t be done, simply because the net forcing over the last 30 years from natural factors is basically nil…if not negative.

I fail to see what this list shows. In many of the cases, what you are talking about is a new theory that came along to explain known deficiencies in the existing science.

Also, for every Pasteur and Wegener and Galileo, there are probably a thousand people who really are crackpots. So, I don’t see what your list tells us in terms of how to apply science to policymaking. Does it tell us to never let the science impact policymaking because the science may be wrong? What exactly is your prescription? You seem to be arguing that we can’t take action until we are more certain. However, why not instead argue that we can’t chance emitting any more greenhouse gases until we are more certain? Of course, the intelligent thing to do, as the Science article that I linked to above on hedging discusses, is to start ramping up a price tag on carbon emissions.

Yes, and I have actually been doing computer modeling in the physical sciences since the early 80s. And, you are missing my point which is that the way you propose to test the models is simply not a useful way to test models in the physical sciences. It is a technique that is appropriate when your software has a very specific, limited scope and mission. It is not appropriate when your software is modeling a complex physical system, trying to get a handle on a broad range of questions.

Did you read the Santer paper, or just the abstract? Despite their claim in the abstract, for the most part their results disagreed with all three datasets, and only occasionally agreed with one. Take a look at their Figure 2, where in every case, the model results were the outlier. Also take a look at the range of the different model results in Figure 2, from the floor to the ceiling. I suppose they think that means that the model results agree with the data, but all it means is that the model results don’t even agree with each other, much less with the data. This disagreement with the data was even worse in the Santer 2006 paper.

Regarding the Climate Change Science (not “Research”) Program report, you’ve changed topics without mentioning it. We were discussing the disagreement between models and data regarding the tropospheric vs surface temperatures. The report did not say the model’s surface temps vs troposphere temps were agreeing with the data, nor did it say that models and data were “reconciled at the global scale”. It said:

If the models give a range of results … then what does “no fundamental inconsistency” mean? They note that about half the models show the troposphere warming more than the surface, and the other half show it warming less … which means, of course, that half of them are wrong. Surely that’s a “fundamental inconsistency”.

Finally, regarding my previous comments about the danger that these types of reports don’t represent all views, the New York Times said:

Your comment about the UAH results, that “The UAH group of Spencer and Christy has had to make correction after correction to the data.”, only reveals that you don’t understand how science works. The value for, say, the mass of the electron has had “correction after correction”. Does this mean that the current value can’t be trusted? Quite the opposite.

Spencer and Christy were the first people to derive global atmospheric temperatures from MSU data. Did you expect their initial efforts to be perfect and error-free? Spencer and Christy’s work has been subjected to intense scrutiny, as it should be, and each correction has made it more accurate, not less. It is notable that the errors which have been found have been smaller than Spencer and Christy’s confidence intervals for their data. A full list of the corrections that they have made to their data is here, and is an interesting example of science at work.

I’d like a cite that the parameters are “tightly constrained” and a cite that they’ve “tried to play various games” to make the natural forcings fit, and a cite that all the natural forcings have been included. For example, it has recently been shown that cosmic rays affect the clouds and thus the climate. As far as I know, no climate model includes that forcing, and it is not the only forcing that is not included in the models. Despite its huge role in the CO2 and methane cycles, most models don’t include any of the natural biosphere forcings. Half of the models in the Santer study didn’t include the sun as a forcing, for heaven’s sake. Two-thirds didn’t include land use changes, three-quarters didn’t include sea salt (the main cloud nucleating material over the ocean). What kind of “games” could they have played with forcings that are not in their models?

And of course, if they can’t make it fit with natural forcings, this may indicate nothing more than that the model isn’t working … which, since you think the models aren’t worth testing, is hard to determine.

Whether there are fewer parameters than degrees of freedom is not the question. Regarding the number of parameters, Freeman Dyson talks about his meeting with Enrico Fermi:

Parameters are very tricky, and need to be handled with great care. Is this being done in the models? Umm … we don’t know, they’re untested.

I told you my prescription, to test the models so we can find out if they are reliable, and find out which ones work better than others, and why they work better. Until then, we’re flying blind.

The list tells us that we should not be lulled into thinking that the consensus in science is always right. When there is significant disagreement with the “consensus” by a number of scientists, not “crackpots” but eminent scientists who teach climatology at places like MIT and UAH and Colorado State (which has a renowned climatology program) and who are well-known for their achievements in the field of climatology, we should pay attention to that disagreement and not assume that we know what we are doing.

You say we “can’t chance emitting any more greenhouse gases”, but you still haven’t presented anything other than model results and simplified grade-school physics to back up your claim. But even supposing you are right, do you have a plan that can significantly affect the emission of greenhouse gases? If we could do it at a reasonable cost, yes, we should. But Kyoto can’t do it even at a huge cost; we’ve proved that, since the countries that signed up haven’t come anywhere near their goals despite spending billions of dollars. What is your brilliant plan? Please give details of such things as what effect it might have on global temperatures. “Ramping up a price tag on carbon emissions” won’t touch most of the world’s carbon emissions, and will have little effect on temperatures, but it will have a big effect on the global economy. You might not think that’s a bad thing … but then you’re not at the bottom of the economic pile.

I am not proposing some new or unusual “way to test the models”. There’s only one way to test computer software, and it’s the same whether you are testing science software, airplane software, kid’s games, modeling software, or Mars mission software. You root through the pile of code and verify the underlying assumptions, you make sure the equations converge, you check to see that the code does what it is claimed to do. You look to see that the version being used has a procedure to document changes, and that the people running it know what they’re doing. You make sure there are no stupid arithmetic errors. You see how it handles exceptions and out-of-bounds values. You verify that all sections of the code are actually called at some point in the program. You make sure that your parameters have physically reasonable values. You look at error propagation through the system. You diagram the logic and make sure it’s logical. It’s not rocket science, it’s called V&V and SQA, and it’s the same no matter what kind of software you are testing.

The only difference is how rigorously you test your software. If the results of your models don’t make much difference, you don’t test it much. No one cares if a kid’s game has an obscure bug that gives wrong answers once in a while. As long as it’s flashy and runs most of the time, you’re good to go.

But if human lives or lots of money depend on software, you test it to the limit of your abilities. Your intransigent unwillingness to test the software that we are basing billion dollar decisions on speaks volumes about whether your beliefs are actually based on science or faith.

w.

Here’s a further example of our lack of understanding of the climate system, and why we need to find out what the problem is before we spend billions trying to fix it. From NASA, emphasis mine:

Do you see why I say there is no consensus, and that the science isn’t settled, and that the climate is not understood well enough to make multi-decadal predictions? This isn’t “crackpots” disagreeing with “real scientists”, this is NASA discussing the limits to our knowledge of climate. If this study is correct, we could spend billions slowing CO2 emissions and not make much difference at all to the climate … I don’t think our grandchildren would thank us for that. In the Santer study, only 5 of the 19 climate models even considered land-use changes, and we have no idea whether their considerations were correct …

w.

I just wanted to say that the peanut gallery here is following this discussion avidly…keep up the great debate jshore and intention! Though I can’t keep up on the technical side, a lot of what intention is saying jibes with what my friends at the labs here say to me over beers…especially his point about the models and how really solid the science is on this thing.

Keep it up guys! This has been a VERY good debate about GW! FWIW, kudos to both of you.

-XT

You have misunderstood the Santer paper. For one thing, ALL observational data sets and models agree that in the troposphere there is amplification of month-to-month temperature variations that occur at the surface. They note that this is not surprising since it is predicted by quite simple theoretical considerations. Unless some not-understood new physics intervenes when one goes to lower frequencies (i.e., looks at variations over larger times), one would expect to see the same pattern on multidecade time scales. In the models, one does see this robustly. However, it is not seen in most of the data sets (but is seen in one). That is the primary issue they are discussing.

It is also strange that you are so highly critical of this paper also since parts of it have already been vindicated. For example, they note that the UAH multidecadal trend in the tropics is particularly unbelievable…and, in fact, soon after this paper was published, the UAH group of Spencer and Christy admitted that they had screwed up a correction. Since this correction was particularly important in the tropics where it changed the trend significantly, one might expect that their data is now less anomalous (although I don’t know if anyone has attempted to check that).

Furthermore, the conclusions of the Santer paper are closely echoed by the report of the U.S. Climate Change Science Program (thanks for the name correction) when they say (in regards to the tropics specifically):

Since this report was put together by a group that includes Christy and it says basically what Santer et al. said, are you damning this whole report too?

Where to start?

(1) If you look at your link, you will find that in fact the claim of staying within the confidence intervals refers only to the fact that version 5.2 was within the confidence intervals of 5.1. The v5.1 value was 0.088 ± 0.05 C per decade whereas v5.2 was 0.123 C per decade, and it appears to have drifted up a bit more since then. See, e.g., update 5 Dec 2006. If you go back in that file, in 2001, their claimed trend was 0.044 C per decade. They don’t list the confidence intervals, but unless they were a fair bit larger than their confidence intervals in v5.1, then the trend is now outside what the confidence intervals would have been then. And, of course, if you go back further to when they had not yet taken into account orbital decay, their trend was actually negative. Skeptics cited this fact for years (many for years after it had been corrected, in fact) as evidence that warming was not occurring throughout the troposphere. Yes, Spencer and Christy’s current estimate is more accurate than their previous ones. However, there is no reason to believe that it won’t continue to trend upward in time as it seems to have done consistently, which would put it closer to the RSS result. And, in particular, there is no reason to believe there are not some other issues with the results in the tropics.

(2) You seem to have a wee bit of a double standard. When a scientist agrees with AGW, you dispute every utterance, but when they are skeptics they can do no wrong. You hold up Spencer and Christy as an example of “science at work” even though their original work had the field believing for many years that there was a conundrum that turns out not to exist, a conundrum that “skeptics” exploited relentlessly. On the other hand, you are allied closely with a website that has been dedicated to vilifying the work of Michael Mann et al., whose original paper, subject to so much scrutiny, is now almost 10 years old. And, while it may have been sloppy on a few mathematical details, it has had most of its important findings largely confirmed by other independent studies using different techniques. Hell, Mann et al. even had to put up with a Congressional witch hunt by Rep. Barton. Although I may be a bit critical of Spencer and Christy’s work, I would never suggest that Congress ought to launch a witch hunt and ask them to provide lots of obscure information! I prefer to let the normal process of science take its course without having Congress try to weigh in on the matter.

I understand the Santer paper quite well, jshore, and have studied and written extensively on it. I’ve looked at the individual model results, checked them for inter-quartile ranges, trends, and outliers, I’ve analyzed their first differences, compared them to the data in a variety of ways, graphed the conclusions, calculated the auto-correlation to see if the conclusions were valid (they weren’t, but that’s another story), examined the heteroskedasticity, skew, and kurtosis of the models and data, tested them using the Jarque-Bera test for normality, calculated their Durbin-Watson statistic … and unless you can say the same, I fear that you may not understand the study.

Those models perform abysmally at emulating even the simpler, lower-order behaviour of the tropical ocean. They warm up too much, or too fast, or too slowly, or not enough. They continue on a warming or cooling trend far past what the ocean does, or they go from warming to cooling in a short time, in a way never seen in nature. In short, they do not pass even the simplest tests that would allow us to claim that they are “modeling” the real world — to successfully model the real world, your model has to act and react and respond like the real world, and Santer’s models do not do so.

You want to believe them, and since you think models shouldn’t be tested, I suppose you see no reason not to do so. Me, I prefer my models to be lifelike, so I tested them, and they aren’t lifelike. Their results disagree with each other, often violently, and their results are nothing like the real world. Because of this, the study is useless.

Take a look at figure 2, like I asked. Even with the correction, the UAH results are still below the RSS results, Santer is still the outlier, the datasets agree with each other and not with the models, and the range of the models still goes from the floor to the ceiling. The UAH results have moved up slightly, but that’s the only change. When some of the models show extreme warming, some show extreme cooling, some show moderate warming, some show moderate cooling, and some say no change … at that point, Santer’s and your claims that the models somehow are right and the data is wrong rings hollow.

In any case, whether the UAH data is correct is not the main point. The main point is that the models disagree with each other and with the data. If they all said the same thing, and the data was different, I’d consider that the data might be wrong. But they don’t. One says huge warming, one says huge cooling, the rest are scattered in between. From that huge spread, we’re fools to draw any conclusions at all.

I already discussed this report. Pielke resigned because he felt that it was too biased. Christy stayed on, and pointed out the limitations of this report. Have you been following this discussion?

Regarding the UAH data, you say:

Ummm … jshore … you say the current trend is 0.123°C per decade, and the trend in 2001 was 0.044°C per decade, as though you expect them to be the same. Trends change with time; it proves nothing that the trend over 1979-2001 was different from the trend over 1979-2006. Pick up a good statistics book and read about trends, confidence intervals, and margins of error.

There are two issues here: confidence intervals, and the published margin of error. The error found by RSS in the UAH data was within their published margin of error, meaning how much further error they expected to find in their calculations.

Confidence intervals, on the other hand, are a statistical measure of how accurate a measured trend is. Especially in short datasets such as the MSU data, our estimate of the trend at any given moment is uncertain. This has nothing to do with whether there are errors in the dataset, or the expected size of those errors. It has to do with estimating from a small sample.

If we only average 10 people’s weight, will that give us the average weight of all the people in the world? No. But if we average 100 people’s weight, we’ll be closer, and if we average 100,000 people’s weight, closer yet. A confidence interval just means that we can have, say, 95% confidence that the average weight of everyone is 150 ± 30 pounds if we weigh 200 people. It is a measure of the accuracy of our average (or our trend line in this case), and depends in part on how much data we have.

Next, you say “The v5.1 value was 0.088 ± 0.05 C per decade.” I fear this is meaningless. A trend only has a value over a particular interval, generally but not always measured from the start of the dataset. There is no “v5.1 value” for a trend, there are only trends over specific intervals.

Your whole argument, in short, makes no sense.

I bust people who don’t believe in AGW as hard as I bust AGW promoters. I don’t believe much of what anyone says these days, science has become too politicized and polarized to put much trust in anyone’s work.

Spencer and Christy’s work, while it contained minor errors that have been corrected over time, still shows less warming than the ground-based record. As you point out, that’s what people pointed out years ago, and it’s still true. The conundrum has not “ceased to exist” as you claim; the discrepancy between the values and the variations of ground-based temperatures and lower-tropospheric temperatures is still of interest. That’s why Santer unleashed his models on the question, because the conundrum still exists.

Next, Michael Mann’s work (on the famous “hockeystick” reconstruction) was not the issue that got him an invitation from Congress. It was his refusal to reveal his results and his methods. He even had the balls to claim that the normal scientific practice of asking to see how he came to his conclusions was “intimidation”, and refused to divulge what he had done or how he had done it. He eventually had to issue a Corrigendum to Nature magazine to try to plaster over his egregious actions. Like you, I much prefer to let the normal scientific practice work — but when a man thinks that the normal scientific practice is “intimidation”, more drastic measures may be needed.

Next, Mann has not had a single one of his findings “largely confirmed”. The issues were discussed in the U.S. National Research Council panel on Surface Temperature Reconstructions [North et al 2006 or the “NRC Panel”], by the Chairman of the U.S. National Academy of Sciences Committee on Applied and Theoretical Statistics and associates [Wegman et al 2006], and in the exchange in GRL between Huybers and McIntyre and McKitrick (Huybers 2005; McIntyre and McKitrick 2005d - Reply to Huybers). In all cases, his work was found to be badly flawed, and his main critics (Steve McIntyre and Ross McKitrick) were found to be correct.

The NRC Panel specifically endorsed and agreed with McIntyre and McKitrick’s key criticisms of Mann’s work:

• of the errors in the MBH principal components method (p. 85, 106);

• of the inappropriate reliance on bristlecone pines as an essential proxy (p. 50, 106, 107);

• of the inappropriate estimation of confidence intervals (p. 107);

• of the failure of the MBH verification r2 statistic (p. 91, 105).

If you have any issue with these findings, please explain for us how they are wrong. In other words, the Panel found that Mann’s method was flawed, that it relied on faulty proxies, that it did not correctly estimate confidence intervals, and that it had no statistical significance.

The Wegman Report said (emphasis mine):

Subsequent studies that have claimed to “confirm” Mann’s work have relied on variations of his methods, and have used the same bristlecone proxies condemned by the NRC panel.

Stick to a subject you know better, jshore, you’re way out of your depth regarding Mann. He lied about what he did, not just made errors, but lied about it, and then tried to conceal his work to cover his tracks. That’s why Congress had to get in on the act.

w.

Well, it is perfectly obvious that intention is an old-time programmer who has worked on computer modelling, and who has also taken a good look at the climate modelling area.

I am an old-time programmer, also an economist, and have been forced to do a few prediction systems. When I hear the words ‘computer model’ I reach for my bullshit swatter.

I am also very alarmed when I see anyone relying on ‘computer predictions’.

Our current situation is that we have a very complex entity, let us call it ‘Earth’, being blasted with an enormous amount of energy from another entity that we will assign the name ‘Sun’. ‘Earth’ is emitting CO2 and methane as if it had been on a diet of solid Vindaloo for the past few weeks. We contribute a bit.

According to our astonishingly inaccurate measurements, in the last two seconds (which is about the relative time since the last ice age) we have observed a number of falls and rises in temperature - and it looks as if a fairly large bit of ice is melting, but on the other hand we don’t really know if it is - it might just be returning to normal.

We have a bunch of people who are not going to give up pumping out CO2 (let us call them ‘India’ and ‘China’), and will probably develop a taste for beef, so we will have even more herds of deadly methane producers.

So our answer is to set up ludicrous trading systems for around 1% of what is perceived as the problem, allow our glorious leaders to tax us yet more, and provide research grants to a bunch of ‘scientists’ that I would not trust to run a lemonade stand.

The rational approach would be to calculate the worst possible rise in sea levels (we can handle a balmier winter) and start looking at setting up critical infrastructure on higher ground. We could also train up a good bunch of civil engineers and ship them out to Holland to see how they cope with things.

Finally, computers do not model things. People do. A computer model is no better than the sum total of a load of assumptions by people who cannot program, turned into code by people who are sufficiently cynical not to disabuse the ignorant.

How I laughed at the ‘Flat Earth’ missile tracking system. That one was a classic.

Well, I am very impressed. So, why don’t you submit your results to a journal for publication? In the meantime, what we have is a paper you are attacking whose basic conclusion has been echoed by the report of the U.S. Climate Science Program. [The fact that one member with a large ego resigned from this committee does not invalidate the entire report, as you seem to believe.] So, what it comes down to is this: do we believe a report written by experts in the field, including representatives of both the UAH and RSS data analysis groups, or do we believe some guy on a messageboard who is not even in the field, let alone an expert in it?

Repeating a lie again and again does not make it true. I have never said that the models shouldn’t be tested. I merely said that imposing a bureaucratic testing framework that is unsuited for testing of models in the physical sciences is not useful and is also unprecedented. I think the scientists working in the field can individually and collectively decide how best to test their models…and how best to compare the many different models that have been independently written and tested.

Also, from working on modeling in the physical sciences for the last ~25 years, I understand that models are never perfect and can always be improved but that doesn’t make them totally useless. You seem to have the attitude that if you can find any way in which the model results differ from reality, that invalidates using the model for anything. This is, of course, a recipe for perpetual inaction…which is really your whole agenda anyway.

You have quite a bit of nerve, I must say. Recall that it was you who made the claim, “It is notable that the errors which have been found have been smaller than Spencer and Christy’s confidence intervals for their data.” I just asked you to back up that claim because, while I admitted that the evidence in their file listing the updates that you linked to did not make it possible to definitively show it to be wrong, it certainly made it seem rather unlikely that it is true. In order for it to be true, either they must have had huge confidence limits on earlier versions of their results, or adding the 2001-2006 data to the 1979-2001 record must have significantly increased the trend (which would imply not only that warming has continued apace, contrary to Lindzen’s claim, but that it has accelerated quite a bit). And, as I noted, when you go back further than 2001, even the sign of their trend has changed.

Maybe so…but I have seen no evidence of this. And the next sentence of yours that I quote is a counterexample to your claim.

I love how you call Spencer and Christy’s errors minor! The errors changed the freakin’ sign of the result. I.e., they originally claimed cooling. And, now they claim so much warming that their confidence intervals include the possibility that the troposphere is warming just as much as the surface temperatures. Are you claiming this dramatic shift is all or mainly due just to having a longer record for getting the trend line over?

As for the conundrum ceasing to exist, the discrepancies are now so small that they are within confidence intervals…which is why the U.S. Climate Science Program report concluded that on a global scale, there is no discrepancy. (In the tropics, they noted there still is some discrepancy of statistical significance but that they judge it more likely to be due to remaining errors in the data than to deficiencies in the models.)

There is really no substance for me to respond to here. You continue to make unsupported claims and to say that you don’t trust scientists and computer models. Fine, if you want to go back to the Dark Ages, be my guest. Just don’t expect a hell of a lot of us to follow you there.

Some adaptation is necessary. However, others who have actually looked into this believe that we must engage in mitigation too. And, it is rather ironic that you talk about Holland given that they, with about the most experience in dealing with sea level issues, are in fact one of the countries pushing hardest on the issue of climate change. They are not saying, “Let’s not worry about it because we can adapt just by building more dikes.”

Yeah, I must admit that throwing out Michael Mann’s name to a ClimateAudit regular is sort of like throwing raw meat to a dog. And, I am sure you have spent way more time than me going over it as it has become somewhat of a fixation with the ClimateAudit crowd.

This is one group’s interpretation. The question goes to the issue of just how much time and energy one has to invest with people who want to know all the gory details of what you did and whether you are required to make your actual computer code available or just explain the algorithm. (The opinion of the National Science Foundation on this matter, by the way, is apparently pretty clear that code is actually intellectual property. However, in the aftermath of this, I believe Mann has decided not to stand on that principle any more although he did for a while.) Other groups did manage to reproduce what Mann et al. did without having any special access. And it is worth noting that I believe there are claims by the RSS group that Spencer and Christy were not completely forthcoming either.

In regards to your statement of the NRC panel’s conclusions, there was in fact something for everyone in that report, but I think you are missing the forest for the (bristlecone pine) trees when you portray the North report the way that you do. [And, the Wegman report is of little relevance as it just addresses a very narrow statistical issue that does not actually have a significant impact on the results, as was asked of Wegman and co. by the Congressional Republicans.] By the way, as a general point, your references to pages in the NRC panel’s report don’t seem to correspond to the version online…maybe it was an earlier version. Just so everyone is on the same page, here is the report.

I agree with you that the NRC panel had some specific criticisms of Mann’s work, such as the confidence intervals being too small. This is, of course, not surprising. As you noted in reference to Spencer and Christy (and I paraphrase here), science proceeds by steps, and Mann et al.'s first paper, while pathbreaking, was not perfect, and subsequent papers by them and others have improved upon it.

However, overall, the NRC panel endorsed Mann’s main finding except with less confidence (and less confidence that they could even make a confidence estimate). In the summary they state:

They then go on to say how the evidence is very good back to 1600 A.D., but the uncertainties in the proxy data are larger before that, and that is what limits their confidence.

They also add the important point that:

Well, first of all, since the issues with Mann’s method were largely in the details of the way it was implemented, this statement that they were variations of his methods does not mean that they necessarily suffer the same problems. Furthermore, as is noted starting on p. 113 of the report, Osborn and Briffa have used a completely different technique of analyzing the proxies and arrive at the same conclusion as Mann et al. regarding the anomalousness of the late 20th century. I believe that their technique has also been shown to be robust to the removal of up to 3 of the proxies they use (no matter which 1, 2, or 3 are removed).

In regards to the bristlecone pine proxies, the NRC panel notes on page 116 that the Moberg reconstruction does not use tree ring network proxies at all for century and larger-scale changes. (And, by the way, I never found where in the report where they “specifically endorsed and agreed with the criticism of the inappropriate reliance on bristlecone pines as an essential proxy”. I would say that they mentioned possible issues with bristlecone pines and agreed that it was an important area for further research.)

Look, temperature reconstructions from proxy data are difficult and there can be no doubt that Mann et al. was much closer to being the first word on the subject than the last. It is worth noting that the 2001 IPCC report labeled the Mann claim that the late 20th century was the warmest period in 1000 years as “likely”, which in their parlance is defined to mean an estimated 66-90% probability of being correct. I don’t think all of the discussion and additional data that has been generated since would significantly alter that conclusion…some, such as the specific issues raised in regards to Mann et al., might tend to lower the confidence, while the additional reconstructions would tend to raise it.

By the way, it is worth noting that even fellow Republican Congressman (and Chair of the House Science Committee) Sherwood Boehlert felt that the investigation that Barton initiated was intimidation and was not justified by the facts:

And, this was from a fellow Republican (although admittedly one of the less ideological ones).

jshore, since you ask, my advice would be don’t believe anyone in the climate science field. Look at the evidence yourself, and make up your own mind. As you know, the whole field of climate science is incredibly polarized and politicized. As such, the claims from either side are extremely suspect. You’re a modeler … why do you believe a group of models whose results vary so widely between the models? Take a look at Santer’s Figure 2E, for example. The three datasets (GISS, HadCRUT2v, and HadCRUT2v subsampled) are all in quite close agreement, showing a warming of 0.1°C/decade.

The model results, on the other hand, range from about four times the observed warming, to actually showing cooling. Now, without referring to scientific committees or refereed journals or distinguished scientists on either side of the aisle, you’re a modeler — would you trust those models?

My apologies, you’re right, you didn’t say we should test the models, you just said we shouldn’t test them rigorously … however, in a mission-critical environment, that is the same as not testing them.

You say, let the scientists decide how to test their own models … I suppose we should let the stock traders audit themselves too, and hey, why do we need the GAO to investigate government agencies, they can do the job themselves … I fear you have more trust in people than is justified by either history or everyday experience.

Also, it’s a mystery to me why you think the usual, normal V&V and SQA method of testing software, a method that is taught in colleges, that is a field of study in its own right, and that is used for all mission-critical software, is somehow “bureaucratic” … you don’t think it’s bureaucratic when it’s used to test the software for the jets you fly on and the subways you rely on, but when it’s applied to climate models it’s suddenly “bureaucratic” …

Now you’re putting words in my mouth. I did not say the models are useless. I said that we should not make billion dollar decisions based on the models until they are appropriately tested. For billion dollar decisions, “appropriately” means a whole lot more than they have been tested to date. I have no agenda for inaction, I have proposed action, which is to test the models. I have also said that if there is a cost-effective way to cut CO2 emissions in half, let’s go for it, and I invited you more than once to propose such action … still waiting …

The most useful action we can take is what are called the “three R’s” … reduce, recycle, and reuse. However, neither this, nor any other achievable action proposed to date, will reduce future CO2 levels in any significant manner. So what should we do?

The dangers envisioned as a result of climate change (which may or may not be affected by rising temperatures) are all happening today. Droughts? We have them. Floods? Been there. Hot spells? Already happened. Cold snaps? Common occurrence. Temperature rises? Been happening since the 17th century. Hurricanes? Since forever.

What we can and should do is continue our normal practice of adaptation and alteration of our habits to insulate ourselves as best we can from these vagaries of climate. Reducing CO2 won’t end droughts, hot spells or hurricanes. Want to cut down on deaths from drought? There are a number of programs in place to do that, you can support them in a variety of ways. Cut down hurricane deaths? Stop building on low-lying barrier islands. Floods? Put in flood-control devices, and stop building on floodplains.

All of these solutions to today’s problems (as well as to possible future increases in these same problems) have two things in common — they require energy, and they cost money. All of the problems have one thing in common — they hit the poor harder than the wealthy. Thus, any proposed solutions to the CO2 question must not make the world poorer or cut down on the available energy, or we will only exacerbate the problems the poor are facing as we discuss this.

My apologies for the lack of clarity in my writing. I never considered the possibility that you might confuse confidence intervals on the errors, which we had been discussing, with confidence intervals on the trends, which were not the subject of the discussion. However, now that it’s been clarified, you still are going on about confidence intervals on the trends, viz:

I repeat again that you are confusing confidence intervals on the expected errors with confidence intervals on the trends, and that you are mistaking confidence intervals on the trends for a prediction of what the future trend might be. If you don’t understand this, I’m afraid I can’t help you. Yes, the sign of the trend has changed … so what? The sign of the trend of the global temperature changed in the period 1940-1970, from warming to cooling. So what?

Regarding whether I look as hard at pro AGW studies as at anti AGW studies, you say:

Please remember the famous scientific dictum, “Absence of evidence is not evidence of absence” … there are bogus studies on both sides of the aisle, and I call them as I see them.

Can’t answer that until you give me the relevant time periods and the relevant trends. I will note that, using either the RSS or the UAH data, both trends are negative during portions of the record.

As to whether the differences are “minor”, see the RSS website Figure 7 shown here. The second panel shows the UAH versions 5.1 and 5.2, and as you can see, the difference is in fact minor.

As I mentioned before, if the conundrum didn’t exist, why is the Santer paper of interest? Why is Santer investigating the question? Why does Santer say that his results agree with one dataset, but not the other two?

And why do you depend on anything but the data? The HadCRUT3 global temperature dataset is online here. The UAH MSU results are online here. Why depend on the panel, when you can run the numbers yourself?

When you do the math, you’ll find out that the reason that the two are within confidence intervals is that the confidence intervals are so wide. From the HadCRUT3 data, all that we can say is that the global temperature rise is between 0.11 and 0.22°C per decade from 1979 to the present … which is so wide as to include the UAH data with or without errors, the RSS data, and the radiosonde data. Even if the MSU trend were statistically indistinguishable from zero, it would still be within the confidence interval of the datasets …

So is the Panel right, that the results are within the confidence interval? Assuredly. But what does this mean? The real meaning is that the dataset is too short, and thus the confidence intervals are too wide, for this to mean anything.

w.

Gosh, another appeal to authority … a common debating tactic, and a known logical error, so common that it has its own name. That don’t impress me much …

Here’s the situation. Michael Mann created the “Hockeystick”, which has been a mainstay of both the IPCC TAR and AGW arguments ever since.

Unfortunately, he didn’t reveal either the data he used, or the methods he used, to make the graph. Other scientists tried to replicate his results, but could not do so. Several scientists asked him, quite courteously, to show how he got the results. He refused.

Other scientists joined in, saying it was normal scientific practice to reveal how results were obtained, and that science depends on replication of results. One claimed result means nothing until other scientists can repeat the process and get the same result.

But despite a variety of requests, Mann continued to refuse to reveal his data and methods. He went so far as to say that for scientists to ask him for his data and methods was “intimidation”, and that he wasn’t going to be intimidated into releasing his data and methods …

Now, given that the Hockeystick is a central part of our billion dollar decisions and our views about climate change, a few questions for you, jshore. Not for some random Republican, but for you:

  1. Can science exist without replication of reported results?

  2. Was Mann right to refuse to reveal his data and methods?

  3. Is it “intimidation” for other scientists to ask a scientist to reveal his data and methods?

  4. Given that the revelation of data and methods is a bedrock tenet of science, what is the most likely explanation of Mann’s refusal?

  5. Given Mann’s continued refusal to reveal the data and methods underlying the “Hockeystick” graph despite repeated requests, and given the effect of the “Hockeystick” on public policy, what should have been done about his refusal? The “normal scientific methods” we both espouse had failed to budge him. What then?

My best to you,

w.

Some further notes on groups of scientists like the NAS making pronouncements on a topic. Often they foolishly don’t include statisticians in the group, with predictable results. Here are the kinds of problems you end up with.

The January 2000 NAS study on climate change concluded that:

This kind of fact can be checked. The trend for the century was 0.064 ± 0.017°C (95% confidence interval) per decade. The trend for the twenty years 1980-1999 was 0.143 ± 0.100°C per decade. And just as they say, 0.143 is “substantially greater” than 0.064.

But is that difference statistically significant? Well, because the confidence interval is so wide on the two-decade trend, the answer is no, the difference is not statistically significant. In fact, it’s a long way from significant; the confidence interval for the 20-year trend (which goes from 0.043 to 0.243°C/decade) actually includes the century-long trend of 0.064°C/decade. Not significant at all.

Have there been years in the record when the difference was statistically significant? Indeed there have. The 20-year periods beginning in 1921-1926 (that is, 1921-1940 through 1926-1945) all show warming that is statistically greater than the trend for the century.

Which is why such panels need a statistician on board, to prevent them from drawing false conclusions. Yes, there was “substantial” warming during the two decades … but not statistically significant warming. For that, we have to go back to the 1920s-1940s warming, which was significant.

So jshore, to what do you attribute that earlier and larger warming? Can’t be CO2, not enough change in CO2 that early. But if natural variation causes that kind of rapid warming, why should we conclude that a less significant recent warming is not natural?

And why should we believe a panel whose main conclusion, that recent warming is unusual enough to require an explanation, is not supported by the data? Like I said, I don’t trust anything in the field of climate science these days, from either side of the aisle. I run my own numbers. You’re welcome to run your own too, you may be surprised by the results.

For example, the 20-year temperature trend 1980-1999 referred to in the report was not the highest in the record, or the second highest, or even in the top 10% of the 20-year trends in the record. There are twenty-one periods of two decades in the data which had a higher trend … and only one of the top ten trends occurred in the second half of the 20th century (1964-1983). The rest were either in the first half of the century or the 19th century.

What can we conclude from this? Not much, except that our understanding of the climate and the forces that drive it is rudimentary at best …

w.

@JShore

I’m trying to work out where you are coming from, for example:

  • are you a mathematician ?
  • are you a programmer ?
  • do you have experience of working in financial markets ?
  • have you rubbed up with politicians ?
  • do you know scientists ?

This is not (I hope) an Ad Hominem attack, but I’m genuinely interested how you are not cynical about things that I find glaringly obvious.

I don’t trust statistics; samples are generally insignificant and it is easy to ‘clean’ data.
I don’t trust ‘experts’ unless I can arrive at the same conclusion independently, in which case the ‘expert’ is redundant.

I do have an appreciation of the value or benefits of producing ideas that are appealing to mass psychology - and strongly suspect the motives of those who play that game.

It would be interesting to know what formed your convictions.

I am a PhD physicist who has worked both in academia (as a student and postdoc) and in industry (and also in the federal government as a summer job when I was in college). I have ~30 publications in refereed scientific journals (mainly, but not exclusively, physics journals with some in allied fields like optics and imaging science) and have served as a referee for ~75 manuscripts submitted to journals (mainly the Physical Review journals). As I am a theorist, but not for the most part a “pencil-and-paper” one, essentially my whole career has been doing computational modeling in the physical sciences.

Mind you, I am not claiming any particular expertise in climate science, as this is not my field…although I do have enough background to read papers in the field and usually at least get some reasonable understanding of them…and I have spent a fair amount of time doing this. I also think I have a lot of firsthand knowledge of how science works in general and modeling in the physical sciences in particular. I also understand the “lay of the land,” e.g., the role served by the National Academy of Sciences, how the refereeing process works, etc.

As for your other questions, although I am not technically a mathematician, a theorist in physics obviously has to have strong mathematical training. I would not call myself a “programmer” as that is a bit narrow but it is one of the skills required for my career. I do not have experience working in financial markets although I do have friends from graduate school who ended up going to Wall Street (being that it was a rather popular thing to do for theoretical physicists when the academic job market got very tight). I have some experience interacting with politicians…although not a whole lot.

Well, this must make your life quite interesting. For example, I assume you won’t take any medicines because they have all been studied for safety and efficacy in statistical studies…in fact, purely statistical studies (as opposed to the physical sciences where our studies and models tend to be more mechanistic). I also imagine that you would never go in for surgery as that involves a doctor acting as an expert…and presumably you never get on a plane where you have to entrust yourself to the expertise of the pilot.

And, I assume that you were strongly against the invasion of Iraq since that was based on “expert” intelligence selectively filtered through a very political administration…By your criteria, you would have been completely bonkers to believe it, as history has in fact shown.

Who exactly then do you trust? Clearly, as I tried to explain above, you must be willing to rely on experts to some degree. Don’t you think there are better and worse ways of doing this and better or worse sources of information?

Well, if you actually read that website, it explains that while it is a logical fallacy to appeal to authority to rigorously prove something, it is reasonable to appeal to authorities as evidence in favor of an argument. And, in fact, I think that citing an authority is much more highly regarded here on the SDMB than just claiming to know the facts yourself, as you are doing here in regards to Mann (and many other things). You might trust yourself more than you trust an authority, but do you really expect us to trust you, who to us is no more than a person on a messageboard, more than we trust an authority?

For everybody else’s reference, here is the report that intention refers to. As for your results, I have several questions:

(1) What data set did you use for the surface temperatures, and what estimate of the errors in the data? [It was relatively easy for me to find a couple of data sets on the web, but none with uncertainty estimates.]

(2) Have you tried to reconcile your results with their statement made on p. 63 that the trend over the 20-year period is between 0.1 and 0.2 C per decade? [Note also their nearly, although not exactly, equivalent statement on p. 1 that the trend is 0.25 to 0.4 C over the entire 20-year period.] On p. 40 in the report I did see it said that confidence intervals quoted in the report are at 95% levels unless otherwise stated, so I assume that applies here, although it isn’t completely clear since they just give a range rather than a mean with confidence intervals.

(3) Also note their warning on p. 59: “Uncertainties exist in assigning confidence levels to trends because of persistence in the data, which may or may not be due to the trend itself. There is no unique set of confidence intervals for the relatively short atmospheric temperature time series considered here. The estimated confidence intervals depend on the underlying statistical model that is used to describe the data, as well as on the exact period considered and the sampling interval (i.e., whether one uses monthly, seasonal, or annual means).” What sort of underlying statistical model did you use and what sampling interval?

(4) There is also nothing completely magical about a 95% confidence interval. Even if we assume that your numbers are right, changing the confidence interval to 90% would [assuming a Gaussian distribution as the underlying model] reduce your ±0.100 to something like ±0.084, which would then put the lower bound for the 20-year trend at ~0.059…or almost exactly where the mean of the 100-year trend is. Since a 90% confidence interval means there is a 5% chance of the trend being above the interval and a 5% chance of it being below, then roughly speaking, the odds of the 20-year trend being lower than the 100-year trend are ~5%. [I say “roughly speaking” since I have been a bit sloppy here in assuming that the 100-year trend number is exact. Since its confidence interval is in fact much smaller than the 20-year one, this is not going to be too bad an approximation. However, given that the tail of a Gaussian is concave up, it will lead to an underestimate of that 5% number that I gave. So, the actual number might be, say, 6 or 7%.]