U.S. Dept. of Ag report: Climate change is damaging America . . . now

Fine, then look at the graph in the AR4 report that shows the same thing as Rahmstorf. Or read what the 2001 TAR has to say about the rate of expected warming being 0.1 to 0.2 C per decade. Frankly, I am sick of your bogus argumentation tactics that just waste everybody’s time here.

I have presented you with a peer-reviewed article and a peer-reviewed review of the scientific literature that both show the same thing. And, I have presented you with quotes from the 2001 source which make it clear what their projections were for warming over the next few decades. If you want to believe that you can read off a graph that was never made to show the level of detail that you are demanding from it, there is little that I can do. These are simply not honest tactics, and really there is little more that I can say about this outside of the Pit.

Holy double standards, Batman! Look at the huge lecture I just got for having the gall to ask for a cite! (And, as I will explain below, your cite…as I suspected…does not actually support your claim.) Let’s go back to a previous thread, shall we? Here, you say:

Now, you might say that that case was different. And, indeed I would agree. It was quite different. First of all, I had promptly provided you guys with a cite which you didn’t like because it was a short review article that mentioned some results that came from “hard calculations” but didn’t actually discuss the calculations in detail, although it did provide references into the literature. Fine. So next, once I got around to it, I provided you guys with links to a couple of abstracts of papers actually presenting the hard calculations but made a bit of a snide remark about being forced to do all the legwork. That prompted your sanctimonious reply.

In the current case, your sanctimonious reply was prompted simply by my asking for a cite. In other words, you have managed to be sanctimonious about my not providing you with a cite that you found good enough, and then about my not being completely cheerful when I had to do all the legwork to get one up to your exacting standards! And then you managed to be sanctimonious again here because I had the gall to actually ask you for a cite at all!

I have to say that in my 8 years on the SDMB, I have probably never seen such an amazing example of someone who has one set of standards for himself and another for everyone else. It does help to explain a lot of things that I have noticed in our interactions on this Board though.

As to the topic of your post, well I did manage to find that Wall Street Journal article and let me quote the relevant passage:

Let us compare this to your statement:

Do you notice the differences? First of all, one refers to “data” and the other to “exact computer code”, which the NSF has reiterated is Mann’s intellectual property. Second of all, it is not clear from the quote that Mann is referring simply to “asking for [the code]” as intimidation. I am not quite sure what he is referring to, but presumably it involves the sort of process by which McIntyre first tried to get the NSF to pressure Mann into releasing it and then, when the NSF told McIntyre in no uncertain terms that he was not entitled to it, managed to convince, or at least influence, the more wacko branch of the Republican Congress to get involved (while angering the scientific community and the saner Republicans in Congress, like House Science Committee Chairman Boehlert).

(By the way, you can correct me if I am wrong but despite McIntyre’s statement about believing he could find a lot more errors and the fact that Mann subsequently released his code, as far as I am aware there has been a strange lack of any further peer-reviewed articles from him on this subject since the one that was already published at the time that that Wall Street Journal article was written.)

Lol. Again you want to look at anything except the actual 2001 prediction.

It is the actual prediction unless you want to claim that both the IPCC and Rahmstorf have surreptitiously altered the prediction. (I suppose they then also went back in time and surreptitiously altered the 2001 report online to talk about the projection of temperature rise of 0.1 to 0.2 C for the next decades.)

But fine, you want predictions from the 2001 report; here I have managed to find the central predictions from each emissions scenario for various years. (I haven’t been able to find the whole envelope of the ranges for the different models.)

Note how the central projection for all the scenarios is 0.15-0.16 C for 2000, which you might also note is in remarkably good agreement with what you would read off the graph in Rahmstorf. By 2010, the projection ranges from 0.27 to 0.40 C for the different scenarios. (The Rahmstorf plot ends in ~2007 but these values look to be consistent with what you would get by extrapolating the dashed lines in that plot.) Note that the lower bound of the shaded area in Rahmstorf is well below the central value for even the lowest emissions scenario and hence the lower bound of the shaded area will clearly be significantly lower than 0.27 C even in 2010.

By comparison, your own analysis has shown that temperature trends (averaged over a 10-year period) have risen 0.26 C between 1990 and 2002.
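To make the comparison concrete, here is a rough sketch that linearly interpolates the table’s central values to 2002 and sets them beside that observed 0.26 C rise. The assumption of linearity between the tabulated years is mine; the numbers are the ones quoted above, with IS92a at the low end and the highest (unnamed) scenario at the high end.

```python
# Central projections (C above 1990) quoted above from the 2001 report's table.
# Linear interpolation between the tabulated years is an assumption made for
# illustration; the real scenario curves are not exactly straight lines.
central = {
    "IS92a (lowest central value)": {2000: 0.15, 2010: 0.27},
    "highest scenario":             {2000: 0.16, 2010: 0.40},
}

def interpolate(points, year):
    """Linearly interpolate between the two tabulated years."""
    (y0, v0), (y1, v1) = sorted(points.items())
    return v0 + (v1 - v0) * (year - y0) / (y1 - y0)

observed_2002 = 0.26  # the 10-year-averaged trend rise, 1990-2002, cited above

for name, points in central.items():
    print(f"{name}: ~{interpolate(points, 2002):.2f} C central projection "
          f"for 2002 vs. {observed_2002:.2f} C observed")
# The observed rise sits above even the highest central projection for 2002,
# i.e., nowhere near the lower bound of the envelope.
```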

I hope that you will now be man enough to admit that all the evidence is that the Rahmstorf figure with the projections is an accurate representation of what the 2001 predictions actually were and thus that you have been woefully incorrect in all of your claims that the actual temperature trends are ending up at or near the lower boundary of the projections. But, given your past history, I am not holding my breath.

If you don’t believe this, look at this figure from the 2001 report. Note that the black line is the central value for the IS92a scenario, which we know has the value of 0.15 C above 1990 in year 2000 and the value of 0.27 C above 1990 in year 2010. It is certainly clear that the lower boundary of the light blue envelope representing the model ensemble over all scenarios is significantly lower than this 0.27 C value by 2010. I would estimate it at about 2/3 that value, which puts it remarkably close to where Rahmstorf et al. have drawn it. So, even if the trendline were to turn out to be perfectly flat between 2002 and 2010, we would still not be particularly close to the lower boundary of this envelope.

As I noted, the Rahmstorf graph is not faithful to the IPCC graph.

As far as the prediction table goes, I will study it.

In fact, the graph in Rahmstorf is very faithful to the IPCC table. I just printed out and measured values off of the graph…and I got the value for all the scenarios in 2000 to be 0.16 C (above the 1990 value), in very good agreement with the table’s value of 0.16 C for all except IS92a, which is 0.15 C. I then extrapolated the scenarios shown on that graph to 2010 and got values ranging from 0.29 to 0.41 C for the central values for the different scenarios, in good agreement (considering the errors in extrapolating and reading off the graph) with the values in the table of 0.27 to 0.40 C.
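For anyone who wants to check the extrapolation step, it is just a two-point linear extension of each dashed scenario line. A rough sketch, where the 2007 reading is an illustrative stand-in for whatever one actually measures off a printout:

```python
def extrapolate(t0, v0, t1, v1, t):
    """Extend the straight line through (t0, v0) and (t1, v1) out to time t."""
    slope = (v1 - v0) / (t1 - t0)
    return v1 + slope * (t - t1)

# E.g., a scenario line measured at 0.16 C in 2000 and (hypothetically)
# 0.25 C in 2007, where the Rahmstorf plot ends, extends to:
print(f"{extrapolate(2000, 0.16, 2007, 0.25, 2010):.2f} C in 2010")
# -> 0.29 C, at the low end of the 0.29-0.41 C range I got above.
```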

Perhaps, but again . . . I’ve been talking about the IPCC graph. The one on the last page of the 2001 SPM.

If it turns out that the graph is wrong, then so be it.

Well, my whole point has always been that you had no business reading off numbers from that graph because it was impossible to do so with sufficient accuracy given the scale over which that graph was made. It was not so much that the graph was wrong as that it was completely inappropriate for the purpose you were trying to use it for…and the only reason to use it for such a purpose was that you were able to interpret it to give the answer that you wanted to get.

Nonsense. Just look at the graph. Perhaps you are blinded by your agenda, but my point is clear enough from looking at the graph.

If any lurkers are still reading, just look at the graph.

Lol. More likely the IPCC produced that graph to give the impression it was trying to give.

Whatever…I just hope that we have heard the last of your completely incorrect claim that the current temperature trends are anywhere close to falling outside of the lower bound of the envelope of the IPCC 2001 report projections.

:shrug: the claim is not incorrect, at a minimum if you go by the graph the IPCC chose to feature prominently in the Summary for Policymakers. I concede the possibility that the IPCC graph misrepresents its actual predictions.

brazil84: After studying that IPCC graph closely, I realize that there is in fact a small error in it that, along with your desire to use anything that you could find to get the result that you want, caused you to be led astray. The error is that the line that they implicitly mark as being 1990 is not really 1990…as can be seen by careful measurement, it is more like ~1983. And, because of this, when they say the zero in the temperature is at 1990, it isn’t actually so…that 0 is around 1983. (My guess, in fact, is that the zero might be set by the average temperature over the period 1960-1990, a period commonly used as a baseline for measuring anomalies.)

In fact, if you really blow that graph up tremendously, you can see where the black curve starts in 1990 (even though it is mainly obscured by the red curve on top of it); it indeed starts several years after the implicit 1990 mark, at an anomaly value of something like 0.08 C. This error is basically completely negligible on the scale of what the IPCC is trying to show on that plot…But, if you take that plot beyond its intended purpose and try to use it to zero in on predictions for a very short time period over which the temperature has not yet risen much from its 1990 value, the difference in the zero points is significant enough to make a difference.
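If it helps to see the arithmetic, here is a little sketch of how a period-mean zero shifts every anomaly relative to a 1990 zero. The temperature series is entirely made up; only the mechanism matters:

```python
import numpy as np

# Hypothetical temperature series over 1960-1990 with a slow warming trend;
# the actual values don't matter, only the offset mechanism.
years = np.arange(1960, 1991)
temps = 14.0 + 0.005 * (years - 1960)

baseline_mean = temps.mean()          # zero set to the 1960-1990 average
value_1990 = temps[years == 1990][0]  # zero set to the 1990 value itself

print(f"offset between the two zeros: {value_1990 - baseline_mean:.3f} C")
# Every point on the curve plots that much higher against the period-mean
# zero than against a true 1990 zero; the same kind of ~0.08 C shift
# described above.
```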

I checked and did not see this “small error.” In any event, I already conceded the possibility that the IPCC graph misrepresents the IPCC projection.

Is this “naïve method” a reference to climate models? I haven’t seen it used in that context before. It sounds like a misunderstanding of what the model is supposed to do. You want more than just to predict the average temperature in the next year – think about it – what use is a model that only predicts next year and nothing else? If you read the PDFs linked to from the Met Office website (here for easy access since that was a while ago) you’ll see that the point is not only to predict next year’s temperature but also to perform a successful hindcast. The combination of the two allows building better models.

Much mockery is devoted in certain (non-scientific) circles to hindcasting, but the derision conceals an important fear: if a model forecasts the future with an acceptable level of accuracy, it may be a fluke, or accurate for the wrong reasons. If a model both forecasts the future with an acceptable level of accuracy and can reproduce past observed temperatures given the initial conditions that preceded them, it strains credulity to claim the model is a fluke or right for the wrong reasons. That’s why the claims of “tuning” the models, or somehow faking it so that the model matches the hindcast, come up despite the lack of evidence of any fraud and despite the obvious logical problem: if the hindcast has been faked, why should the forecast have any level of accuracy at all? And if fraud produces forecasts with an acceptable level of accuracy, how would anyone tell the difference between that and legitimate model-building?
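For concreteness, the two-part check just amounts to scoring the model against observations over both windows. A rough sketch with entirely hypothetical numbers:

```python
import numpy as np

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

# Hypothetical anomaly series (C), split into a hindcast and a forecast window.
obs_past, model_past = [0.10, 0.15, 0.22], [0.11, 0.17, 0.20]
obs_future, model_future = [0.30, 0.34], [0.28, 0.35]

print(f"hindcast RMSE: {rmse(model_past, obs_past):.3f} C")
print(f"forecast RMSE: {rmse(model_future, obs_future):.3f} C")
# A model that scores well on both is far harder to dismiss as a fluke than
# one that happens to score well on the forecast alone.
```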

The naïve method does not lend itself to building better models. Perhaps that explains its popularity.

Your reference to the model being discarded is also a hint that you may want to think more about the purpose of the models. Much can be learned from earlier models to improve the performance of subsequent models. Models are not like hammers in a workshop – they’re maybe more like cars. Their makers tinker with them each year, adding improvements and eliminating features that don’t work, with the idea of building a better model. Very few people bother trying to build better hammers because hammers are pretty simple and do the job fine. Global Circulation/Climate Models are not simple, and we want them to do a better job. They’re currently pretty good with temperature, but they could especially improve in forecasting precipitation and in resolution for smaller geographic areas.

Hi matt, it’s nice to have a new participant in these debates. First of all, I don’t know that science on either side of the iris hypothesis has been suppressed: Spencer et al. 2007 (which I haven’t had the opportunity to read yet) didn’t seem to have any problems getting published, and Lin et al. 2002 seemed to have no problem suggesting that the feedback from clouds might be weakly positive rather than strongly negative. Sorry if this is covering ground already addressed by other posters – as you can tell, I’ve got a lot of catching up to do.

Lol. Since long before there was the IPCC, long before Michael Mann got his PhD, and long before James Hansen testified to Congress, people have been thinking analytically about forecasting in general.

The “naive method” is not an actual forecasting method. It’s a benchmark or standard for testing forecasts. The thinking is that if a forecasting method cannot outperform a “naive model,” then it’s probably pretty lousy.
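In case a concrete version helps, here is roughly what that benchmark looks like: a persistence forecast (“next year will be like this year”) and a standard skill score against it. The temperature series is hypothetical; the skill-score idea is standard in the forecasting literature.

```python
import numpy as np

obs = np.array([0.12, 0.15, 0.13, 0.20, 0.22, 0.25])  # hypothetical anomalies

naive_pred = obs[:-1]  # persistence: predict that next year equals this year
model_pred = np.array([0.14, 0.14, 0.17, 0.21, 0.24])  # hypothetical model

mse_naive = np.mean((obs[1:] - naive_pred) ** 2)
mse_model = np.mean((obs[1:] - model_pred) ** 2)

# Skill score: 1 is perfect, 0 is no better than the naive benchmark,
# negative is worse than the benchmark.
skill = 1.0 - mse_model / mse_naive
print(f"skill vs. persistence: {skill:.2f}")
```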

If you don’t like the word “discard,” then I’ll put it a different way: If your prediction model can’t outperform a “naive model,” then you probably need to go back to the drawing board.

Which is fine. It’s worth spending some amount of effort trying to forecast climate. Whether we will accomplish anything worthwhile is another issue. And for now, it would appear that our models are not particularly skillful.

If somebody has a model which predicts future temperature anomalies with a reasonable level of accuracy, that would be very interesting. So far I have not seen such a model (outside of normal meteorology).

:confused: I have no idea what your point is here. Certainly it is worthwhile to test models. The naive method is a useful test.

Paging brazil84!

I’ve mentioned this before - it’s a pathetic ad hominem to use the “Laughing Out Loud” acronym to try to convince people you find opposing arguments laughable. No one reading this believes that you’re laughing or that anything funny was said.

I can think of a good reason not to use such a naïve method for climate forecasting (excellent pun, by the way :smiley: ) - imagine you have a model that does an excellent job of representing the physics and atmospheric chemistry of greenhouse gases, aerosols, solar forcings, and land use impacts. So you make a forecast using such a model. Then one of the many random events that can affect climate occurs - a volcanic eruption or El Niño, say - putting the year’s temperatures closer to the year before than to your forecast. In this event, your naïve method suggests climatologists should discard the model regardless of its faithfulness to all the above forcings.

A better method, I would suggest, is to see whether the observed temperatures fall within the model forecast’s confidence interval. But perhaps you have a reason why you think the naïve method is better; I’d love to hear it.
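Here is a rough sketch of what I mean, with hypothetical numbers throughout; the test is simply whether observations land inside the stated interval at roughly the advertised rate over many forecasts:

```python
import numpy as np

forecast = np.array([0.20, 0.24, 0.27])  # hypothetical central forecasts (C)
half_width = 0.05                        # hypothetical 95% CI half-width (C)
obs = np.array([0.23, 0.21, 0.30])       # hypothetical observations (C)

inside = np.abs(obs - forecast) <= half_width
print(f"coverage: {inside.mean():.0%} of observations inside the interval")
# A well-calibrated 95% interval should cover ~95% of observations in the
# long run; a single volcano-driven miss does not, by itself, condemn the model.
```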

How is it useful? What can you do with the naïve method besides discard the model?

It would depend on how many years this went on for. Perhaps one year can be forgiven. Seven or eight years is a different story. If a climate model performs consistently poorly over many years, it’s performing poorly. Besides, if you claim that your climate model would have been great except for the influence of random events such as volcanoes or El Niño, you aren’t making a very strong claim. Because random events are part of the game.

Your method doesn’t really allow one to gauge the possibility that one is right but for the wrong reason. For example, I could put together a climate model based on the assumption that temperatures are influenced by the average lengths of women’s dresses. With judicious choice of confidence intervals, my model might turn out to be correct, year after year after year.

Getting rid of lousy models is obviously useful.

jshore, you say:

Well, I went away from this thread for a while to let my blood pressure cool down from this egregious attack. I’m back, a bit cooler, to say that your claim that I have a “double standard” is a stack of horseshit, and you can stuff your claim that I am “sanctimonious” straight up the Moderator’s ass.

Take a deep breath, jshore. The reason you got a lecture was that you had previously asked for the identical citation right here, and I had provided it to you.

This was not one of those “I provided the citation somewhere that you might not have seen it” cases, either. I provided the citation in response to your request, and I quoted exactly what Michael Mann had said, and you and I discussed the citation and the topic in general.

Now you want to pretend that you never heard of it, or that I had never provided it. Or perhaps you’re just having trouble with your memory, that’s fine, it happens sometimes.

But don’t attack me just because you’re too damned [snip] to remember what the f*ck you’ve talked about before. It just gets my blood pressure up, and makes you look stupid and vicious. I know you’re not … but it sure makes you look that way.

w.

PS - Having gotten that off my chest, look, jshore, the issue is not whether the data and the computer code belong to Michael Mann (likely not, but that’s a side question).

The issue is whether a study which cannot be replicated for any reason should be considered a scientific result. Sure, M. Mann can refuse to show the code … but he can’t both conceal the code, and then go on to claim that his results are science, give them prominence in the IPCC TAR, and expect us to make billion dollar decisions based on those results. No replication = no science, as I am sure is the rule in the scientific discipline where you work.

To reiterate what I said in the previous post, “A scientist refuses to say how he got his results, and you defend him … would you even consider doing this if the subject were cold fusion?” Seriously. If a guy claimed that he could produce cold fusion, and wanted your company to spend millions of dollars based on his work, I suspect that you, jshore, would be among the first to ask if anyone else had replicated the guy’s work.

Now there’s a double standard …