I just took note of this comment, and I’m very curious about it. I think it illustrates rank ignorance of science and statistics, but before making such a presumption and going off on an unnecessary tangent, I’d rather get clarification from you.
What exactly do you mean here? How has all of the work been “postdictive”? How has this differed from publications in other fields?
I don’t know what the U.N. generally does. However, in one of my posts above, I quoted from the response by the IPCC scenarios authors, who noted that most of the scenarios in the literature use MER only. (As I recall from reading the body, the number is something like only 9 out of a few hundred [200, 400? I forget] include PPP.) By contrast, of the IPCC scenarios produced, 8 out of 40 are reported in both MER and PPP. So they are including PPP valuations for a greater fraction of scenarios (20%, versus a few percent at most) than is used in the literature they reviewed.
At any rate, since you are so big on full disclosure of information by scientists and all, I think it might be good if you actually demonstrated a willingness to make use of it. I provided you with links to the full back-and-forth correspondence between Castles & Henderson and the IPCC folks that appeared in Energy and Environment. I also provided you with links to two other papers written by others subsequent to that. You can read these papers and draw your own conclusions.
I haven’t read all of that stuff closely enough to have a complete understanding of all the details because, frankly, it doesn’t interest me to that degree. I have read enough to conclude that the claim that what the IPCC did was some egregious violation of accepted practices is B.S., with pretty much no support outside of the community of those specifically dedicated to trying to discredit the IPCC. There is a legitimate debate about what is best to use and, chances are, the community will continue to evolve toward using PPP more as better data become available, just as the IPCC apparently included PPP numbers in a larger fraction of their scenarios than was found in the literature that they reviewed.
I’m sorry, Hentor, I thought that you had been following the thread and would know that I have already given a very clear example of how this is a significant problem in the climate science literature here. The IPCC admits that the problem exists, but when a reviewer requests that the IPCC adjust their significance estimates to make them statistically correct, the reviewer is simply blown off … the IPCC is getting its hands badly burnt.
For another example, you could take a look at this RealClimate post by Michael Mann and Phil Jones, two luminaries of the AGW movement. In it, they make no adjustment for autocorrelation in claiming that the odds of the record temperature being natural variation were astronomically small.
Yes, it is certainly astronomically improbable … but only if you are foolish enough to assume you can use “stationary ‘normal’ statistics” when you are dealing with an autocorrelated dataset. I tried to point out the problem on RealClimate … but in their usual “scientific” fashion, they simply censored my post; it was never published.
I can provide a number of other examples if you wish, but the fact that the IPCC and Phil Jones and Michael Mann are all busy denying that there is a problem and ignoring their burnt hands should tell you something.
Then I would guess that intention’s point is basically correct. But I would want to study the literature more carefully to be sure.
I started reading them – unfortunately, the link I was most interested in reading was no good. But anyway, I think you’re missing the point about disclosure by suggesting that I read the published papers. I would analogize published papers to the Sports Illustrated Swimsuit Issue: What is revealed is often interesting, but what is concealed is potentially vital.
Looks to me like there is legitimate debate about AGW too then.
I think the critical question is what exactly the scope of the null hypothesis is. If the null hypothesis includes the possibility of trends in temperature, then yeah, it’s obviously problematic to calculate a sample mean and variance from a run of 30 years, then look at the temperature a few years later and say “Aha! A 5-sigma event!”
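To make that concrete, here is a minimal sketch (my own illustration, not anyone’s published analysis) of how a persistent process fools the naive calculation. It assumes the null is a stationary AR(1) process; the persistence value of 0.9 and the other settings are arbitrary:

```python
import numpy as np

# Minimal sketch: how often does a naive "3-sigma event" occur when
# the null process is a stationary AR(1) with strong persistence?
# (phi = 0.9 and the other settings are assumptions for illustration.)
rng = np.random.default_rng(0)
n_sims, n_years, phi = 10_000, 35, 0.9
exceed = 0
for _ in range(n_sims):
    x = np.empty(n_years)
    x[0] = rng.standard_normal() / np.sqrt(1 - phi**2)  # stationary start
    for t in range(1, n_years):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    base = x[:30]                                  # 30-year "climatology"
    z = (x[-1] - base.mean()) / base.std(ddof=1)   # naive z-score
    exceed += abs(z) > 3
print(f"fraction of naive |z| > 3: {exceed / n_sims:.3f}")
print("i.i.d. normal expectation:  ~0.003")
```

With persistence that strong, naive “3-sigma events” turn up orders of magnitude more often than the i.i.d. normal tables would suggest, which is the problem with treating a trending or persistent series as if it were stationary noise.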
Aha. But now the question becomes, these “falsehoods” have now been, in a sense, enshrined and endorsed by British law. What effect might that have on this whole debate?
Well, I am not sure it will have much. It is worth noting, by the way, that, as you noted, the source you linked to is not the most objective. Here is a news article about the ruling that makes it sound like the judge’s conclusions were much more narrowly tailored than the site you looked at had implied and, in fact, there is really not that much in the judge’s findings of fact in regard to global warming that I would argue with. For example, it is correct that the most recent view on the ocean conveyor is that it is very unlikely to shut down (at least in this century). It is true that the graphs showing the strong correlation between CO2 levels and temperature over the last 650,000 years do not show which is cause and which is effect (and, in fact, it is understood that the warming initially triggers increases in CO2 and not the other way around … although it is also generally understood that the CO2 then magnifies the warming, i.e., that it is a mutually reinforcing relationship). Likewise, it is true that it is not possible to blame any one event, such as Hurricane Katrina, on global warming … all that one can say is how global warming changes the frequency or intensity of hurricanes (and this itself is still a very active area of study).
A lot of the debate presumably hinges on what Al Gore actually said or implied. For example, I thought he was careful not to say directly that Katrina was caused by global warming (rather than that global warming is likely to lead to increases in hurricane intensity), but he may have left the impression that this connection was there. But, let’s face it, Al Gore is not a scientist, and although many climate scientists in the field seem to feel that Gore got the facts basically correct, they would presumably admit that he was less rigorous than they would be in, e.g., explaining the degree of confidence behind various possible outcomes.
It depends on two things: how high the temperatures are, and how large the historical variations are.
To answer the question of whether any given temperature is unusual, we have to take autocorrelation into account. This is where Michael Mann and Phil Jones went way off the rails.
The difference between the answers given by the “stationary ‘normal’ statistics” used by Mann and Jones and statistics which account for autocorrelation can be huge. For example, I was looking at the correlation between two 125-year annual temperature series the other day. Using stationary ‘normal’ statistics, the odds of the correlation between the two occurring by chance were one in 10,000,000,000,000,000,000 (10^19), which is highly significant.
After adjusting for autocorrelation, on the other hand, the odds of the correlation occurring by chance were only one in 10 … and this is in no way unusual with the level of autocorrelation commonly found in temperature datasets.
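For readers who want to see the mechanics, here is a rough sketch of the kind of adjustment being described. It is my own construction, not the poster’s actual calculation: it discounts the sample size by the lag-1 autocorrelations of the two series (a Quenouille/Bretherton-style effective sample size, N_eff = N(1 − r1·r2)/(1 + r1·r2)). The function names and the AR(1) test data are assumptions:

```python
import numpy as np
from scipy import stats

def corr_significance(x, y):
    """Correlation p-value, naive vs. adjusted for autocorrelation."""
    n = len(x)
    r, p_naive = stats.pearsonr(x, y)
    # lag-1 autocorrelation of each series
    r1x = np.corrcoef(x[:-1], x[1:])[0, 1]
    r1y = np.corrcoef(y[:-1], y[1:])[0, 1]
    # effective sample size, discounted for shared persistence
    n_eff = n * (1 - r1x * r1y) / (1 + r1x * r1y)
    # t-test for r, but with the reduced degrees of freedom
    t = r * np.sqrt((n_eff - 2) / (1 - r**2))
    p_adj = 2 * stats.t.sf(abs(t), df=n_eff - 2)
    return r, p_naive, p_adj

# Two independent but strongly autocorrelated 125-point series:
rng = np.random.default_rng(1)
def ar1(n, phi=0.95):
    x = np.empty(n)
    x[0] = rng.standard_normal() / np.sqrt(1 - phi**2)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

r, p_naive, p_adj = corr_significance(ar1(125), ar1(125))
print(f"r = {r:.2f}, naive p = {p_naive:.2e}, adjusted p = {p_adj:.2e}")
```

Run on independent but strongly persistent series, the naive p-value routinely comes out wildly significant while the adjusted one does not, which is exactly the gap the post describes.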
In the RealClimate post, Mann and Jones said the odds of the unusually high April temperature in Svalbard being a natural variation were less than one in a million. They used this to claim that very unusual global warming was happening. But in fact, when calculated correctly, the odds of that temperature being a natural variation were only about one in three hundred.
Now, one in three hundred still sounds pretty impressive … until you consider that the record in question was a monthly temperature record which covered about a thousand months. In such a record, then, we would expect to see around three such “unusual” events … which makes finding one of them not unusual in the least.
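The arithmetic is easy to check. A quick back-of-envelope using the one-in-300 figure quoted above, assuming independent months (itself a generous simplification, given the autocorrelation point already made):

```python
# Back-of-envelope check of the "around three events" claim,
# assuming independent one-in-300 odds each month (a simplification):
p, n_months = 1 / 300, 1000
print(f"expected events:  {n_months * p:.1f}")            # ~3.3
print(f"P(at least one):  {1 - (1 - p) ** n_months:.2f}") # ~0.96
```

So over a thousand-month record, seeing at least one such “one in three hundred” month is close to a sure thing.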
This is the kind of bad statistics that forms the basis of the AGW claims. And despite the fact that the IPCC has been put on clear notice that it is a huge problem, this is the kind of bad statistics that the IPCC insists on continuing to use, in defiance of accepted statistical practice.