Temperature Record of the Last 1000 Years

As I indicated above, based on your description of the problem, I don’t see any best way (or any way) to solve the problem.

Any other questions?

As you requested, I’m moving this topic here.

Well then this appears to be where you’ve got a problem.

I would probably advise creating a 3D histogram where X and Y are the reported temperature and time (respectively) and Z is the frequency of results across studies. For any time slice, you would expect to see some sort of distribution (perhaps a bell curve), leading one to pick the highest point of the curve for each time.
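For concreteness, here is a minimal sketch of that histogram idea in Python. The data, bin widths, and array shapes are all made up for illustration:

```python
import numpy as np

# Hypothetical input: each study contributes (year, temperature) pairs.
# The synthetic data and bin choices below are purely illustrative.
rng = np.random.default_rng(0)
studies = [np.column_stack([np.arange(1000, 2000),
                            rng.normal(0.0, 0.2, 1000)])
           for _ in range(10)]
points = np.vstack(studies)

time_bins = np.arange(1000, 2001, 50)    # 50-year slices
temp_bins = np.linspace(-1.0, 1.0, 41)   # 0.05-degree bins

# Z = frequency of results in each (time, temperature) cell
hist, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                            bins=[time_bins, temp_bins])

# For each time slice, pick the peak of the distribution
temp_centers = 0.5 * (temp_bins[:-1] + temp_bins[1:])
peak_per_slice = temp_centers[hist.argmax(axis=1)]
```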

More simply, you could average the results for each unit of time together.

Then you might consider applying a light moving average to smooth out local fluctuations.
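Something like this, assuming again that each study reports one value per year (all names and numbers here are illustrative):

```python
import numpy as np

# Illustrative only: ten synthetic "studies", each reporting one
# temperature anomaly per year from 1000-1999 CE.
rng = np.random.default_rng(1)
years = np.arange(1000, 2000)
studies = rng.normal(0.0, 0.2, size=(10, years.size))

# Average the studies for each year...
yearly_mean = studies.mean(axis=0)

# ...then apply a light centered moving average to damp
# year-to-year fluctuations.
window = 11
smoothed = np.convolve(yearly_mean, np.ones(window) / window, mode="same")
```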

Or do you think that the results of doing so would tell us next to nothing?

aptronym, you say inter alia:

It is not clear what you mean by “historically” or “current warming”. Could you be a bit more specific? AFAIK, century-long warming trends have not been as high as you say. There are some 59 hundred-year periods in the HadCRUT3 global temperature record, starting with 1850-1949 and ending with 1908-2007. None of those periods has seen a 1°C warming, so I’m not clear why you say that current warming is 1°C/century.

And prior to that we don’t have global temperature records, so I also don’t know what you mean by “historically”.
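If anyone wants to check that kind of claim themselves, here is a minimal sketch of the calculation: slide a 100-year window along an annual series and fit a trend in each window. The series below is synthetic; swap in the actual HadCRUT3 annual means to reproduce the 59 windows.

```python
import numpy as np

# Stand-in for HadCRUT3 annual global means, 1850-2007 (158 values).
rng = np.random.default_rng(2)
years = np.arange(1850, 2008)
temps = 0.005 * (years - 1850) + rng.normal(0, 0.1, years.size)

window = 100
trends = []
for start in range(years.size - window + 1):    # 59 windows
    slope = np.polyfit(years[start:start + window],
                       temps[start:start + window], 1)[0]
    trends.append(slope * 100)                  # degrees C per century
print(max(trends))
```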

All the best,

w.

Correct. Because you will not know the amount of systematic error.

Any other questions?

Sure. First of all, this was a minor point in my other thread, so I wasn’t precise with the numbers.

In the context of this thread, I’ll be more precise - the instrumental record shows a 0.8°C/century increase from 1900-2000 CE and the satellite data show a 1.6-2.4°C/century increase from 1980-2000.

Meanwhile, temperature reconstructions from the glaciers (Science 29 April 2005: Vol. 308, no. 5722, pp. 675-677) suggest a historical cooling/warming of ±0.2°C/century from 0-2000 CE, and tree ring chronologies (Science 22 March 2002: Vol. 295, no. 5563, pp. 2250-2253) suggest a historical cooling/warming of ±0.3°C/century from 800-2000 CE.

My point was that while absolute temperatures now may be comparable with a Medieval Warm Period (which appears in both the glacier and tree ring records), the rate of change is not.

And that the rate of change matters more for the effect on humans.

brazil84 claimed he had a wonderful response to this argument, which could not be contained in the margins of the other thread.

Thank you.

As I mentioned earlier in the thread, many of the proxy measurements that have been used have been called into serious question. Among other things, there is a “divergence” problem, i.e. many or most proxies don’t track well with recent instrumental records. If the proxy measurements aren’t picking up the slope of the current warming trend, it stands to reason that there may have been other warming trends in the past that they didn’t pick up.

It would appear that if you don’t splice the instrumental temperature record onto the proxies, you get little or no hockey stick.

What was so difficult about writing that, that you took three posts in the other thread to insist that I take it to this thread?

None of the arguments presented in “brazil84’s Global Warming Thread” rely on proxy temperature data.

Elucidate, please.

Well, okay, I guess it is technically true that those willing to wade through the full report that he linked to will be able to see how he quoted from it quite selectively.

The report is over 100 pages long. By going through and picking out a quotation here and a quotation there, some of which are not entirely clear, cobbling them together and putting your interpretation on them, you have managed to come to conclusions that are at odds with the actual conclusion of the report. (This in the past led you to label the report “schizophrenic” as an excuse… when a more likely conclusion is that the problem is at your end.)

The Wegman report, which was commissioned by the Republican majority on the House Energy and Commerce Committee, addressed only a very narrow question about the Mann methodology and did not deal with any broader questions about the science. In fact, as his testimony before the committee made clear, Wegman was not even remotely qualified to weigh in on the broader questions since his knowledge of climate science was pretty pathetic. Yes, he is a good statistician, but he is not a climate scientist and was only asked to answer a very narrow statistical question about the particular method that Mann et al. used in that particular paper.

Again, you are cobbling together facts that don’t go together to derive a statement that is at odds with what the NAS actually says.

Actually, the first study you cite is only from 1600 to 1990. But let’s take a closer look at the second study you cite, which covers 831 to 1992. (The point I’m about to make also applies to the first study.)

By doing a Google image search, I was able to find what I believe is a graph showing the temperature reconstruction you have cited (it’s apparently the blue line):

http://www.wooster.edu/geology/tr/esper1.jpg

What’s important to note, for purposes of your argument, is that there is no dramatic change in the slope of the graph in the 20th century. This undercuts your argument that the rate of temperature change in recent times is unprecedented.

To get the result you have claimed, one needs to compare the proxy record from before 1900 with the instrumental record from after 1900. This is what I have called splicing. The fact that you need to splice to get your results suggests that the change in slope is a result of a change in data gathering method as opposed to a change in what’s actually going on in the climate.

Not only that, but it seems clear that the study you have cited did not pick up the rate of temperature change in recent history as indicated by the instrumental records. If the recent warming trend is not fully captured, why should we assume that there were no warming (or cooling) trends in the past that were not fully captured?

Bottom line: You are comparing apples and oranges.

And voilà. :)

I didn’t want to get sidetracked in the other thread.

Oh really? Then what exactly did you mean when you said this:

I’m very curious to know.

Are you unfamiliar with the concept of systematic error? Or you just don’t see how it might apply to the situation you described?

OK, I see what some parts of the problem are. First, you can’t extrapolate from twenty years to a century, or from a short period to a long one. For example, suppose this year is two degrees warmer than last year. Is that a “200°C per century warming”?

Second, I know of no instrumental data that show a 0.8°C/century trend from 1900-1999 (or 1901-2000; your figure of 1900-2000 is more than a century). Both the HadCRUT3 and the GISTEMP datasets show only about 3/4 of that.

Third, comparing numbers without error estimates is meaningless. The 95% confidence interval on the 1900-1999 annual HadCRUT3 trend is 0.6±0.2°C. Note that this does not include the error estimate of the data itself, which is ±0.2°C (95% CI) for the most recent years, and more for the earlier years.

The 95% confidence interval on the 1980-2000 annual RSS satellite troposphere trend is 1.6 ± 1.5°C. Thus there is no statistically significant difference between the two trends. Indeed, the satellite trend is only barely statistically different from zero.
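To make the comparison concrete, here is a rough sketch of how such trend confidence intervals are computed: ordinary least squares, with no correction for autocorrelation (which understates the true uncertainty). The data below are synthetic stand-ins, not the actual records:

```python
import numpy as np
from scipy import stats

def century_trend_ci(years, temps, conf=0.95):
    """OLS trend in degrees C per century, with a conf-level
    confidence interval (no autocorrelation correction)."""
    res = stats.linregress(years, temps)
    tcrit = stats.t.ppf(0.5 + conf / 2, df=len(years) - 2)
    return res.slope * 100, tcrit * res.stderr * 100

# Synthetic stand-ins for the two records being compared.
rng = np.random.default_rng(3)
y1 = np.arange(1900, 2000)
t1 = 0.006 * (y1 - 1900) + rng.normal(0, 0.15, y1.size)
y2 = np.arange(1980, 2001)
t2 = 0.016 * (y2 - 1980) + rng.normal(0, 0.15, y2.size)

s1, ci1 = century_trend_ci(y1, t1)
s2, ci2 = century_trend_ci(y2, t2)
# If the two intervals overlap, the trends are not
# statistically distinguishable from one another.
print(f"{s1:.2f} +/- {ci1:.2f}  vs  {s2:.2f} +/- {ci2:.2f}")
```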

Here we start to get into real trouble. First, as I pointed out above, we can’t compare numbers without error estimates, and the proxy data you quote doesn’t contain them.

Second, the satellite data is only barely significantly different from zero. This is well within the range of both the tree ring and glacier data even without error estimates.

Third, the idea that glacier length (your first proxy) is related to temperature is highly uncertain. The same is true of tree ring data. Neither of these can be shown to be temperature proxies of any accuracy.

But regardless, the glacier length proxy shows ~50% faster warming from 1920-1940 than for 1980 on … which doesn’t exactly argue for human intervention. It also doesn’t agree with the HadCRUT instrumental record. Finally, the glacier record is far too autocorrelated to estimate the error in the result, which makes it useless for comparison purposes. Average lag-1 autocorrelation is 0.98!
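For reference, here is how one might check that, using one common effective-sample-size adjustment for AR(1)-like series, n_eff = n(1 − r1)/(1 + r1). The numbers here are hypothetical; feed in the actual glacier reconstruction to reproduce them:

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# With r1 = 0.98, the effective sample size collapses:
n, r1 = 2000, 0.98
print(n * (1 - r1) / (1 + r1))   # ~20 independent points out of 2000
```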

In addition, 91% of the glaciers measured are north of 35°N … hardly ideal for a “global” reconstruction, since that is only ~20% of the earth’s surface.
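As a quick check on that ~20% figure: the fraction of a sphere’s surface above latitude φ is (1 − sin φ)/2, which for 35°N gives:

```python
import numpy as np
print((1 - np.sin(np.radians(35.0))) / 2)   # ~0.21, i.e. about 20%
```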

Finally, the Esper paper you cite has too many problems for me to even begin to chronicle. See here for half a dozen threads covering some of these problems.

The takeaway message is that there are real problems comparing just observational temperature trends, and that comparing observational trends with proxy trends is a study in impossibility. You’re saying “But look! Look! The apples are redder than the oranges!” Yes, they are, but … so what?

w.

Thank you for the reply, jshore. You say:

Did you read the report? He was commenting on the statistical methods used in both MBH98 and MBH99, and he said:

Now, you can fluff about by saying it was the evil Republicans who asked him, and he didn’t claim to be a climate scientist, and all the rest, but in his report he is extremely clear. McIntyre and McKitrick’s criticisms were “valid and compelling”, Mann’s work was “obscure and incomplete” and Mann’s conclusions “cannot be supported by his analysis”. HE IS COMMENTING ON THE STATISTICS. Whether he was asked by Republicans or is not a climate expert is immaterial.

Hmmm. I ask you for some evidence that the Mann method’s mining for hockeysticks “did not happen in this case” … your response is to attack my question.

Say what? I swear I thought you said some time back that you were a scientist.

You have claimed, without citation or evidence, that Mann’s “mine for the hockeysticks” method did not mine for hockeysticks in this case.

Provide some evidence for your claim, or let it go. That’s all I asked for. Don’t bother trying to confuse people by attacking me, or bringing in the NAS. You made the claim, not me, and not the NAS.

w.

PS - You still have not found a single factual error in the citation I linked to originally. You’ve brought in Republican committees, and how narrow the questions were, and whether Wegman is a climate expert, and whether you have time to understand the statistics, and attacked my statements, and made claims without facts to back them up … I call that kind of thing “table pounding”, because your actions remind me of the old lawyer’s saying:

I think the better question is whether you are familiar with the concept.

“A systematic error is any biasing effect, in the environment, methods of observation or instruments used, which introduces error into an experiment and is such that it always affects the results of an experiment in the same direction.”

So now, where exactly in the example of the falling marble is there an effect that will cause an error in a single direction? If I have 100 distorted glasses, each different from one another (as specified), why in the world would the viewed result always skew in the same direction?

Sage Rat, the fact that the glasses are different from one another does not mean there is no systematic bias. I can think of a variety of ways that 100 different distorted glasses could give a systematic (bias) error. They might, for example, all be a bit thicker at the bottom than at the top, leading to a bias error due to refraction.

More important is the fact that a bias error is possible in virtually any measurement situation … including climate measurements.

There is a second type of problem which often rears its ugly head in climate measurements. This is where the error distribution is non-normal. It often arises because the climate measurements themselves are not normally distributed.

A third type of error comes in when we try to analyze datasets which exhibit long- and short-term autocorrelation. This type of dataset, which is very common in climate science, cannot be analyzed using standard statistical procedures.

Finally, unlike your example using gravity, many climate variables either are or appear to be non-stationary (their mean and other statistics change over time). When the data are not stationary, all bets are off. It is difficult, for example, to assign an error value to the average height of a growing child, because their height increases every day. Imagine your marble example on a planet where the strength of gravity increased over time in some complex fashion … what would your statistics tell you then?

In short, statistically analyzing climate variables has a number of pitfalls for the unwary. You cannot use standard statistical tools to do the job, because those tools are for what are called “stationary IID” datasets. “IID” means “independent identically distributed”, which most climate datasets are not. And although climate is many things, “stationary” is generally not one of them.
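A quick simulation illustrates the autocorrelation pitfall: generate trendless AR(1) series and count how often an IID-assuming regression declares a “significant” trend. The parameters here are arbitrary, chosen only to make the effect visible:

```python
import numpy as np
from scipy import stats

# Trendless AR(1) series, tested with an ordinary (IID-assuming)
# regression. The nominal false-positive rate should be 5%.
rng = np.random.default_rng(4)
n, phi, trials, false_hits = 100, 0.9, 1000, 0

for _ in range(trials):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    if stats.linregress(np.arange(n), x).pvalue < 0.05:
        false_hits += 1

print(false_hits / trials)   # far above the nominal 0.05
```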

All of this makes the determination of error values quite problematic. It also leads to error values generally being under-estimated, because all of the problems I listed above lead to larger-than-expected errors.

w.

Actually, the question is whether the results will skew in the same direction on average.

For example, suppose that glass panes 1 through 10 cause the marble to appear 5.1, 5.2, 5.3 … 5.9, and 6.0 centimeters off in direction A. Suppose that glass panes 11 through 100 cause the marble to appear 5.1, 5.2, 5.3 … 14.0 centimeters off in the opposite direction (direction B).

Suppose further that when people estimate marble paths, they tend to be an average of 5 centimeters off in direction B, and that their errors are normally distributed with standard deviation of 1 centimeter.

Well, in that case, if you take the 100 estimates and average them, you will get a number that’s off. You could re-use the glass and do 1,000 estimates … or even a million estimates. You would still be off.

And there’s no a priori reason to assume that the experiment you describe is not subject to this sort of systematic error.

The bottom line: If you take 100 observers who are wrong and average their observations, there is no general reason to think that you will get the right answer.
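Anyone who doubts this can run the numbers from the example above (direction B taken as positive, direction A as negative):

```python
import numpy as np

rng = np.random.default_rng(5)
# Per-pane distortions from the example: panes 1-10 skew toward A,
# panes 11-100 toward B.
pane_bias = np.concatenate([-np.arange(5.1, 6.01, 0.1),    # panes 1-10
                            np.arange(5.1, 14.01, 0.1)])   # panes 11-100

# Each trial: one estimate through each pane, plus the observer's
# own error (mean 5 cm toward B, sd 1 cm), averaged over all 100.
trials = 10_000
errors = rng.normal(5.0, 1.0, size=(trials, pane_bias.size))
trial_means = (pane_bias + errors).mean(axis=1)

# The average converges to the combined systematic bias
# (about 13 cm toward B), not to zero.
print(trial_means.mean())
```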

It reminds me of a fable popularized by the late Richard Feynman:

Any other questions?

I agree with this 100%. And I think that Sage Rat’s mistake in his marble hypothetical is illustrative of what can happen when people who are naive in the areas of probability and statistics attempt to draw inferences and conclusions that rely on probability and statistics.

Although he seems to now fancy himself enough of a climate expert to sign on to this statement!

NAS report:

(bolding added) So, yes, there are some concerns about the data and the robustness of the results pre-1600 in light of the dependence on the Great Basin region data [as was already discussed in Mann (1999)] but, as for the method having a tendency to bias the shape of the reconstructions, they note that, while the method can in principle do this, it does not seem to do so in this case.

I’ve never claimed that Mann is the last word regarding temperature reconstructions or that there isn’t the need to get better data and to produce results that are more robust to removal of various data, as Osburn and Briffa showed in their work, how one can remove up to 3 pieces of data and still reach the conclusion that the 20th century was more anomalously warm than any period before. Of course, Osburn and Briffa are presumably not the last word either because I am sure that one can find faults with their data too. This is why the IPCC still only uses the word “likely” (>66% chance) to the claim that the latter part of the 20th century was the warmest of any comparable period of time over the last 1300 years.