My apologies, that’s where I originally got the document, let me chase it down … OK, try here.
The method consists in using the lag-1 autocorrelation to reduce the number of degrees of freedom. The formula is:
EffectiveN = N * (1 - R - 0.68 / Sqr(N)) / (1 + R + 0.68 / Sqr(N))
where N is the number of data points, and R is the lag-1 autocorrelation.
The effect on the standard error of the trend is to increase it by
Sqr(N-1) / Sqr(EffectiveN-1).
I have tested this method by using it to calculate the standard error of both the monthly data and the corresponding annually-averaged data. In general, there is little difference between the standard errors of the two when using Nychka’s method, which is as it should be.
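In case anyone wants to check the arithmetic for themselves, here’s a minimal sketch in Python (my own illustration, not the code I actually used; the example numbers are made up):

```python
# Minimal sketch of the effective-N adjustment quoted above. The 0.68/Sqr(N)
# term and the standard-error inflation factor follow the formulas in the post;
# the example numbers (360 monthly points, R = 0.7) are just for illustration.
import numpy as np

def effective_n(n, r):
    """EffectiveN = N * (1 - R - 0.68/sqrt(N)) / (1 + R + 0.68/sqrt(N))"""
    return n * (1.0 - r - 0.68 / np.sqrt(n)) / (1.0 + r + 0.68 / np.sqrt(n))

def se_inflation(n, n_eff):
    """Factor by which the trend's standard error increases: sqrt(N-1)/sqrt(EffectiveN-1)."""
    return np.sqrt(n - 1.0) / np.sqrt(n_eff - 1.0)

n, r = 360, 0.7                 # e.g. 30 years of monthly data, lag-1 autocorrelation 0.7
n_eff = effective_n(n, r)
print(n_eff, se_inflation(n, n_eff))   # ~55 effective points, SE roughly 2.6x larger
```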
jshore, you have to learn to parse these folks’ statements very carefully. They are written and edited by half a dozen authors, and are crafted with precision.
For example, they say the models are “essentially independent” of the data … which means what, exactly? Either they are independent or they are not, there is no middle ground, it’s like being a bit pregnant, not possible. Either there is what is rather inelegantly called “data snooping” or not.
Here’s another example. They claim that the models “are not ‘tuned’ to reproduce the most recent temperatures”. I have to take my hat off to that statement, it’s a work of art. It sounds absolutely scientific, and it says absolutely nothing. I guess to be fair it does say something, it clearly means that the models are tuned to reproduce the “less recent” temperatures … but that says nothing either. A scientific statement has numbers. Are the models tuned to the data up to 1990? To 1995? To 2000? Each of these answers would mean very different things to our understanding of their results.
But that’s not my point.
With tuned models, until you make it clear whether you are working on out-of-sample data or not, you are not telling me anything. So in fact, they are saying the models are not independent of some of the climate data, while making it look like they are independent, without a single citation to back up or explain their claim, or even a number so it could be falsified. But that’s not my point.
They also say “global sea level data were not available at the time” … not sure how that can be, since they are using satellite data since 1990 which is available in real time … maybe their watches are set slow or something, but it is simply not true that 1999 sea level data was not available in 2000. However, true or not, this claim seems to indicate that whatever data the models are not independent of, it is the temperature data or the CO2 data or both. Why do I suspect it’s not CO2?
These kinds of not-scientific, very carefully worded number-free pseudo-claims drive me spare. It is clear from the statement that the models are not independent of the data … but exactly where and how?
But that wasn’t my point either.
Next, it is worth noting that the models used in this exercise were not the GCMs which were used by the IPCC to forecast our impending doom. Instead, they are the results of a simplified model which the IPCC claims was “tuned” to the various GCMs. So they are the results of a model which is tuned to a model which is tuned to historical data … except of course it’s not tuned to the “recent temperatures”. Regrettably, the paper makes no mention that we’re not looking at model results, but at a model of model results. But also, not my point.
My point was that (as near as I can tell using their pathetic references) they are using the IPCC Fig. 9-14 results, but they have made an important change in the graphic.
In the IPCC figure, the 1990 - 2000 jump in temperature does not show the range of model results for the different scenarios. You can tell because it is a single line with no grey error bars. It is the result of a single scenario … which (as the IPCC is fond of telling us) is no more or less probable than any other scenario. So there is no expectation that it be accurate.
In fact, this scenario for the decade 1990-2000 was chosen in 1999, based on data up to that point. Now, Rahmstorf et al. are claiming that we should examine it as though it were a forecast, to see how well the IPCC can forecast temperature changes. But it’s not a forecast, it’s a hindcast. Which is why I say they are suckering people into believing they are comparing 15 years of forecasts and data, when in fact they are not.
Which scenario was used by the IPCC for the period 1990 - 2000? Well … all of them. All of the various scenarios have the same forcing 1990 - 2000. This forcing was known at the time to be low, but nobody worried about that, because it was in the past. Nobody was considering it to be anything other than a baseline scenario to give everyone a common starting point for the splitting into various scenarios in 2000.
After 2000, as you can see, the model results from the different scenarios go in different directions; the A1 and B1 and the rest all go their own way.
But before 2000, there is no such diversity.
This is because when the 2001 report was written, 1990-2000 was in the past. It is the starting point for the various scenarios. That’s why in Figure 9.14, there is no grey area surrounding the 1990-2000 results … so why have Rahmstorf et al. changed the IPCC figure to include a grey area not shown by the IPCC? Inquiring minds wonder …
Unless, of course, Rahmstorf et al. were using a whole other list of model results, in which case all bets are off … but then they very carefully have not said which results they are using. It would have been nice, and trivially easy, if they had actually said where the model results were to be found in the 2001 TAR, instead of referencing the entire IPCC report and expecting us to guess which of the hundreds of model results they are talking about… but that kind of petty nonsense is to be expected in climate science. So it’s possible they are using some other model results from some unknown place.
For me, a climate scientist giving a citation like
is identical to a creationist saying “the answer’s right there in the Bible”. Well, maybe so, but where? Chapter and verse are necessary for science. When I see a citation to the entire IPCC report like that, danger flags start flying and warning horns start blaring. Real scientists give real citations, they don’t just wave at the IPCC Bible and say “The answer’s in there, you go find it”.
w.
PS - until I originally researched Rahmstorf et al.'s claims, I was unaware that they were not referring to the models used by the IPCC. Instead, they are reporting the results from a model of the models, called MAGICC, and not the actual model results (see 2001 TAR Appendix 9.1). Seems like the paper might have mentioned that tiny detail …
This whole “model of the model” concept is actually quite interesting. According to the IPCC, any of the various GCMs can be modeled with good fidelity using only six variables (forcing from doubling, temperature change from doubling, magnitude of warming needed to collapse the THC, vertical diffusivity, ratio of the equilibrium temperature changes over land versus ocean, and land/ocean and Northern Hemisphere/Southern Hemisphere exchange coefficients).
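To make the “model of a model” idea concrete, here’s a toy sketch of my own in Python. It is emphatically not MAGICC, just a one-box energy-balance illustration (with made-up parameter values) of how a handful of tuned numbers can emulate a GCM’s global mean temperature:

```python
# Toy one-box energy-balance emulator -- an illustration of the concept only,
# not MAGICC. Parameter names echo some of the six quantities listed above;
# the values and the physics are deliberately simplified.
import numpy as np

def emulate_global_temp(forcing, f2x=3.7, dt2x=3.0, mixed_layer_m=60.0,
                        ocean_uptake=0.6, dt_years=1.0):
    """Return global-mean temperature anomaly (K) for a forcing series (W/m^2).

    f2x           : forcing from CO2 doubling (W/m^2)      -- "tuned" per GCM
    dt2x          : equilibrium warming from doubling (K)  -- "tuned" per GCM
    mixed_layer_m : ocean mixed-layer depth (m), 60 m as cited above
    ocean_uptake  : crude deep-ocean heat-uptake coefficient (W/m^2/K)
    """
    lam = f2x / dt2x                                      # feedback parameter, W/m^2/K
    heat_cap = 1025.0 * 4180.0 * mixed_layer_m / 3.15e7   # W*yr/m^2/K
    temp, out = 0.0, []
    for f in forcing:
        temp += (f - (lam + ocean_uptake) * temp) / heat_cap * dt_years
        out.append(temp)
    return np.array(out)

# Forcing ramped linearly to 4 W/m^2 over a century:
ramp = np.linspace(0.0, 4.0, 100)
print(emulate_global_temp(ramp)[-1])   # ~2 K of warming in this toy setup
# Re-"tune" dt2x and the emulator tracks a more (or less) sensitive GCM instead.
```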
The fact that the model results can be simulated using only these six variables brings up interesting questions, such as:
How are the differences in feedbacks simulated by MAGICC? None of those variables directly affects the clouds, for example.
If, as the IPCC claims, MAGICC can successfully simulate all of the models … why are we messing with the models? Why not just run MAGICC and be done with it?
If the models can be successfully emulated with only six variables, and yet contain hundreds of parameters, doesn’t this mean that they are wildly overparameterized?
As always, more questions than answers …
PPS - Using a single chosen baseline 1990 - 2000 scenario as the IPCC does, all of the scenarios and all the models are identical until 2000. The reason the modeled results of the scenarios are all low in 2000 is not from poor model tuning or any other such reason involving models as you speculate, jshore.
It is not that the models or their tuning are wrong. It is because the first decade of all the scenarios is identical and identically wrong (from the models’ perspective). That proves nothing about the models. All that proves is that the IPCC is not very good at picking a single scenario to represent reality. Which they know, and that’s why they give us a whole host of scenarios during the actual period of interest (which of course is the future, not the past).
To test the model responses to the various scenarios, it is useless to look at the single scenario that the IPCC happened to choose to represent 1990-2000. They could have picked any scenario for that, and they make no claims about the accuracy of the chosen scenario. To see the range of how the models respond to the scenarios, we need to look at their post-2000 performance, when the scenarios are different.
The actual observed temperature trend 2000-present is not statistically different from zero. Since the IPCC scenarios do not show the temperature falling 2000-2006, I fail to see how this lack of warming substantiates the claim in the paper that the IPCC may have “underestimated the change” in temperature. In order to underestimate a change of zero, you’d have to predict cooling, and the IPCC has not done that.
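For what it’s worth, here is the kind of check that lies behind “not statistically different from zero”, sketched with synthetic numbers rather than the actual temperature record (fit a trend, then widen its standard error with the effective-N adjustment quoted earlier in the thread):

```python
# Trend significance sketch: OLS slope with an autocorrelation-widened
# standard error. Data are synthetic stand-ins for ~6 years of monthly anomalies.
import numpy as np

def trend_with_adjusted_interval(y):
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
    r = np.sum(resid[:-1] * resid[1:]) / np.sum(resid**2)     # lag-1 autocorrelation
    n_eff = n * (1 - r - 0.68 / np.sqrt(n)) / (1 + r + 0.68 / np.sqrt(n))
    se_adj = se * np.sqrt((n - 1) / (n_eff - 1))              # inflated standard error
    return slope, slope - 2 * se_adj, slope + 2 * se_adj      # ~95% interval

rng = np.random.default_rng(1)
months = 72                                                    # six years
y = 0.001 * np.arange(months) + rng.normal(0.0, 0.1, months)   # synthetic anomalies
print(trend_with_adjusted_interval(y))   # "significant" only if the interval excludes 0
```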
And yes, that is far too short a period to make any hard conclusions, it’s only six years … which is what I said to start this off. They act like they are considering sixteen years of forecasts, but ten years of that is a hindcast.
According to the paper on the archeological find (the one discussed in the ClimateAudit thread you linked to), “Current glacier retreat is unprecedented since at least that time”, where “that time” refers to ~5000 years ago and the glacier in question is the one in the Swiss Alps at Schnidejoch.
According to the latest IPCC report, “Average Northern Hemisphere temperatures during the second half of the 20th century were very likely higher than during any other 50-year period in the last 500 years and likely the highest in at least the past 1,300 years.” I.e., it is very likely (estimated >90% probability) that the Northern hemisphere temperatures are unprecedented over the last 500 years and likely (estimated >66% probability) that the Northern hemisphere temperatures are unprecedented over at least the last 1300 years, which would in particular include the “Medieval Warm Period”.
Thanks for the detail. It sounds good…and I agree that checking that it gives the same result on both monthly and yearly data is a good idea.
Well, I think the basic point is that the climate models are, as they say, physics-based models. However, there are also some features that are parametrized. These parameters do not really allow detailed tuning to fit the historical global temperature record…but, clearly, there may be a tendency over time to choose the parametrizations and parameter values that give the basic features (such as a climate sensitivity) that do a reasonable job of giving agreement with the historical temperature record…although, as I have noted before, there are many other things that have to be gotten right, including some that are a lot more under direct control of the parameters.
For example, your Kiehl reference claimed to find evidence that there is some correlation with models that have a lower total forcing (due to having a stronger negative aerosol forcing) having a higher climate sensitivity. Assuming Kiehl is correct, this could imply that those groups whose models have stronger aerosol forcings have tended to evolve toward having higher climate sensitivity so that the agreement with the 20th century temperature record is good. On the other hand, this correlation could also exist for some other reason. (For example, the aerosol forcing is complex enough that I don’t really know how it is calculated…It is not simply a radiative transfer problem as greenhouse gases basically are. So, I have also wondered if it was possible that there could be a physical reason why models with higher climate sensitivity would also have a stronger aerosol forcing. This is the sort of question that I would have to ask someone who understands better how the aerosol forcing is calculated to address.)
Well, both the analyses of sea level data that they point to were conducted since 2004. I don’t think going from the raw data to the final product is necessarily trivial. I don’t know exactly what data were available in 2000. I just tried looking in the TAR and I did find one figure showing the years 1993-1998 with the satellite altimeter data. That’s the only detailed data I saw for the 1990-2000 time period on my quick look through the relevant section. Do you think that they somehow tuned to this data? (If so, not very well.)
Rahmstorf et al. discuss this pretty clearly. They say that the CO2 follows the scenario almost exactly, although “the level of agreement is partly coincidental, a result of compensating errors in industrial emissions [based on the IS92a scenario (1)] and carbon sinks in the projections.” And, contrary to what you say about the scenario being low, they say “the concentration of other greenhouse gases has risen more slowly than assumed in the IPCC scenarios”.
I lost you…I interpret the gray region as showing the range of model results for the specific scenario. Yes, there is only one scenario, but there are a variety of model results for that scenario.
Well, I won’t defend this sort of citing except to say that, unfortunately, it is done a lot in science. In fact, I have almost never seen a paper cite where a certain result appears in the paper they are citing; if it is a whole book, then sometimes if they are feeling really generous, they might give you a chapter or even a page number but it is by no means the standard practice…particularly if they are citing several different parts of that work throughout their paper.
Hmmm…Here is another place where you may be at a disadvantage having not worked in another scientific field before this one. It is, in fact, not uncommon when one has a very complex, heavily CPU-intensive model and one is interested in extracting just one or a few features (such as the global average temperature) to do so by fitting the results of this model to a simpler model that one can then play around with more readily. In fact, my first work in science, during summers in high school and college, was doing just this sort of thing for the electronic band structure of materials.
To answer each of your questions in turn:
(1) Well, what you have to remember is that, while the original model is much more mechanistic, the MAGICC one is more phenomenological. So, all the stuff involving clouds is presumably subsumed in the resulting values for some of the parameters in MAGICC (presumably the temperature change from doubling one and perhaps others too).
(2) The point is that it is not clear a priori what values to choose for the parameters in MAGICC. So, the GCMs provide this input. Furthermore, MAGICC is presumably limited in what it can simulate, e.g., it presumably doesn’t give information about the temperature and precipitation and cloudiness changes at any given location on the globe. It probably just spits out a few things like the global temperature.
(3) No…not necessarily. For one thing, you are emulating only certain things. I.e., you get much less information out of MAGICC than you do out of the full GCMs (as I noted in the previous answer). For another, the purpose of GCMs is to provide a more physical model where the parameters are not made to be tuned to give, say, a certain result for the average temperature, but are used to represent physical processes. Most of them will presumably be constrained by these processes.
Well, while it may technically be true that some of the data was available by the time the model development was complete and they were run in 2000, I don’t think there is any evidence that they had been altered in order to agree with available data from 1990 to 2000. Of course, this may not be the ideal way of going about things if one is facing a very suspicious audience who is always looking for ways that they might be “cheating,” but it does not seem unreasonable to me, at least for a first look at how things are shaping up. I do agree that clearly more time is needed to draw firmer conclusions.
My friend, I regret to report you’re starting to sound like Rahmstorf et al. “Technically true”? My goodness. It is not “technically” true the data was available, it is simply true.
Nor am I accusing them of altering anything but the IPCC graph, and you can compare their graph to the IPCC graph and see that they’ve changed one thing.
Nor am I looking for “evidence” of anything. I’m just trying to evaluate their results, and I can’t do that. We still don’t know whether the models are forecasting for six years or sixteen years.
The question is simple. When does the in-sample data (the data they are tuned against) end, and the out-of-sample data start? Until we know that, we can’t evaluate a tuned model. They agree the models are tuned, just not to the “more recent” data. Until we know what years those “more recent” data are and for which datasets, we cannot evaluate their results.
jshore, this is ultra simple stuff, you must know this. You can’t test a tuned model on in-sample data, that is to say, on the data it’s been tuned against. It’s meaningless. If by “more recent” they mean “post 1998”, that’s very different than if they mean “post 1991”. And we don’t know which one it is.
It also affects our starting point. Let’s say the model is tuned against data up to say 1997. Any model results for dates after that are a forecast, because the model is independent of the data. So if we are interested in how well the models can forecast (as is the focus of the Rahmstorf et al. paper), we start the clock running in 1998, and start looking at their 1998 and later forecasts. That’s what a “five year forecast” means, five years from when the model and data were independent. Otherwise, it’s a hindcast. It’s no use looking at what the model says about 1995, it’s in-sample.
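To spell out what I mean, here’s a generic sketch, nothing to do with any actual climate model; the cutoff year and the numbers are invented:

```python
# In-sample vs. out-of-sample scoring: "tune" a trivial model on data up to a
# cutoff year, then judge it only on the years after the cutoff. The cutoff
# year is exactly the number Rahmstorf et al. never state.
import numpy as np

def fit_and_score(years, obs, cutoff_year):
    train = years <= cutoff_year
    coef = np.polyfit(years[train], obs[train], 1)   # "tune" on in-sample data only
    pred = np.polyval(coef, years)
    in_sample_err = np.mean((pred[train] - obs[train]) ** 2)
    out_of_sample_err = np.mean((pred[~train] - obs[~train]) ** 2)
    return in_sample_err, out_of_sample_err

years = np.arange(1980.0, 2007.0)
rng = np.random.default_rng(2)
obs = 0.015 * (years - 1980) + rng.normal(0.0, 0.1, len(years))   # synthetic anomalies
print(fit_and_score(years, obs, cutoff_year=1997))
# Only the second number says anything about forecast skill; the first is
# guaranteed to look good because the model has already seen that data.
```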
Thanks as always, I’ll discuss some of your other points separately.
Actually, one of the advantages of having worked with and programmed computers in a variety of disciplines is that I have *both created and used* simpler models to run in lieu of processor hogs. So you can skip the insinuations about my lack of experience, you don’t have a clue what I have done in my life. What I am not aware of is another instance of using a model which is tuned to a model which is tuned to observations. If you have an example of that, I’d be interested. But I was actually attempting to point at something slightly different.
While it is wonderful that the full GCMs provide a wide range of results, the debate doesn’t rage around the question of future changes in the vorticity of the 500 mb level atmospheric winds. It rages around the future evolution of the global surface air temperature. That’s what the headlines and the studies and the stories are about. That’s what the models are tuned to reproduce. And the simple model is able to reproduce that quite well.
I was also considering the simpler model in terms of the Kiehl paper I cited earlier. Kiehl, a very prominent climate modeler, says:
I note that the parameters used in the simple MAGICC model to replicate the more complex models are exactly those things: forcing, sensitivity, and ocean heat uptake (with ocean heat uptake split into different parameters).
As Kiehl pointed out, forcing (∆Q) and climate sensitivity (∆T2x) are inversely related. And Kiehl says that because the models are tuned, if the forcings are large, the climate sensitivity is small.
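A back-of-envelope illustration of that compensation (the numbers below are mine, not Kiehl’s): the hindcast warming scales roughly with the product of total forcing and sensitivity, so a big ∆Q with a small ∆T2x lands you in about the same place as a small ∆Q with a big ∆T2x:

```python
# Compensation between total forcing and climate sensitivity -- illustrative
# numbers only, not taken from Kiehl's paper.
F2X = 3.7   # forcing from CO2 doubling, W/m^2

def equilibrium_warming(total_forcing, dt2x):
    """Equilibrium warming for a given total forcing, ignoring ocean lag."""
    return (dt2x / F2X) * total_forcing

print(equilibrium_warming(total_forcing=2.2, dt2x=3.0))   # ~1.8 K
print(equilibrium_warming(total_forcing=1.6, dt2x=4.2))   # ~1.8 K: same hindcast, very different future
```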
This means that the GCMs obtain the values they have for the parameters (sensitivity and ocean heat uptake) either by tuning or by specification. (For example, the IPCC says that all the models can be simulated using the same ocean mixed layer depth, 60 metres).
And this brings me back to the implications of having a model tuned to a model tuned to the observations. Given that the critical parameters are either specified or tuned for, the simple model can be tuned against observations to give the critical parameters in exactly the same way the GCMs are tuned to give the critical parameters. Which is why I asked above about whether in that case we need the complex models to simulate the future evolution of the surface air temperature.
I would say that if you feel unqualified to hold and defend your own conclusions about the IPCC’s claims, then you probably have no business participating in debates such as this one.
But come on. We both know that you do feel qualified to hold and defend positions regarding the IPCC’s claims.
The problem is that on this issue (the “unprecedented” argument) the pro-AGW position would appear to be untenable. It looks to me as though there is nothing tangible about current weather or climate that is unprecedented in any meaningful sense.
I suspect you know this too, but it just sticks in your craw to admit it.
Your discussion here of “in-sample data” vs. “out-of-sample” data is only strictly applicable to empirical models that are explicitly tuned to data. The climate models are physically-based models. Yes, they have some parameters…and, yes, some of the benchmarking done on the models presumably involves comparing to the global instrumental temperature record (although this is only a small part of the benchmarking). But, the distinction you are trying to make here is not so clear because you are applying the terms where they are not really so applicable.
Where did they say that? What they said is
You seem to have a habit of coming up with a questionable interpretation of what people have said and then running with that interpretation as if it was actually what they said. I think the interpretation of this sentence that they would agree is closer to what they meant is something more along the lines of: “The climate models are physics-based models that are not explicitly tuned to reproduce the temperature record. Furthermore, since the models are developed over several years and since the temperature record is sufficiently noisy that it is difficult to see any trend over a few years, to the extent that benchmarking against the global temperature record could affect some decisions regarding the values of parameters that describe various physical processes that are parametrized, it is unrealistic to expect that the last several years of the temperature record prior to the unified publication of the model results in TAR could have significantly affected the choice of parameter values in the models.”
Clearly, the only temperature data that one can rigorously prove had no effect whatsoever on any decisions in the development or modification of the models are the data that were obtained after their model development was complete. I imagine the exact date for this is different for different models.
Even if we assume the worst case that everything up to 2000 should be considered to be a pure “hindcast”, we are at the end of the day left with the fact that the instrumental temperature record is generally running at the high end of what the combination hindcast / forecast over the last 16 years shows.
There is currently insufficient data to make any statement about how the model is doing purely on data after 2000 that could not possibly have influenced the model development in any way, although this will become clearer over time. In the absence of having sufficient pure forecast data to compare to, I still think it is useful to do what they did…although I agree that a stronger test is to use only future data that could not possibly have influenced any aspect of the model development in any way whatsoever.
jshore, thanks for your response. I asked because your statement was so different from what I usually hear about the models.
The usual claim is that the climate models do not successfully capture year-by-year variations, but over longer and longer periods of time they become more and more accurate, eventually being able to successfully forecast the climate for a century or more.
Your statement, on the other hand, was that “eventually there will be statistically-meaningful deviations from any particular model.”
Climate is often defined as average weather over 30 years. So while I would expect “statistically-meaningful deviations” in the short run, given the modelers’ claims I would expect that thirty years would be much more than enough for those deviations to even out. But that does not appear to be the case for the Stott model.
Since your statement seemed to be so much at odds with what the climate modelers are saying, I thought I should ask for clarification of your statement and its implications. Instead of a reply explaining your statement, you merely suggest that I don’t understand that climate models give a range of results …
Also, it doesn’t necessarily tell us anything to look at the “distribution of projections” if the individual models are showing significant deviations from reality. The models are all closely related, which means that if one of them is reading high, it is very possible that the rest are reading high. This may not change the spread of the projections, just shift them all up or down, and the only thing that changes may be the average.
w.
PS - you seem to connect the range of the model results to the range of climate sensitivities. In fact, as Kiehl showed, the sensitivities are tuned for, and the range of sensitivities serves to reduce the range of model outputs, not increase them as you seem to imply above.
Thanks for your response, intention. Let me try to explain what I mean more clearly. I agree with the idea that the models are not meant to capture the year-to-year internal variability but more to capture the long term trends. (Both the model and real world show year-to-year variability and one can compare how certain characteristics of this “noise” compare between model and real world, but since the actual individual ups and downs are very sensitive to initial conditions, one cannot hope to have those compare well between model and data, modulo a recent paper by the Hadley group that argues that they think they can predict some of the short term variability over a period of a few years by initializing their model carefully to the current conditions.)
As for the models becoming more and more accurate over longer periods of time. Well, I would say in a general sense that is true. However, as you know the IPCC considers the likely range for equilibrium climate sensitivity to be between 2 and 4.5 C. So, clearly, if you run a model that has a 4.5 C climate sensitivity and it turns out that the actual climate sensitivity is 3 C, then (even if the world closely follows the emissions scenario assumed in the model) there will eventually be a statistically-meaningful divergence between the model and the real world. [It is also an oversimplification to characterize a climate model or the world by one parameter…its climate sensitivity, but to simplify the discussion, let’s suppose you can.] Alternately, if we still assume the climate sensitivity is 3 C in the real world but run a model that has a 2.7 C climate sensitivity then again you will eventually expect to see a statistically-meaningful divergence between this model and the real world, although since the difference is smaller it should take quite a bit longer.
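To put some rough numbers on that (purely illustrative assumptions on my part, not output from any actual model):

```python
# How long before a small sensitivity mismatch produces a statistically
# meaningful divergence? Purely illustrative numbers.
def years_to_separate(trend_a, trend_b, noise_sd, threshold_sigmas=2.0):
    """First year at which the accumulated gap between two linear trends
    (in K/yr) exceeds threshold_sigmas times the interannual noise."""
    for year in range(1, 500):
        if abs(trend_a - trend_b) * year > threshold_sigmas * noise_sd:
            return year
    return None

# Suppose a 3.0 C world vs. a 2.7 C model works out to ~0.020 vs ~0.018 K/yr
# of forced warming, with ~0.1 K of year-to-year noise:
print(years_to_separate(0.020, 0.018, noise_sd=0.1))   # on the order of a century
# A 4.5 C model in a 3.0 C world (~0.030 vs ~0.020 K/yr) separates much sooner:
print(years_to_separate(0.030, 0.020, noise_sd=0.1))   # a couple of decades
```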
My point is that any one model is unlikely to be the absolutely correct answer (in fact, if you are able to discriminate with high enough precision, it is certain not to be). They are all estimates. This is why the IPCC gives a range of future temperature values even for a given fixed emissions scenario.
Have I explained what I meant more clearly now?
Well, I think if you look at the models you will see that over long enough periods of time, there are significant differences between what they are predicting simply because of the range of climate sensitivities. Of course, it is true that they may all be reading too high and the actual climate will deviate out of the low end of the whole model envelope. On the other hand, it is also possible that they may all be reading too low and the actual climate will deviate out of the high end of the whole model envelope.
Well, that is not how I would put it. First of all, what Kiehl claims to have shown is a rough inverse correlation in the models between the climate sensitivity and the total historical forcing (with the main contributor to the uncertainty in the forcing being the aerosol forcing) such that the range of model results over the historical record is not as broad as it would be if the models had those sensitivities but all used the same total historical forcing. He never explicitly claims that the sensitivities are tuned for although he says that one interpretation of his results is that there has been a tendency for the modeling groups whose models estimate a larger historical forcing to adopt a lower sensitivity, quite possibly because of benchmarking with historical data. [It has long been pointed out that the 20th century temperature record does not provide very strong constraints on the climate sensitivity precisely because there is so much uncertainty in the aerosol forcing.]
Another possible interpretation (which I must admit I just made up and don’t know if it makes sense since I only vaguely understand what might go into computing the aerosol forcing) is that this wasn’t the reason for the correlation at all but merely that there actually tends to be a correlation whereby models with higher climate sensitivity also tend to produce larger (in magnitude…i.e., more negative) aerosol forcings. Given that part of the aerosol forcing is due to the indirect forcing resulting from their effects on clouds, this hypothesis does not seem too unreasonable to me, although again, it is admittedly based on ignorance of any real details of how aerosol forcing is calculated by the models. Presumably someone who is an expert in this could probably very quickly say whether this hypothesis makes any sense or not (although it may take more work to determine whether or not it is actually correct).
At any rate, back to your question: I guess you could phrase it the way you did, i.e., that the range of forcings in the different models would have produced a larger spread in the hindcast rise in temperatures if not for the range of climate sensitivities that accompanied them. However, as a practical matter of discussing what this means for the future, I would tend to phrase it the other way around. I.e., I would say that the range of climate sensitivities would have produced a larger spread in the hindcast rise in temperatures if not for the fact that the models with higher sensitivities also tended to have somewhat lower total forcings (because of more negative aerosol forcings). However, in the future, it is almost certain that aerosol forcing will continue to become less of a player because of the cumulative buildup of CO2 whereas aerosols get washed out of the atmosphere quickly and there are good reasons why countries try to limit them (i.e., they cause air pollution). I assume that this is reflected in the IPCC emissions scenarios. As aerosols become less important, the spread in the models presumably becomes somewhat larger.
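Here is a crude numerical illustration of that last point (again, my own made-up numbers, not anything from the IPCC):

```python
# As the (negative) aerosol forcing shrinks relative to CO2, the compensation
# goes away and the spread between low- and high-sensitivity models widens.
# All numbers below are invented for illustration.
F2X = 3.7   # forcing from CO2 doubling, W/m^2

def warming(ghg_forcing, aerosol_forcing, dt2x):
    return (dt2x / F2X) * (ghg_forcing + aerosol_forcing)

# Historical-like case: the high-sensitivity model also has the stronger aerosol offset
print(warming(2.6, -1.2, dt2x=4.0), warming(2.6, -0.6, dt2x=2.5))   # ~1.5 K vs ~1.4 K
# Future-like case: the same two sensitivities, but aerosols now a small term for both
print(warming(6.0, -0.3, dt2x=4.0), warming(6.0, -0.3, dt2x=2.5))   # ~6.2 K vs ~3.9 K
```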
Well, my flippant answer would be that I am tempted into participating by the fact that people like you, who I feel are less qualified than I am to hold and defend their own conclusions, seem to still want to do so, so I feel I have no choice but to enter into the debate.
Well, I feel qualified to discuss what the IPCC says and to explain what they say and what I think it means. Do I feel qualified to hold independent positions? Well, maybe on a few issues where I feel like I do have the relevant expertise but not on many issues. The paleoclimate stuff is very much data driven and I have not looked in detail at much of the underlying data…so I feel my expertise is fairly limited.
Well, you yourself found a discussion of an archeological find from 2003 that actually led to a paper noting that at least in that one particular part of the Swiss Alps the find seems to imply that this particular pass is ice-free for the first time in at least 5000 years. The authors actually use “unprecedented” in the abstract. Admittedly, this is only one study and some other studies in the Alps have reached different conclusions (i.e., that implied greater local warmth during some past warm periods), although as the authors note theirs has the advantage of having a smaller lag time in the rapidly warming current climate; so the other studies may effectively be gauging the climate averaged over, say, the last half-century or so, rather than the current climate over say the last decade or so. This result is all the more surprising given that the evidence for a Medieval Warm Period and warmth around Roman times is stronger in that part of the world than most other parts of the world…and that it had long been accepted that it is quite possible that many particular parts of the world may have been warmer than they are now at some point in the last 2000 years but that the important feature of the current warmth is its unprecedented synchronicity throughout the world, meaning that the overall global (or northern hemispherical) warmth is unprecedented even if the warmth in certain individual regions is not.
I am a little confused about what the evidence is that makes you feel that these views of the IPCC and the NAS are “untenable”. I can’t really recall you presenting any.
Thanks for the psychoanalysis but I think you have misinterpreted here. I have been careful to state, as the IPCC and NAS have, that given the uncertainties associated with the temperature proxies and the reconstructions based on these proxies, the conclusion that the current warming is unprecedented (in the last 1000~1300 years) is not one that we are very certain about, but there does seem to be a considerable amount of evidence pointing in this direction.
In that case, there would be no need to respond to anyone on the merits. Just make 1 post in each global warming thread stating your view that non-climatologists such as me and yourself are not qualified to hold and defend opinions on AGW. That would at least be credible. Instead of trotting out the credentials argument when you are losing steam on other issues.
I have in other threads. Essentially, the claim that northern hemisphere temps are unprecedented in the last 1000 years is based on a number of proxy studies. i.e. looking at tree rings and other things. But when you study the data carefully, it emerges that these proxies generally miss most of the 20th century warming. So there is good reason to believe that these proxies would have missed a similar or greater warming 1000 years ago. So these proxy studies do not show that 20th century warmth is unprecedented over the last 1000 years.
Anyway, the source that you seem to put so much stock in admits that we don’t really know that current temperatures are unprecedented over the last 1000 years:
Note, even assuming your claim about divergence of the proxies from the instrumental record is true, that the quote that I gave you from the NAS is phrased totally in terms of proxy records…They don’t mention instrumental temperature. And, just in case you think that is still unclear, the study of Osborn and Briffa looked only at proxy records, without any reference to instrumental temperatures, and they concluded
And, this fact differs exactly how from my statement “I have been careful to state, as the IPCC and NAS have, that given the uncertainties associated with the temperature proxies and the reconstructions based on these proxies, the conclusion that the current warming is unprecedented (in the last 1000~1300 years) is not one that we are very certain about, but there does seem to be a considerable amount of evidence pointing in this direction”?
Your statement exaggerates the level of confidence. You state that the conclusion is “not one that we are very certain about,” which suggests the possibility that we are fairly certain. Which we are not.
But anyway, there’s no need for a semantic debate. It would appear that at a minimum, there is no basis for reasonable confidence that anything tangible about today’s weather or climate is unprecedented in any meaningful sense.
jshore, you have a knack for picking studies which either rely on Michael Mann’s discredited work, or on unarchived or “gray” datasets, or on bristlecone pines that the NAS said not to use in proxies, or on proxies that have been hand-picked without ex-ante selection rules, or (as in the current case), all of the above:
This habit of yours, of relying on studies which you have not personally investigated, is very foolish in the field of climate science. I fear that the days when we could conclude that if an article was peer-reviewed that it was at least reasonable science are long dead. At first I found this habit of yours amusing, and I just assumed that you didn’t know better. But we’ve been discussing this for a while. By now, you should know that many of these millennial reconstructions are just the last study recycled.
The Osborn/Briffa paper is an excellent example of this. Gerd Burger, a lead author and contributor to the IPCC TAR, commented on the O/B paper:
He goes on to say that their
Burger repeated the analysis using the proper methods, and concluded that the
However, this only scratches the surface of the problems with the paper. There are two stripbark (bristlecone) pine series in there, out of only 14 series. The Yamal series has been substituted for the Polar Ural series, which showed a very warm MWP. The Dunde series, which Thompson has repeatedly refused to archive, is included covertly as a part of the “China Composite”. The Bona Churchill δ18O results have been updated, but the update has not been used. Other unarchived records are also included in the “China Composite”. The list is long, bro’, long.
The proxy selection criteria have been set very low … otherwise the Mann PC1 bristlecone proxy would have been excluded. The authors’ comment on this choice?
Perhaps they didn’t get the memo that the reason for proxy selection criteria is to identify bad proxies … and that the ones that get excluded are by definition bad proxies, because they don’t correlate with temperature or for some other reason.
You claim to be a scientist, jshore. Act like one. Next time, before citing the 16th or 23rd recycling of bad proxies, bad math, and bristlecone pines, do your damn homework. You can’t depend on the reviewers or the climate scientists to do it for you.
Your constant pointing to studies dominated by bristlecones and unarchived, hand-picked proxies reveals an intellectual laziness, an unwillingness to go the hard yards and do the research required, that I find surprising given your attention to detail in your posts. Why do I have to go through each of your bogus bristlecone studies and point out to you, over and over, the same unarchived proxies, the substitutions of a proxy without MWP warmth for one showing a high MWP, the bad math, and most of all, the same old bristlecones?
There are a couple of interesting analyses of the O/B paper and the Burger comment here, here. Read them, and then come back and report to us just how wonderful, how independent, and how thoroughly researched and substantiated the O/B paper really is.
Or, you could say “No, I don’t want to analyze the studies, I don’t choose to do the hard work of actually researching the individual proxies, I don’t want to find out why the stripbark pines were rejected, I don’t care to delve through the morass of bad statistics. I believe that all of those climate scientists are noble souls without any agenda, so I’ll take their work at face value.”
Which is fine … but if you are unwilling to do the hard work, why should we pay any more attention to your claims?
Well, I admit that I was a bit careless in what I quoted there since, as your bolding points out, their abstract does mention the instrumental temperature record. A better thing to quote is their conclusion where they make it clearer what they conclude from the proxies alone and what they conclude by considering the proxies in light of the recent instrumental temperature record:
Well, I am not going to quibble about semantics. The IPCC uses wording that it clearly defines the estimated (subjective) probabilities of the statement being correct (namely, they say that it is likely that the current warmth is unprecedented in the last 1300 years, where “likely” means that they estimate that this statement has at least a 66% chance of being correct…and presumably less than 90% because they chose not to use “very likely”). The NAS panel felt that given the difficulties of quantifying the uncertainties, they could not make a definite estimate of the probability of the late twentieth century warmth being unprecedented over the last 1000 years but noted that what they can say is that:
The difficulty of analyzing these proxy temperature records, along with the fact that any evidence one way or the other on whether the temperatures are currently unprecedented is only circumstantial evidence anyway (i.e., it is possible that they could be unprecedented but the cause is still not AGW or it is possible that they could not be unprecedented but the cause of the current warming is still in fact AGW), makes this line of evidence among the least compelling lines of evidence in support of AGW (at least for me). And, frankly, I think that it has attracted so much attention from the “skeptic” community precisely because it is a weak point where they can easily raise lots of questions and point out lots of uncertainties…and then often deceptively imply that if this one line of evidence is wrong then the case for AGW collapses.