Global warming has stopped?

As you know, the glacial–deglacial transitions are understood to be triggered by orbital oscillations but amplified (and the hemispheres probably synchronized) by CO2. CO2 does lag behind the start of each transition, as best can be determined from the data, and as theory suggests it probably should, but it is changing during most of the temperature transition.

But, more importantly, the climate sensitivity is expected to depend essentially on the radiative forcing, independent of the mechanism of that forcing. In fact, it is often expressed not as a change in temperature per doubling of CO2 but more generally as a change in temperature per W/m2 of radiative forcing. And the radiative forcing for a given change in CO2 concentration is known to very good accuracy (~3.8 W/m2 for a doubling of CO2 levels).
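
To put numbers on this, a commonly used simplified fit for CO2 forcing is ΔF = 5.35·ln(C/C0) W/m2, which gives about 3.7 W/m2 per doubling, in the ballpark of the figure above. A minimal sketch (the 0.8 K per W/m2 sensitivity below is purely an illustrative number, not a settled value):

```python
import math

def co2_forcing(c, c0=280.0, alpha=5.35):
    """Simplified logarithmic fit for CO2 radiative forcing in W/m^2.
    c and c0 are concentrations in ppm; alpha is an empirical constant."""
    return alpha * math.log(c / c0)

def warming(delta_f, sensitivity=0.8):
    """Equilibrium warming in K; sensitivity in K per W/m^2 (illustrative)."""
    return sensitivity * delta_f

df = co2_forcing(560.0)                      # one doubling from 280 ppm
print(round(df, 2), round(warming(df), 1))   # 3.71 3.0
```

Note that the forcing depends only on the ratio C/C0, which is why sensitivity is quoted "per doubling."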

So, what the estimates of climate sensitivity do is take the estimated difference in forcing between the last glacial maximum and now (due mostly to the change in albedo from the glacial extent, secondarily to the changes in CO2 and methane, and then with a small contribution from changes in atmospheric dust levels) and compare it to the estimated change in temperature that occurred.
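
As a back-of-envelope sketch of that comparison (the round numbers below are illustrative stand-ins for the uncertain LGM estimates, roughly the magnitudes used in such studies):

```python
# Illustrative round numbers -- the real estimates carry large error bars
delta_t = 5.0    # LGM-to-present warming (K)
delta_f = 6.5    # LGM-to-present forcing (W/m^2): albedo + GHGs + dust
lam = delta_t / delta_f   # sensitivity in K per W/m^2
ecs = lam * 3.8           # scale by the ~3.8 W/m^2 per CO2 doubling
print(round(lam, 2), round(ecs, 1))  # 0.77 2.9
```

The point is just the arithmetic: a temperature change divided by a forcing change gives a sensitivity, which can then be rescaled to the forcing of a CO2 doubling.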

By the way, Hansen, who is at least one of the people responsible for this estimate, has pointed out more recently that this is likely to actually underestimate the equilibrium climate sensitivity as we define it today since, in that calculation, the change in albedo due to the change in ice extent is considered to be a forcing…whereas when we do the computations today we would consider it to be a feedback. (Although there are some counterarguments to this that I can’t quite remember…but may basically boil down to the point that the change in ice extent between now and a warmer world will be a pretty small effect compared to what it was between the ice age and now.)

Sheesh. The IPCC statement is based mostly on models.

We are going around in circles here. In fact, it is based in large part upon the constraints (or, more precisely, probability distributions on ECS values) obtained from studying the observational data. Yes, some modeling is involved in the interpretation of the climate sensitivity in most of these studies but that is very different from using the models to predict the sensitivity.

And, by the way, just for the heck of it, I looked up that Lindzen and Giannitsis paper (see here for a PDF version). It turns out that they also use a MODEL to glean the climate sensitivity from the observational data. Not only that, but they do this on a rather questionable observation, with arguments based at least in part on claims about the satellite vs. surface record that are now understood to be wrong…and their conclusion for the ECS is not quantified very well. (All they say is, “While one wouldn’t want to use such results for a precise determination of climate sensitivity, it is clear that for either choice of nu, best agreement with the observations is obtained for low values of gain (characteristically less than unity).”)

I must admit, I admire the balls of the IPCC in redefining the meanings of probability. For hundreds of years, scientists have defined probabilities in scientific results like this:

99%: Very Likely

95%: Likely

Less than 95%: STATISTICALLY AND SCIENTIFICALLY MEANINGLESS

Now, the IPCC has come along and redefined these as:

99%: Virtually Certain

95%: (no term)

90%: Very Likely

66%: Likely

Like I said, I salute their chutzpah … but I despise their lack of scientific integrity in the change. The idea that a scientific conclusion with (as best as we can determine) a one chance in three of being wrong is “Likely” to represent scientific truth is a corruption of the scientific method.

w.

Modern-day construction of massive structures in earthquake zones and the like is also based on models.

If a model can be shown to agree with what is observed in the real world to within certain margins of error, why wouldn’t you trust it to within those margins of error? Not to do so would be like betting that you’ll be the guy who smokes two packs of cigarettes a day and lives to a hundred.

Interesting if true. Do you have a cite?

There is nothing inherently wrong with using simulations for analysis or prediction. However, there are potential pitfalls. If the simulation is heavily parametrized and you don’t understand some important factor or factors, it’s easy to end up with a model that fits the available data but is wrong.

It’s surprisingly easy to demonstrate that this has happened in the world of climate modeling. As intention has pointed out, there are numerous climate models with very different sensitivities, all of which fit the available data. Therefore, at least some of these models MUST be dead wrong.
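
A toy sketch of how that can happen (all numbers invented): two parametrized fits that match the same sparse observations about equally well, yet disagree badly once you step outside the observed range.

```python
# Five noisy "observations" that look roughly linear (invented data)
obs_x = [0, 1, 2, 3, 4]
obs_y = [0.0, 1.1, 1.9, 3.2, 3.9]

model_a = lambda x: x                      # one parametrization
model_b = lambda x: x + 0.1 * x * (x - 4)  # another; extra curvature vanishes in-range

# Both stay within ~0.5 of every observation...
err_a = max(abs(model_a(x) - y) for x, y in zip(obs_x, obs_y))
err_b = max(abs(model_b(x) - y) for x, y in zip(obs_x, obs_y))

# ...but extrapolated to x = 10 they give 10 vs 16: at least one must be wrong
print(err_a, err_b, model_a(10), model_b(10))
```

Agreement with past data, by itself, does not pin down behavior outside the fitted range.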

Turning to your construction analogy, let’s suppose that there’s never been a major earthquake in recorded history. All we have to examine are ruins of ancient structures. We don’t know very well how those structures were built or what kind of earthquakes they went through. Let’s suppose further that engineers frankly admit that they don’t understand very well how concrete and steel interact under great stress. Would you have the same kind of faith in simulations?

Well, your argument is somewhat circular. Please remember the question I asked, which was “Aside from models, what is the evidence?”

Would you mind explaining the difference?

It looks to me as though they simply cited an earlier paper from 1998 that made use of a simulation.

In the case where you rely completely on the model, you are just running the model with what you believe are the best set of parameters and using it to predict what the equilibrium climate sensitivity is. (Or, alternatively, as they did in one of the climateprediction.net experiments, you can vary the parameters over plausible ranges and look at the distribution of climate sensitivities that you get.)

In the other case, you are not using the model to predict what the climate sensitivity is. Rather you are using the observational data to do so but are using the model to convert between the observational data and the climate sensitivity. I.e., by varying the climate sensitivity in the model, you are seeing what values give the best agreement between the model and the observational data.
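
A toy illustration of that second approach (everything here is invented for illustration: a one-box energy-balance model, a made-up forcing ramp, and pretend “observations” generated with a known sensitivity):

```python
import math

def run_model(lam, forcings, c=8.0, dt=1.0):
    """Toy one-box energy balance: c * dT/dt = F - T/lam.
    lam is the sensitivity in K per W/m^2; c a mixed-layer heat capacity."""
    t, out = 0.0, []
    for f in forcings:
        t += dt * (f - t / lam) / c
        out.append(t)
    return out

forcing = [0.02 * yr for yr in range(100)]   # invented 100-year forcing ramp
obs = run_model(0.8, forcing)                # pretend data made with lam = 0.8

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Vary the sensitivity in the model; keep the value that best matches the data
best = min((rmse(run_model(l / 100, forcing), obs), l / 100)
           for l in range(40, 161, 5))
print(best[1])  # recovers the 0.8 the "observations" were generated with
```

The model here is not predicting the sensitivity; it is the translator between the observations and the sensitivity.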

Yeah…with the words, “This effect is trivially calculated using a model described in [Lindzen and Giannitsis, 1998] wherein account is taken of land-sea coupling, and an ocean mixed layer above a finite thermocline. Parameters are tuned to replicate the annual cycle over both land and sea; this primarily determines the depth of the mixed layer and the land-sea coupling.” In other words, they are using a model to see which values of the equilibrium climate sensitivity give the best fit to the observational data, in the same way that I discussed above. (As I noted, they are also doing so for an observation whose interpretation is questionable to begin with, and in a way that is not very quantitative.)

First of all, the models don’t fit the data exactly…they fit to within some reasonably large error bars. Second of all, what it means is only that the climate sensitivity is not strongly constrained, especially by looking only at this one piece of observational data (the 20th-century global temperature record), because of the uncertainty in the aerosol forcing. The models are only “dead wrong” in the sense that we don’t know the climate sensitivity to high accuracy…i.e., the IPCC says it is likely in the range of 2 to 4.5°C. That’s called “uncertainty” and it is something that we deal with all the time in science.

Fine. Let’s turn to another analogy: We’ve never observed one species evolve into another in recorded history. Sure, we’ve seen some “microevolution,” where a bacterium, for example, becomes resistant to an antibiotic drug. But that is a very far cry from “macroevolution,” where the claim is that a whale becomes a sheep or, over an even longer time, a one-celled organism becomes a human. So, should we believe the science of evolution?

What should the criterion be for trusting the scientists to be the best judges of the science, and relying on the respected scientific organizations chartered for this purpose (like the National Academy of Sciences in the U.S.), versus deciding that we will look at the science ourselves and judge it, whether or not we are qualified to do so? Should it be that we don’t like the conclusions the scientific community has come to?

It seems to me that either approach is susceptible to the problem I outlined earlier: that if you don’t understand an important factor, you are likely to get bad results.

I would want to see the actual paper before evaluating it, but anyway, I’m certainly not claiming that simulations are inherently wrong. See my response to Sage Rat above.

I was talking about individual models, not the IPCC’s aggregation (which is itself questionable for other reasons). But let me ask you this: If it turns out that the climate sensitivity is 1.4, would you agree that the IPCC was dead wrong?

Not necessarily. Certainly if we were being asked to make major changes to public policy on the assumption that the theory of evolution is correct, the entire theory should be looked at with a good degree of care and skepticism.

Personally, I’m pretty much satisfied that evolutionary theory is correct, mainly because it’s a simple theory that explains a lot with few or no ad hoc assumptions, and the competing theories aren’t very plausible. Also, it doesn’t carry the red flags of a scam.

Are you talking about descriptive conclusions or prescriptive conclusions?

intention, first thank you for participating in these discussions. Like many others on this board, I have come away from these global warming threads appreciative of your efforts.

About this,

I, a scientist, have never heard this before. I was not taught this in grad school, nor have I learned it in the years since I left even though I work closely with a team of 15 other scientists with different areas of expertise and educational backgrounds. Do you have a cite?

'nother lurking scientist (geology) checking in to say WTF? and CITE!

A <95% probability being statistically/scientifically meaningless would certainly come as a surprise to Gauss, Poisson, or anyone who’s ever used a normal distribution curve in their work. That’s probably (heh) every field of science, just so you know.

I believe what intention is getting at is the language of statistical hypothesis testing and inference.

For example, suppose you are studying a new drug for heart disease on a group of men aged between 45 and 60 who have heart problems. Of the 20 men who take your drug, 2 have heart attacks and die. Of the 20 men who took placebos, 4 have heart attacks and die.

Can you conclude that the new drug cuts the risk of heart disease by 50%? Not necessarily, since it’s possible to get those results just by random chance even if the new drug does nothing at all.

One needs to evaluate the likelihood that your results could have arisen by chance alone (i.e., if the drug is ineffective). In the parlance of statistical testing, the hypothesis that the drug is ineffective is known as the “null hypothesis,” or H0.

There are various statistical techniques to do this. Suppose that you use these techniques and determine that there was only a 1% chance you would have gotten these results (or more extreme ones) if the null hypothesis were true. That would be considered a reasonably significant result.

Now, suppose that using these techniques, the probabilities were 70/30 instead of 99/1. Normally that would be considered inconclusive.
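
To make the drug example concrete, here is a minimal pure-Python version of the relevant tail probability (a one-sided, Fisher-style exact test; the technique choice is mine, but the trial numbers are the ones from the example above):

```python
from math import comb

def tail_prob(k, n_treat=20, n_placebo=20, deaths=6):
    """P(the treatment group contains <= k of the deaths) under the null
    hypothesis, i.e. if the 6 deaths fall at random across the 40 men."""
    total = comb(n_treat + n_placebo, n_treat)
    return sum(comb(deaths, i) * comb(n_treat + n_placebo - deaths, n_treat - i)
               for i in range(k + 1)) / total

p = tail_prob(2)   # chance of 2 or fewer deaths on the drug by luck alone
print(round(p, 2)) # ~0.33 -- nowhere near a conventional 5% threshold
```

With numbers this small, a 2-vs-4 split is entirely consistent with chance, which is the point of the example.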

I’m not a statistician, but I did take a lot of classes in this area as an undergraduate. One professor in particular told us that 95/5 significance was the basic standard in hypothesis testing. He told us that significance at the 90/10 level was on the margins and was often used by people who wanted to find support for a shaky hypothesis.

Is any of this relevant to the IPCC’s assessments of probability? Possibly. Suppose the IPCC is saying that there is a 90% chance that temperature increases in the 20th century were mainly due to increased levels of CO2 and a 10% chance that they were due to natural causes. Arguably, by analogy with statistical hypothesis testing, this would NOT be considered a very significant result.

At the same time, I’m not sure that it’s analogous. Where does the IPCC’s 90% figure come from anyway? I tend to doubt it’s the result of any sound statistical analysis. More likely it’s just the average of numbers that a few “experts” pulled out of their collective *ss.

Thanks for the reply Brazil84.

Well, I might quibble about whether the adjective “dead” is really necessary…but, yes, if the climate sensitivity falls into a regime that the IPCC considers very unlikely then I would consider them to have been wrong.

Well, nothing is a sure bet in science. And, indeed, I would say it could still be possible that you would get wrong results. However, I think it would be significantly more difficult to get wrong results when using the model in this manner.

Here is the PDF file of the 1998 paper.

Well, I think to a religious fundamentalist, deciding to teach only evolution and not the “alternatives” in the science classroom is a major public policy issue. I know that to you or me that may seem sort of silly, but people get pretty worked up about their children being taught that we are descended from monkeys (or, more correctly, that monkeys and we are descended from a common primate ancestor) when they want their children to absorb the ideas and values associated with a God having created us in his image.

My point in bringing it up is that there are lots of similarities in the way that evolution is challenged and the way that AGW is challenged. Mind you, I am not claiming that the analogy is perfect, and I am willing to admit that there still exists more scientific uncertainty, e.g., in regard to the climate sensitivity, than there is about the theory of evolution (at least as a whole…although there are certainly aspects of it where there is still considerable scientific debate). However, there are a lot of the same sorts of arguments about how it can be tested and whether it is falsifiable…and the same sort of focus on the uncertainties and the things that are still unknown, to the exclusion of the things that are known. And, of course, there is the same sort of division between where the scientific societies and the peer-reviewed literature stand and where the skeptics stand. (I am not claiming the division is of exactly the same quantitative degree…but there is a broad qualitative similarity.)

Well, I think the two get mixed together. I.e., it seems to be those who have a strong dislike of the likely prescriptive conclusions who are most likely to contest the science, which is why you find organizations like the Cato Institute, the George C. Marshall Institute, the Heartland Institute, etc. playing such a major role in not only the debate about actions but also the debate about the science.

I’m wondering, since CO2 is not the only GHG in the atmosphere, shouldn’t this calculation be made based on total incremental increase, rather than just the CO2 increase? For example (made up numbers), if CO2 were 50% of the total GHG and I quadruple the CO2, then the total GHG has only increased by 2X and thus the expected temp rise would be ~1.58X, not 2X. What is the proper way to account for the fact that CO2 is only one GHG component?