Well, I don’t think it is true that the cloud cover is “pegged” to 59%. I think it is free to vary in the model. The point about the 59% is that this is what the cloud cover comes out to be in the model for the current state with the parameters set the way that they are in the cloud parametrization. However, as greenhouse gas forcings are added to the model, I believe that the cloud cover is free to vary.
Of course, it is a legitimate question whether the way that the clouds change with the added greenhouse gas forcings is realistic. This issue of the handling of clouds is indeed the largest remaining source of uncertainty in climate model predictions at the moment. However, as, for example, the climateprediction.net experiment showed, it was not possible to vary the parameters that they looked at over the ranges that they looked at in a way that produced a climate sensitivity below about 2°C. Obviously though, improvement of the handling of clouds is a big priority in improving climate models…and that also necessitates good observations of clouds.
Pretty much for the same reasons that I explained in my previous post. The time variation of the 20th century temperatures is mainly controlled by the time variation in the forcings. The parameters in the model will change how the temperature responds somewhat…especially to the extent that they change the climate sensitivity. But if the climate sensitivity varies by, say, ±15%, it is not going to make a very large difference in the quality of the fit to the 20th century temperatures, especially given the observational uncertainties.
As for the model being overtuned, if that were the case then they would presumably be able to hit the known cloud coverage pretty much dead-on. There is a lot of observational data out there…way more data than there are parameters in the model…so it is not really possible to overtune the model.
As always, a pleasure, jshore. You likely work with computer models that have been proven in the real world, and so you have faith in them. So let me start by saying that unlike men, not all models are created equal.
Further comments follow.
You were doing well until you mentioned being statistically different from ~0.2°C/decade. Since the trend is about zero for the last six years, it is hard to show it is statistically different from zero … but it is easier to show it is different from ~0.2°C/decade. It turns out that the trend from 2000 to the present is statistically different from 0.2°C/decade, and also not statistically different from zero.
The length of time needed to exhibit statistical significance (p < 0.05) depends on what you want to show (null hypothesis), what the recent trend is, and the degree of autocorrelation.
Short answer is you are right, it is hard to get significance out of decadal or shorter climate datasets.
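To make that concrete, here is a minimal sketch of how one might test a short trend against both null hypotheses, using a standard AR(1) adjustment to the trend's standard error (the effective-sample-size approach). The series here is synthetic and the parameter values are illustrative, not real temperature data.

```python
# Test a short trend against H0 = 0 and H0 = 0.2 C/decade, inflating the
# standard error for lag-1 autocorrelation. Synthetic, illustrative data.
import numpy as np

rng = np.random.default_rng(0)
n = 72                   # six years of monthly anomalies
t = np.arange(n) / 12.0  # time in years

# Hypothetical series: zero underlying trend plus AR(1) noise
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.1)
y = noise

# OLS trend and naive standard error
A = np.vstack([t, np.ones_like(t)]).T
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ np.array([slope, intercept])
se = np.sqrt(resid @ resid / (n - 2) / np.sum((t - t.mean()) ** 2))

# Effective sample size for lag-1 autocorrelation r1
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)
se_adj = se * np.sqrt((n - 2) / max(n_eff - 2, 1.0))

for h0 in (0.0, 0.2):  # null-hypothesis trends in C/decade
    z = (slope * 10 - h0) / (se_adj * 10)
    print(f"H0 = {h0:.1f} C/decade: trend = {slope * 10:+.2f}, z = {z:+.2f}")
```

Depending on the noise level and the autocorrelation assumed, a record this short can easily be consistent with both nulls at once, which is the point.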
I guess you didn’t see Jeffrey T. Kiehl (2007), “Twentieth century climate model response and climate sensitivity”, Geophysical Research Letters, Vol. 34, L22710, doi:10.1029/2007GL031383.
The short version is that total forcing and sensitivity have an inverse relationship. The inverse correlation is strong, r = −0.7 (p = 0.002). Much of the remaining variation is due to the choice of ocean heat uptake efficiency.
It makes sense, really. If you put in way too much forcing over the 20th century, you’ll have to turn down the sensitivity to match the historical record. And conversely, if you put in very little forcing, you’ll need to turn up the sensitivity to match the temperature history.
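Here is a back-of-envelope illustration of that trade-off (the forcing values are made-up stand-ins for the spread across models, not numbers from Kiehl's paper):

```python
# If every model must reproduce roughly the same observed 20th-century
# warming, the assumed total forcing and the implied sensitivity must
# trade off against each other. Crude equilibrium view; ignores ocean
# heat uptake. All numbers are illustrative assumptions.
OBSERVED_WARMING = 0.7  # C, rough 20th-century temperature rise
F_2XCO2 = 3.7           # W/m2, standard forcing for doubled CO2

for total_forcing in (1.2, 1.8, 2.4):  # W/m2, hypothetical spread of models
    implied_sensitivity = OBSERVED_WARMING * F_2XCO2 / total_forcing
    print(f"F = {total_forcing:.1f} W/m2 -> implied S ~ "
          f"{implied_sensitivity:.1f} C per doubling")
```

A factor-of-two spread in assumed forcing maps directly onto a factor-of-two spread in implied sensitivity, which is the inverse relationship in a nutshell.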
Next, Kiehl asks what causes the large variation in the total forcing. It turns out that aerosols have a strong correlation with total forcing, r = 0.8 (p = 0.01).
The aerosols are the largest player.
Conclusion? The sensitivity is not set on any first-principles basis. When you tune to match the historical record, the ocean heat uptake efficiency can’t change much, and the aerosol forcings are both parameterized and externally selected. Something’s got to give so that the model can match reality. That something is climate sensitivity.
But hey, no need to take my word for it. Kiehl is the Kiehl of the Kiehl/Trenberth global energy budget, and is a co-author with Hansen, Ramanathan, and lots of heavyweights. He is the author or co-author of a host of scientific papers on climate models and their intricacies. He says that climate sensitivity as measured by the models is mostly determined by the choice of anthropogenic forcing, with larger forcings leading to lower sensitivities.
So, give it a good read … and having done so, you might wish to retract some of the more intemperate and unsustainable claims in your last post. Or heck, I don’t know, you may want to just steamroller over it and ignore it, your choice.
intention seemed to be saying that cloud cover was set at a certain level. I don’t know, since I’m not familiar with the model. But let’s suppose it’s free to vary. Again, if there is a relationship between temperature and cloud cover, and the modeled cloud cover ends up being way off, that would seem to be a big problem.
Again, this seems like a big problem to me. Let’s suppose that there are 6 issues that are strongly related to temperature, and clouds represent 1 of those 6 issues. If we understand 5 out of 6, does that mean we can be 5/6 (roughly 83%) certain of our models? I tend to doubt it.
It depends on what result they seek in their tuning. I gather that models are evaluated based on how well they match temperature.
Thanks for your response. I have worked with lots of models over the years, of varying degrees of “provenness”…and also of varying degrees of being mechanistically based vs. phenomenological or even empirical. It’s not that I have a lot of faith in them but more that I have an understanding of how you can still get results with some reasonable degree of confidence even when your model makes some pretty dramatic approximations or has some known deficiencies.
Well, like you said, this depends exactly on what you assume regarding autocorrelation. With reasonable assumptions on this, both a zero trend and 0.2°C/decade are apparently included within the uncertainty. Here is a very nice post on this subject.
Agreed.
I’ll try to read that paper when I have time. However, there is nothing in what you describe from it that surprises me. It has often been noted that of the various observational ways to constrain climate sensitivity, the 20th century temperature record provides one of the weakest (if not the weakest) constraints, mainly because of the uncertainty regarding aerosols but also because of uncertainty regarding how much of the temperature trend is due to the manmade forcings, since it is possible that some of the trend is due to natural forcings and internal variability (or, conversely, that the trend would have been negative if not for manmade causes). I in fact noted this point about the 20th century not constraining the climate sensitivity very well on another forum just last night (and you can find the point made on the RealClimate website, for example…in fact, this piece of theirs from July 2005 basically says it; see also the references therein to earlier posts).
The tighter constraints seem to come from other observational data (e.g., paleoclimate or the climate response to the Mt. Pinatubo eruption) or, best yet, from combining all the different observationally-determined constraints together.
I think you are reading too much into his wording. If you go back and look at the paper that he linked to, I think you’ll find that cloud cover is free to vary. I.e., cloud cover itself is not a parameter that they set.
Well, I have explained as best I could why it is probably not as big a problem as it might seem but, ultimately, it depends on what you mean by “big problem”. If you mean it is a big problem for being able to make the statement, “the climate sensitivity is between 2.75°C and 2.85°C” then, yes, absolutely it is a big problem. However, if you mean it is a problem for being able (in combination with the observational data) to make the statement that the climate sensitivity “is likely to be in the range 2°C to 4.5°C with a best estimate of about 3°C, and is very unlikely to be less than 1.5°C” (which is the IPCC statement), then not so much.
In other words, the current uncertainties involving clouds and stuff do limit our ability to know the climate sensitivity with the certainty that we might like. No doubt about that. However, the relevant question is whether they limit our abilities more than the IPCC already notes that they do and I don’t see any compelling evidence to believe that they do.
I am not sure how you gather that at all since that is not what they describe and, furthermore, as I noted, it is not really something that the parameters are designed to tune to very effectively anyway since that is in large part determined by the time-dependence of the forcings. If I wanted to get a great fit to the historical temperature record, then I could easily create a purely phenomenological model with, say, 4 parameters that would do a bang-up job fitting the temperature record…while having no predictive value, of course. (You know, that statement about fitting an elephant and wagging its tail.) However, I could also present you with a model with a thousand or more parameters that you could not for the life of you tune to fit the historical temperature record. Climate models are not particularly well designed to fit historical temperature records because that is not their purpose. Their purpose is to physically model the climate…and, in particular, to do so in a way that allows one to predict the effect of a change in radiative forcing such as that due to increases in greenhouse gases.
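As an aside, the elephant-fitting point is easy to demonstrate: a handful of parameters fit to the record alone can match it closely while predicting nothing. This is a sketch with a synthetic “temperature record”, not any real dataset:

```python
# Fit a 4-parameter (cubic) polynomial to a made-up century of global
# temperature anomalies. The fit can be excellent; the extrapolation is
# physically meaningless. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2001)
x = years - 1950.0
# Hypothetical record: slow rise plus decadal wiggles plus noise
record = (0.006 * (years - 1900)
          + 0.1 * np.sin((years - 1900) / 10.0)
          + rng.normal(0.0, 0.05, years.size))

coeffs = np.polyfit(x, record, 3)  # cubic = 4 free parameters
fit = np.polyval(coeffs, x)
rms = np.sqrt(np.mean((record - fit) ** 2))
print(f"RMS misfit of 4-parameter fit: {rms:.3f} C")
# Extrapolating the same polynomial says nothing about the future:
print(f"Polynomial 'prediction' for 2050: {np.polyval(coeffs, 100.0):+.2f} C")
```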
If there is some significant negative feedback involving clouds (as some scientists have hypothesized), then those sensitivity estimates are way off. What is the likelihood that such a negative feedback exists? The answer is that nobody knows.
I saw the IPCC chart comparing model runs to the temperature record with and without man-made elements. With man-made forcings, the fit was beautiful. Without man-made forcings, the models switched gears from warming to cooling around 1950, even though there was no volcano at that time. To me, that strongly suggests the models have been tuned to fit the temperature record.
jshore, I appreciate your taking the time. I’ll answer your statements in parts, as the original post was getting long. Responding to brasil84 you said:
The models are assuredly tuned to match the 20th century global historical temperature record. Not only that, they make no bones about it; Kiehl, as cited above, says as much.
Otherwise, I’d have to believe that all those 11 models were all able to reproduce the 20th century climate, and yet their sensitivity varies by a factor of 3, without their ever having been introduced to the 20th century dataset, and that just surpasses belief.
You say “However, I could also present you with a model with a thousand or more parameters that you could not for the life of you tune to fit the historical temperature record.”, which is true … but if I built the model, I guarantee I could tune at will, as could you.
Let me invite you to do a thought experiment. Imagine a planet just like the Earth, but where the sun is much weaker, just warm enough to keep the oceans unfrozen. How much cloud cover would it have?
Well, not much, because the temperature is so low there’s hardly any evaporation. The air would be dry, which means those cloud-free skies like we see in the dry desert air.
Now, suppose we turn up the heat on the sun just one notch. The earth starts to warm. A bit more water evaporates. A bit more clouds form. Immediately, the amount of incoming sunlight is cut down. But the earth is still warming, and more clouds form, and the process continues until equilibrium is reached. It occurs where the line of decreasing sunlight (from increasing clouds) meets the line of increasing temperature.
Note that the total temperature change will be smaller than would be predicted based on the change in solar output alone. The cloud feedback sets the final temperature, not the change in solar input.
Now, in the thought experiment, let’s turn the sun up a lot, let’s turn it up to our present heat. The world gets more warm and wet and tropical. From almost no cover the clouds increase to cover 70% of the globe. This jacks the albedo up to 30%, and cuts the sunlight down from 340 W/m2 outside the atmosphere to the 235 that is actually absorbed by the system.
And at that point, the warming stops. The lines cross, the temperature balances the cloud cover. The planet will not heat further despite the fact that significant additional energy is available.
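Here is a toy numerical version of this thought experiment. The albedo ramp standing in for cloud cover is entirely made up; the point is only to show the mechanism of the two lines crossing:

```python
# Toy cloud-thermostat model: albedo (a stand-in for cloud cover) rises
# with temperature, and we iterate absorbed-vs-emitted energy to the
# equilibrium point. The albedo function is a hypothetical ramp, not a
# real cloud parametrization.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m2/K4

def albedo(T):
    # Hypothetical: ~0 for a cold, dry planet, saturating at 0.30 when warm
    return min(0.30, max(0.0, 0.30 * (T - 255.0) / 35.0))

def equilibrium_temp(S):
    T = 255.0  # initial guess, K
    for _ in range(200):  # damped fixed-point iteration converges here
        absorbed = (1.0 - albedo(T)) * S / 4.0
        T = (absorbed / SIGMA) ** 0.25
    return T

for S in (1000.0, 1200.0, 1368.0):  # "turning up the sun", W/m2
    T = equilibrium_temp(S)
    print(f"S = {S:6.0f} W/m2 -> T = {T:5.1f} K, albedo = {albedo(T):.2f}")
```

Each increase in solar output produces a smaller temperature rise than it would without the albedo response, which is the feedback described above.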
Note that if some occurrence causes the temperature of the earth to drop (e.g. volcano), the cloud cover will drop as well and will hasten the return to equilibrium.
Remember that albedo feedback is not like any other feedback. The albedo is the throttle on the incoming energy, it is the gas pedal of the planetary heat engine. Clouds are the major determinant of the variable part of the albedo. Yes, changes in ice and snow play a part, but consider the Arctic winter, with all that polar ice. Does it change the albedo much? No, because mostly, the ice is where the sun isn’t …
In addition, snow and ice are up towards the poles. When you have low sun angles, the sun skips off most any surface. You can see it with glass: when you get past a certain angle you can no longer see through it; everything is reflected. So the albedo at low sun angles is high in general, with or without snow or ice.
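The glass analogy can be checked with the Fresnel equations for water (refractive index ~1.33): reflectance is a few percent near normal incidence but climbs steeply at grazing angles. A quick sketch:

```python
# Fresnel reflectance of unpolarized light on a flat water surface.
# Standard optics; the angles chosen are just illustrative.
import math

def fresnel_unpolarized(theta_deg, n=1.33):
    ti = math.radians(theta_deg)        # angle of incidence from vertical
    tt = math.asin(math.sin(ti) / n)    # refraction angle (Snell's law)
    rs = (math.sin(ti - tt) / math.sin(ti + tt)) ** 2  # s-polarization
    rp = (math.tan(ti - tt) / math.tan(ti + tt)) ** 2  # p-polarization
    return 0.5 * (rs + rp)

for theta in (10, 45, 70, 85):
    print(f"sun {90 - theta:2d} deg above horizon: "
          f"reflectance = {fresnel_unpolarized(theta):.2f}")
```

A flat water surface under a sun 5 degrees above the horizon reflects over half of the direct beam, snow or no snow (real surfaces are wavy, so this is only indicative).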
So the cloud cover sets the amount of fuel entering the planetary heat engine. Where I live, I can see it acting as a limit on the temperature rise every day. In good tropical Pacific fashion, the morning is clear. But as the day warms, clouds start springing up. They reflect the sunlight and bring shade. If the sun is hot enough, thunderstorms form and bring shade and cooling rain. Both limit the heat buildup that would otherwise happen without this feedback.
In addition to reflecting sunlight back into space, thunderstorms are amazingly efficient at moving heat aloft. They function as a heat pipe, pumping moisture-laden air aloft at vertical speeds up to a kilometre per minute. At the top of the pipe, the air and the moisture (ice) can radiate energy into space with little greenhouse effect. And of course, the more heat, the more thunderstorms.
And that’s why having 22 adjustable cloud parameters as I listed above still doesn’t do it. The temperature-cloud feedback is on the scale of minutes and hours, not months and years. Clouds are widely distributed, but in a blotchy, scattered manner. Each cloud both responds to and alters its own immediate environment. Clouds are born, have a life cycle, and die. There are many types of clouds, each with its own distinctive response to sunlight and longwave radiation. They are arranged in small and large scale patterns. And somewhere in all of that, no one knows where, the total albedo responds by increasing with temperature and limiting temperature rise.
All the best,
w.
PS – as a measure of how little we really know about clouds, it has been known for a long time that individual cloud droplets form around “cloud nuclei”, tiny bits of something solid. Dust and airborne sea salt are known to act as cloud nuclei. But a recent scientific paper I can’t lay my hands on right now showed that more than half of the cloud droplets they studied were formed around … wait for it …
bacteria.
Bacteria? Go figure.
And you think we can model this breathtakingly complex system that we are so very far from understanding? Yes, you are right, you can use approximations of all types in models. I have done so many times. But to do so, you have to first have a very good understanding of the system you are modeling. And as for understanding the climate system: to date, we are working on it but are far from “settling the science”. Bacteria as cloud nuclei? What happens if they all die off?
PPS - I am aware that as water vapor increases, and as clouds increase, you get a larger greenhouse effect. But the albedo is the throttle, and a tiny 1% change in the albedo has the same effect as a doubling (or halving) of CO2.
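The arithmetic behind that PPS claim, using the usual round numbers:

```python
# Compare the forcing from a one-percentage-point albedo change with the
# standard ~3.7 W/m2 forcing for doubled CO2. Round textbook numbers.
S_OVER_4 = 340.0  # W/m2, globally averaged top-of-atmosphere insolation
D_ALBEDO = 0.01   # albedo change of one percentage point

print(f"1-point albedo change: ~{S_OVER_4 * D_ALBEDO:.1f} W/m2")
print("CO2 doubling:          ~3.7 W/m2")
```

So ~3.4 W/m2 versus ~3.7 W/m2: the two are indeed comparable in magnitude.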
Here is a Wikipedia piece discussing the modern measurements of CO2. The short answer to your question is “No” I don’t think they are important to the discussion. I don’t think there are any serious questions about the rise in CO2 levels and what it is attributable to. That doesn’t mean that you can’t find some weird things on the web questioning this…but I don’t think they are considered to be scientifically-serious issues. I also don’t think there is any serious question about what the radiative forcing due to a given increase in greenhouse gas emissions is.
The big issues that are more legitimately discussed are the ones that we are focusing on here…i.e., how sensitive is the climate to a given forcing produced by the rise in greenhouse gases. (I suppose there are other legitimate questions involving the future…e.g., future emissions scenarios and future behavior of the sinks that are currently absorbing about half of the CO2 we have been putting into the atmosphere.)
Thanks for your response too. I will also deal with your post in parts since I only have limited time here on my lunch hour.
Simple thought experiments are always good but one has to be careful that one is thinking about things correctly. I have some difficulties with your thought experiment that I will summarize with three questions:
(1) Are clouds needed to explain why the earth doesn’t just keep heating up from solar radiation? Implicit in your description seems to be the idea that for the sun’s current brightness, the earth would get too hot were it not for clouds. However, in fact, the conundrum that needs to be explained relative to a graybody earth with no atmosphere (or a boring one that didn’t have greenhouse gases in it and such) is why the earth is as warm as it is. I.e., the simple radiative physics calculation shows that with the earth’s present albedo (reflectance) of ~30%, the earth’s average surface temperature should be about 33 C cooler than it is, not warmer!

Admittedly though, some of this albedo is due to clouds, so you might argue this is not a fair comparison. Nonetheless, the albedo is not all due to clouds (my vague guess might be that about half of it is, but I really don’t know). But, as an extreme case, let’s suppose that it all is and calculate what the temperature should be for an earth with no albedo. Alas, we still find it to be about 9 C cooler than it actually is. [By the way, these calculations are made under the assumption that the earth’s temperature is uniform. However, it is not hard to demonstrate that the effect of non-uniform temperature is to make this discrepancy larger, i.e., a gray- or blackbody at a non-uniform temperature will actually have a lower average temperature than one at a uniform temperature due to the T^4 dependence of the radiative energy on the temperature T.]

So, there is no mystery as to why the earth isn’t warmer…The mystery is instead why it isn’t cooler…and the answer is, of course, all the other aspects of the atmosphere, in particular, greenhouse gases and clouds. We know, of course, that the greenhouse gases will produce warming. For clouds, it would be necessary to understand a lot more before we could say what their net effect would be.
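For anyone who wants to check those numbers, a minimal sketch of the graybody calculation, T = [S(1 − α)/(4σ)]^(1/4), with standard constants:

```python
# Graybody equilibrium temperature for the earth, with and without albedo,
# compared to the observed ~288 K mean surface temperature.
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/m2/K4
S = 1368.0          # solar constant, W/m2
T_OBSERVED = 288.0  # K, about 15 C

for alb in (0.30, 0.0):
    T = (S * (1.0 - alb) / (4.0 * SIGMA)) ** 0.25
    print(f"albedo = {alb:.2f}: T = {T:.0f} K, "
          f"about {T_OBSERVED - T:.0f} K cooler than observed")
```

This reproduces the ~33 C deficit at the present albedo and the ~9 C deficit even with zero albedo.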
(2) Is cooling the only effect of clouds? You discuss clouds as if their main effect is to cause cooling. In fact, the main effect of clouds is to lower the day-night temperature range, i.e., to decrease daytime temperatures but to increase nighttime temperatures (as is apparent to anyone who has noticed how rapidly temperatures tend to drop on clear nights relative to cloudy nights). Clouds cause both cooling and warming effects since they reflect back infrared radiation from the earth as well as reflecting solar radiation.
(3) Is the effect of warming simply to produce more clouds? You seem to assume that warming produces more clouds due to the greater evaporation. However, it is in fact more complicated since warmer air can also hold more water vapor before it has to condense out to form clouds. The rough estimate that comes out of climate models is that relative humidity remains about constant. (In fact, the latest data and more detailed analysis of what comes out of climate models suggests that there is expected to be a small drop in relative humidity in the upper troposphere.) I’m not sure what this directly implies about whether there will be more or less clouds…although the most naive interpretation from the constant relative humidity would be to say there will be about the same amount.
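The competition described in (3) comes straight from Clausius-Clapeyron: saturation vapor pressure rises roughly 7% per degree, so warmer air can hold considerably more water at the same relative humidity. A sketch using the Bolton (1980) approximation:

```python
# Saturation vapor pressure vs. temperature (Bolton 1980 approximation).
# At constant relative humidity, actual vapor content scales the same way.
import math

def e_sat(t_celsius):
    # Saturation vapor pressure in hPa
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

for t in (15.0, 16.0, 17.0):
    growth = 100.0 * (e_sat(t) / e_sat(t - 1.0) - 1.0)
    print(f"T = {t:.0f} C: e_sat = {e_sat(t):.2f} hPa (+{growth:.1f}% per K)")
```

So more evaporation does not automatically mean more cloud; it depends on whether the extra vapor outpaces the extra holding capacity.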
I imagine that someone more up on cloud physics and its effects on climate would have more certain statements to make about the expected net effects in (2) and (3). But my point in posing these 3 questions is simply that the idea that clouds must provide some significant negative feedback on the climate sensitivity, as your thought experiment suggested, does not follow once you think about things more closely.
I disagree. I would say that the evidence on a whole suggests that such a significant negative feedback does not exist. That doesn’t mean that we have ruled it out completely with 100% assurance (hence the probabilistic statements about climate sensitivity) but that the observational evidence as a whole, as well as the evidence from modeling, does not support the existence of such a feedback. If such a feedback did exist, it would certainly require not only a revised view of the present but also of the past in order to explain the considerable changes that are seen in the paleoclimate record.
Well, first of all, I would modify your statement about the fit with manmade forcings being beautiful to note that there are significant error bars on that due to the uncertainty in the overall forcings due to the uncertainties in aerosol forcings, and the related uncertainty in the climate sensitivity.
As for the second part, in the thread where you made the observation about 1950, it was pointed out to you that you were jumping to conclusions based on essentially no evidence. I.e., it wasn’t even clear that the start of the drop in 1950 in the modeling was due to anything more than noise (and/or the finite interval over which the results of the simulations were averaged to produce the uncertainty band that was shown)…and we also weren’t sure what the time dependence of the natural forcings was (although it seemed reasonable to assume that there wasn’t a very significant negative natural forcing before the volcano erupted…On the other hand, there wasn’t a very significant temperature drop before that time either). And, you had absolutely no explanation of how the models could have been tuned to fit the temperature record in the manner that you are suggesting. Which of the known parameters in the climate models would do this? It was all conspiratorial speculation on your part.
I don’t see how that modifies my statement. The fit is still beautiful. Anyway, it doesn’t seem you dispute intention’s argument from a few posts back:
Well, just a nitpick first: If you look at his figures, the actual variation in sensitivity is a factor of 2.4 (and a little less than 2 if you eliminate the one that is most of an outlier).
But, first off, let me summarize where we agree: because the aerosol forcing is not known to very good accuracy, the 20th century data does not provide a very strong constraint on the climate sensitivity. This does give the modelers some potential for tuning. And it is the argument of the Kiehl paper that some amount of tuning has occurred, to the extent that there is a very rough inverse correlation between the climate sensitivity and the total anthropogenic forcing (due to the higher climate sensitivity models tending to have more negative aerosol forcings), such that the agreement with the 20th century data is better than it would be if all the models with different climate sensitivities used the same radiative forcing.
However, what we have previously argued about and still remain apart on is the claim that the reason the models are unable to get good agreement with the instrumental temperature record in the absence of the anthropogenic forcings is somehow because of how they have been tuned. I argue that there is likely no way to get good agreement using only the known natural forcings simply because those forcings do not have the right time dependence (and, indeed, the studies that have tried it have said that they were unable to get good agreement using the known natural forcings).
Again, it depends on what constraints you had on the model-building. If you were able to just write up some empirical model that just fit time-dependent functions with various parameters to the instrumental temperature record, then like I said, much more than 4 parameters is probably overkill. However, that isn’t what the climate models do. The parameters in climate models are, as you noted, parameters on various processes and, try as you might, I don’t think you will be able to tune them to produce a global temperature trend that matches the instrumental record using only the natural forcings as known because these forcings simply don’t have the right time-dependence in them.
It is also worth noting that the data that the climate models are checked against is a lot more than just the global temperature record. The latest IPCC plot now does a comparison with the temperature on each of the continents individually (except Antarctica because of lack of data). And, in the literature, there are apparently several “fingerprint” analyses that look at the pattern of the warming compared to the predictions.
Well, as I noted, from the observation data from the last glacial maximum, you can basically do the computation without the global climate models because you are going between two essentially equilibrium states. From other observational data, you do have to use models to try to determine what equilibrium climate sensitivity provides the best agreement between models and data. This is using models but not to actually predict the climate sensitivity but just to provide the way of getting from the observation data to the climate sensitivity.
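A minimal sketch of the kind of between-equilibria arithmetic meant here, with round illustrative numbers of the sort found in the literature for the last glacial maximum (the exact values and their error bars are of course what the actual studies are about):

```python
# Equilibrium sensitivity from two near-equilibrium climate states:
# S = (dT / dF) * F_2xCO2. Illustrative LGM-like numbers, not a citation.
F_2XCO2 = 3.7  # W/m2 per CO2 doubling

# (global cooling in K, total LGM forcing in W/m2) pairs
for dT, dF in ((5.0, 6.5), (5.5, 7.0), (6.0, 7.5)):
    S = dT / dF * F_2XCO2
    print(f"dT = {dT} K, dF = {dF} W/m2 -> S ~ {S:.1f} C per doubling")
```

The total LGM forcing here includes ice sheets, greenhouse gases, dust, and vegetation together, which is why no GCM is needed for the headline number.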
jshore, I guess I’m missing something here. Others have shown, and my own analysis has confirmed, that during the last glacial maximum (LGM) the changes in CO2 lagged the changes in temperature, by some hundreds of years.
Perhaps you could explain exactly what this teaches us in terms of the ice age climate sensitivity to a change in CO2 during the LGM, as you state above.
And as I noted, there are plenty of non-model studies that give very wide ranges for sensitivity figures – ranges that include low figures. (And that’s just looking at the “key studies” chosen at the discretion of the IPCC.) Low sensitivity is consistent with the existence of a good sized negative feedback.
So I don’t see how you can claim that aside from models, the evidence as a whole suggests that such a feedback doesn’t exist. The very best you can say is that aside from models, there is evidence both for and against the existence of such a feedback.
They give probability distributions that may have a non-zero probability of a low sensitivity…but a very low one. Hence, the IPCC statement that a sensitivity below 1.5 C is very unlikely (<10% chance) and that the sensitivity is likely (>66% chance) to lie between 2 C and 4.5 C.