Global warming has stopped?

Even slower than that, I think. If one looks at the ice age – interglacial cycles, I believe that the typical rate of warming out of an ice age works out to about 0.1 C per century.** And, the descent into an ice age is even slower than the ascent out of one. By contrast, the current rate of warming that we are seeing is close to 0.2 C per decade.

Also, the latest thinking on the current interglacial based on the understanding of the earth’s orbital oscillations is that, left to its own devices, it probably would have lasted another ~40,000 or 50,000 years, I believe. (There is one counter-hypothesis by Ruddiman that we would have started to descend into an ice age already if not for the greenhouse gases we started emitting ~8000 years ago when we switched toward more agrarian societies. However, this hypothesis, if correct, would seem to require a climate sensitivity to greenhouse gases that is higher than generally accepted…which would mean we are even more screwed in terms of the expected temperature rises due to the greenhouse gas levels we are now approaching.) Finally, it is generally accepted that the amount of greenhouse gases we have put into the atmosphere is more than enough to stave off an ice age until the levels come down again, which will take many thousands of years for the CO2.
(**Which does not mean the climate system can’t bounce around more rapidly than that due to internal variability…in fact we know that there can be differences in global temperature of greater than 0.1 C from one year to the next. But, the average rate of sustained change over the period it took to go from the ice age to the interglacial or vice versa is much slower. There are also cases where much more rapid shifts in climate have occurred but these seem to be regional shifts where, say, one hemisphere warmed while the other cooled. Unfortunately, however, it is generally believed that putting any sort of significant forcing on the earth’s climate system, such as we are doing now, makes such sudden climate shifts more, not less, likely to occur.)
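To put those two rates side by side, here is a trivial back-of-the-envelope comparison (a minimal sketch in Python, using only the figures quoted above):

```python
# Rough comparison of the two warming rates quoted above (illustrative only).
glacial_exit_rate = 0.1 / 100   # ~0.1 C per century, expressed in C per year
current_rate = 0.2 / 10         # ~0.2 C per decade, expressed in C per year

print(f"Current rate is roughly {current_rate / glacial_exit_rate:.0f}x "
      f"the typical sustained glacial-exit rate")
# -> Current rate is roughly 20x the typical sustained glacial-exit rate
```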

Well, I’ll try to watch that video of Bob Carter’s presentation when I get a chance. My guess from what I know of Bob Carter is that it is pretty much the standard litany of arguments that “skeptics” use and we have addressed in various threads before. In fact, I believe that Carter is pretty much the inventor of the “global warming stopped in 1998” meme. Here and here is a little background on Carter. He is a well-published marine geologist studying ocean sediments but not well-published in the field of climate science…or at least not in most of the areas he generally talks about. (His ocean sediments research does apparently involve some paleoclimatology.)

I picked out one study mentioned in that section: Lindzen & Giannitsis 2002, which I chose because I recognize Lindzen’s name as being on the skeptical side of the global warming debate.

Here he is describing his study in his own words:

Looks to me like his publication argues for low sensitivity on the basis of Pinatubo.

I’m not trying to say that Lindzen is right and the other people are wrong – just that your statement should have been qualified.

Again, I don’t think a vague notion that we have the technology means it is necessarily ready for prime time, especially on a massive scale and for a reasonable price.

As for the jet engines idea in particular, I don’t think this would have the desired effect. The latest IPCC report estimates the net radiative forcing effect from linear contrails as they currently are to be somewhere between 0.003 and 0.03 W/m2. This is down by 2 to 3 orders of magnitude from the radiative forcing effects we would want to counter from CO2. Worse yet, it is…at least within these estimated error bars…a positive forcing and thus acts to warm, not cool. (In other words, apparently it is believed that the warming effect of such contrails due to their trapping of outgoing infrared radiation slightly outweighs the cooling effect due to their reflection of incoming solar radiation back into space.)
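For a rough sense of the “orders of magnitude” claim, here is a small sketch; the ~3 W/m2 figure for the total greenhouse gas forcing one would want to offset is my own assumed round number, not something taken from the post above:

```python
import math

# Contrail forcing range quoted above (W/m^2)
contrail_low, contrail_high = 0.003, 0.03

# Assumed total well-mixed greenhouse gas forcing to offset, roughly 3 W/m^2
# (this round number is my assumption, not from the post).
ghg_forcing = 3.0

for f in (contrail_low, contrail_high):
    print(f"{f} W/m^2 is ~{math.log10(ghg_forcing / f):.1f} orders of magnitude smaller")
# -> 0.003 W/m^2 is ~3.0 orders of magnitude smaller
# -> 0.03 W/m^2 is ~2.0 orders of magnitude smaller
```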

The sulfate aerosols approach is more promising in the sense of acting in the right direction and having a reasonable magnitude. But, again, the feasibility and cost effectiveness have barely been studied…and there are issues with how well it would really cancel out the warming (e.g., the geographic distribution of the warming) and what the effects on precipitation would be. I tend to think of these things as “swallow the spider to catch the fly” approaches…In other words, approaches where you are likely creating other significant problems in the process. Much better are probably the approaches of either reducing or sequestering emissions (or scrubbing out CO2 we have already released into the air).

Another problem with the “geoengineering” approaches is that they would not address the issue of ocean acidification caused by our CO2 emissions.

The cite provided by jshore specifically discusses discrepancies between the work of Lindzen and Giannitsis and that of other scientists - not only between their results, but between their methodologies and handling of data, with an indication of why their conclusions diverge. No disclaimer should be necessary for somebody who read and understood that discussion.

I don’t understand what your point is, since jshore did not mention Section 9.6 until I asked him for a cite.

Anyway, this:

is a bit of an overstatement. He should have said “some interpretations” or “many interpretations”.

Well, how about, “the current generally accepted interpretation”? I.e., it is not completely unanimous but it is the interpretation of most peer-reviewed papers in the field. And, the IPCC report gave reasons to prefer these studies over the (few) others that arrive at a different interpretation.

Well, to start with the OP’s question, “Global warming has stopped?”, we don’t know. The temperatures have not been rising in the first few years of this century at the rate they did last century, but it’s early days and we don’t have anywhere near enough data to say anything definitive yet. Well, that’s not entirely true. We can say that the trend over the last decade or so (whether or not we start in 1998) is not statistically different from zero … but that’s not saying much at all other than to say we don’t have enough data yet to know.
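To illustrate why a decade or so of annual data can’t statistically distinguish a ~0.2 C/decade trend from zero, here is a minimal sketch with synthetic data; the ~0.1 C year-to-year scatter is an assumed, illustrative value:

```python
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1998, 2008)   # roughly a decade of annual means
true_trend = 0.02               # 0.2 C/decade, expressed per year
noise_sd = 0.1                  # assumed interannual scatter (C), illustrative
temps = true_trend * (years - years[0]) + rng.normal(0.0, noise_sd, years.size)

# Ordinary least-squares trend and its standard error
slope, intercept = np.polyfit(years, temps, 1)
residuals = temps - (slope * years + intercept)
slope_se = np.sqrt(residuals.var(ddof=2) / ((years - years.mean()) ** 2).sum())

print(f"fitted trend = {slope * 10:.2f} +/- {2 * slope_se * 10:.2f} C/decade (2 sigma)")
# With only ~10 points and ~0.1 C of scatter, the 2-sigma range typically
# spans both 0 and 0.2 C/decade, so neither trend can be ruled out.
```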

On another matter, climate sensitivity, brazil84 asked:

jshore, thanks for your citation to the IPCC Chapter 9. I find it of interest that none of the “evidence” for the size of the ECS (equilibrium climate sensitivity) was actually evidence. It was all the results of computer modeling. Here in part is what they say:

So, here we are, decades down the track, and we still can’t determine the upper bound to within 100% … how about the lower bound? The IPCC says:

So, we’re not doing any better there, still a 100% disagreement in the “consensus” …

You say:

where the truth seems to be that

In other words, both you and the IPCC are talking about what happens on Planet GCM, rather than talking about what happens on Planet Earth.

The main difficulty in the models is modeling the clouds. The GISS ModelE GCM contains more than 20 different “parameterizations” of clouds, including:

• Cloud scattering asymmetry
• 3D cloud heterogeneity
• Prognostic cloud optical properties
• Mean cloud particle density distribution
• Cloud overlap
• Cumulus updraft mass flux
• Cumulus downdraft mass flux (calculated, curiously, as 1/3 of updraft mass flux, whoa, that sounds scientific to me)
• Cumulus downdraft entrainment rate
• Maximum cumulus mass flux entrainment rate
• Convective cloud cover
• Stratiform cloud volume fraction
• Cloud areal fraction
• Cloud-top entrainment
• Cloud droplet effective radius
• Cloud droplet maximum effective radius
• Cumulus updraft speed profiles
• Cumulus updraft entraining plume speed profiles
• Proportion of freezing rain vs. snow
• Stratiform cloud formation location
• Amount of water in liquid phase stratiform clouds
• Evaporation (sublimation) of stratiform precipitating water droplets (ice crystals)
• Precipitation attenuation rate
• Threshold relative humidity for the initiation of ice and water clouds
• Radiation absorption in clouds
• Secondary aerosol effect
• ALL SUB-GRID PHENOMENA, such as thunderstorms (the main mechanism for moving warm tropical air aloft and one of the most important parts of the climate system)

Modeling clouds is unbearably complex, so there’s a host of approximations and simplifications (parameterizations) necessary to make the models work. The bad news is, the clouds control the albedo, and the albedo functions as the throttle on the climate system. The albedo (and thus the clouds) controls the energy entering the climate system in precisely the same manner that the gas pedal on your car controls the energy entering your engine.

Thus, modeling the climate but parameterizing the clouds is like trying to model an automobile but parameterizing the gas pedal. Regardless of how well you have modeled the other parts of the automobile, the speed of your model car will be controlled, not by basic physical principles, but by the setting that your parameters have given to the gas pedal. And the temperature of your model planet will be set by the parameters controlling the clouds.

So you’ll have to excuse me if claims about the speed of a computer modeled car (and your claims about computer modeled sensitivities) don’t impress me much … because when you parameterize the gas pedal, all bets are off.

w.

PS - in case you are laboring under the mistaken idea that the modelers set the “parameters” to something akin to their physically determined, experimentally observed values, consider what the folks at GISS (large PDF) say about their own model:

Now, here we need an English-to-English translation. What they are actually saying is:

“Our cloud model is setting the albedo far too high, and also the TOA radiation is not in balance. So we’ll just twist the tuning knob to adjust the individual rates at which ice and liquid clouds form until we get the albedo and TOA balance right, and we won’t worry that as a result, such things as TOA cloud DLR and total cloud area end up way wrong.”

Because as a result of this procedure, the GISS cloud coverage is set to 59%, which is just fine for Planet GCM … but here on Planet Earth coverage is much higher, about 69%. And strangely, this huge error, along with a correspondingly great error in TOA cloud DLR, makes absolutely no difference in the ability of the GISS model to hindcast the 20th century trend. Heck, they can’t even get the TOA radiation to balance without fudging the cloud coverage numbers, but yes, it’s a valid, scientific model based on physical principles …

I leave it as an exercise for the student to determine why a huge (~ 25 W/m2) error in cloud coverage, which is about the predicted effect of the CO2 going to, not double the current value, not triple, but a hundred times the current CO2 value, does not affect the GISS model’s ability to hindcast last century’s temperature trends. If you need a clue, consider the word “tuning”.

And I leave it as an unanswered question why we do more Verification and Validation and Software Quality Assurance on the software that runs our high-rise elevators than we do on climate models that some people want us to base billion dollar decisions on …
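For what it’s worth, the “~25 W/m2 is like a hundredfold CO2 increase” comparison above can be checked with the commonly cited simplified forcing expression, delta-F ≈ 5.35 × ln(C/C0) W/m2 (whether that expression is the right yardstick for a cloud-cover error is, of course, a separate question):

```python
import math

def co2_forcing(concentration_ratio):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998 form)."""
    return 5.35 * math.log(concentration_ratio)

for ratio in (2, 3, 100):
    print(f"{ratio:>3}x CO2 -> ~{co2_forcing(ratio):.1f} W/m^2")
# ->   2x CO2 -> ~3.7 W/m^2
# ->   3x CO2 -> ~5.9 W/m^2
# -> 100x CO2 -> ~24.6 W/m^2
```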

It’s hard to see whether the IPCC is actually claiming that. For example, look at Table 9.3, which refers to “key studies.” How is a “key study” defined? Why is Lindzen’s left out?

Even the key studies don’t rule out the possibility of non-catastrophic AGW. For example, suppose that the climate’s sensitivity to CO2 turns out to be 1.25C. We can most likely handle that, just as we handled a 1C rise over the last hundred years or so.

However, if the climate sensitivity were 1.25, the IPCC would be able to say that the sensitivity was within the ranges set forth in more than half of their “key studies.”

Well, I think they pretty explicitly spelled out the issues with Lindzen’s study.

Well look, if you want complete certainty in life, you better stick to mathematics where you can rigorously prove things. (Unfortunately, however, what you rigorously prove will only tell you about your mathematical system and nothing about the real world around you.) The fact is that there will always be some uncertainty in science. The IPCC’s current judgement that a climate sensitivity of less than 1.5 C is “very unlikely” means they estimate there is less than a 10% chance it will be that low. But if your criterion is that you want to know a lower bound below which there is zero probability then you’re out of luck.

And, note that these climate sensitivity numbers are the numbers for doubling. We have enough fossil fuels, especially coal, to way more than double the CO2 levels, so it is not as if having a lower climate sensitivity gets us completely off the hook…It just means that we have a little more time to get emissions under control. It is also not as if the fossil fuels are going to disappear. If future evidence, against all odds, suggests that things won’t really be so bad, we will have some cheap fuels that we can continue to use. It seems pretty silly to base our policy on the hope that one of the more unlikely scenarios on the low end will come to pass.

And, on the other side, what if, as James Hansen argues, the climate sensitivity is being underestimated by some of these calculations? Then, we will be really screwed. If you want to talk about the possibilities of the scientists being wrong (i.e., the correct result being in the tails of the likelihood distributions), you have to consider the possibilities in both directions. I personally am cautiously hopeful that Hansen has gone a little off the deep end in his most recent arguments (that I think say the equilibrium climate sensitivity might be more like 4-6 C once the longer term processes like change in albedo due to melting ice sheets and sea ice are properly accounted for)…and I think there are reasonable arguments being made by those “in the middle” like James Annan that Hansen’s arguments do have some flaws, but Jim Hansen is an awful smart guy to be banking on being wrong!

Thanks, as usual, for your interesting perspectives, intention.

Agreed…although just to clarify, it isn’t statistically different from the rise of ~0.2 C per decade that we had been seeing either. The basic point is, as you noted, that it is impossible to determine the trend over too short a period of time, and for the climate system determining the trend seems to require a bit more than 1 decade of data.

No…What they are doing is comparing computer models to observational data, which is presumably exactly what they need to do in order to determine if their models are getting the climate sensitivity correct.

…Which is all the more reason not to sit on our hands and do nothing. The fact that the upper bound is particularly hard to constrain should evoke more caution and alarm, not more doubt that we should take any action regarding emissions.

Yes, but that is down in the tail of the distribution. Again, while it may be nice to know whether the 5% threshold is at 1 or 2.2 C, I am not convinced that it would have really strong policy implications…unless you are really the kind who wants to bet on long shots.

No, what we are doing is using the observational data to constrain the models. I.e., we are using it to determine if the models are believable in their climate sensitivities.

Yes, clouds are complex. However, it doesn’t mean that all hope is lost. If the models were missing some effect in clouds like an “iris effect” then that would tend to show up in an inability of the models to explain past climate events like the ice age - interglacial transitions or the Mt. Pinatubo eruption.

I could apply your logic to anything that computational modeling is used for. Models are approximations to reality. By their very nature, they do not account for all processes exactly. That is why it is important to test them against what data you have, to test the various pieces (e.g., like the water vapor feedback) against observational data, and to test how results like the climate sensitivity vary with variations in the model parameters. All these things are continually being done…and most of the results tend to argue against a low climate sensitivity. (As you have already noted, the high side of the climate sensitivity distribution tends to be harder to constrain.)

Your rant here contains within it the solution to your quandary, namely, as you note, that this parameter turns out not to be that important…i.e., that the climate sensitivity that you predict is not strongly dependent on getting the parameter right. If these sorts of things were not true, then science would never have advanced as far as it has because, quite frankly, all models make some significant approximations or ignore some features of the problem.

No…The answer is simply that the errors tend to cancel for the quantity of interest, which is how much increase in temperature a given increase in radiative forcing will cause. (Well, there are lots of other questions they can and do ask too, but I am keeping it simple for the purposes of discussion.) See, you seem to be laboring under the assumption that if you want to calculate the effect of a radiative forcing (i.e., a change in the radiative amount) of, say, 4 W/m2, then you have to get all radiative aspects in the model calculated to better than that amount! Man, life would be sugar and roses if one could actually do that, I agree! But, in the real world of science that is very seldom the case, particularly if you are looking at a problem that wasn’t solved by, say, the end of the 18th century!

Do you work in management, perhaps (he asks cynically)? Not every tool is useful for every problem. All that cool and groovy V&V / SQA stuff is great for software that is designed to perform very specific tasks. However, it is usually pretty useless when it comes to testing software at the forefront of science where the issues faced in evaluating models are very different. Much better to do the sort of testing and intercomparisons that the scientists are doing rather than imposing some silly bureaucratic solution on them that may work great for some things but is simply not suited for the problem at hand.

They pointed out some issues, yes, but they didn’t explain what a “key study” is or why his study doesn’t count as a “key study.” I am pretty confident that one could criticize just about any of the studies they cite.

Smells like cherry picking to me. Seems to me that the IPCC should have clear inclusion and exclusion criteria.

It’s not a matter of complete certainty. The point is that if one looks only at (1) estimates of climate sensitivity that are (2) based on instrumental observations and (3) blessed by the IPCC, the door is wide open to non-catastrophic AGW.

To close that door most of the way, one needs to include estimates that make use of climate models, a fact that the IPCC essentially admits.

Intention outlined some of the problems with models pretty well a couple posts ago.

Supposing for the sake of argument that expected warming from a doubling of CO2 is 1.0C. What is the warming from a tripling of CO2? Since there’s a log curve involved, I doubt it’s much higher than 1.0C.

Unlikely by what measure? To me, it seems pretty silly to base policy on computer models that have all the problems pointed out by intention. Anyway, if you want to debate policy, exactly what policy are you proposing?

For each table in their report? I think they do have clear criteria for citing papers… but they are allowed to make expert judgements on those papers.

The estimates that they talk about are based on observations. Yes, most of them still use climate models to interpret what the observations say about equilibrium climate sensitivity. This is because few “experiments” in nature are “clean” equilibrium climate sensitivity measurements…i.e., the system is not particularly close to equilibrium because of the rapidity with which they occur. Hence, one can’t simply divide the measured (or estimated) temperature change by the estimated forcing. (The one notable exception is going from the glacial to interglacial climate where the timescales are such that you can assume you got close to equilibrium.) So, yes, in most estimates, climate models are involved in the interpretation of the observational data. However, in these cases, it is not the climate model that is predicting the climate sensitivity on its own. Rather, they study how well the climate model can reproduce the observed changes and what the climate sensitivity of the model needs to be in order to do so.
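As a concrete illustration of the one “clean” case mentioned above, here is the simple division for the glacial-interglacial change; the round numbers are my own illustrative values, not figures taken from the IPCC chapter:

```python
# Illustrative equilibrium-sensitivity estimate from the glacial-interglacial
# change, the one case above where simple division is roughly legitimate.
# The round numbers below are assumed for illustration only.
delta_T = 5.0               # C, approximate glacial-to-interglacial warming
delta_F = 6.5               # W/m^2, approximate total forcing change (ice sheets, GHGs, dust, ...)
forcing_per_doubling = 3.7  # W/m^2 for a doubling of CO2

sensitivity = (delta_T / delta_F) * forcing_per_doubling
print(f"Implied equilibrium sensitivity ~ {sensitivity:.1f} C per CO2 doubling")
# -> Implied equilibrium sensitivity ~ 2.8 C per CO2 doubling
```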

It is easy to come up with ways in which all models in the sciences are imperfect representations of reality. That doesn’t stop models from being very useful. It does mean that one has to continually test the models with observational data, preferably testing various pieces (like the water vapor feedback) in addition to the whole thing, test different models against each other, and test how sensitive results are to the parameters in the model. All of this is being done.

A tripling would produce ~1.58X the temperature rise of a doubling. A quadrupling would produce 2X the temperature rise of a doubling. A factor of 8 rise in CO2 would produce 3X the temperature rise of a doubling and so forth. (All assuming that the climate sensitivity itself doesn’t change significantly as warming occurs.)

As I noted, it is more than just computer models. And, all computer models are imperfect but that doesn’t stop us from basing public policy on the best science available when that policy is not opposed by strong political interests. At any rate, since doing nothing is also a policy, we are in a situation where we have to base policy on what we know about the science. The question is whether we should be basing the policy on the hope that the scientific truth lies quite far into one tail of the distribution of probabilities or whether we should be basing it on the center of the distribution (or, better yet, on some acknowledgment of the total distribution…i.e., with a knowledge of the worst- and best-case scenarios and some ability to adjust the policy either to be stricter or more relaxed depending on how our knowledge evolves).

For each issue on which they exclude some papers and include others.

Then at a minimum, they should spell that out, e.g. “We included all papers that were sound in our expert judgment.” Not mentioning the issue at all makes it seem to me that they are trying to downplay the fact that there exists dissent.

And it seems like the models aren’t doing so well, at least according to intention. How is it possible that a model that is way off on cloud cover can still hindcast accurately? Very troubling.

Can you explain to me how you did this calculation? TIA

First things first: Exactly what policy are you proposing? Because the probabilities are somewhat moot if your proposed policy is ineffectual or impractical.

I watched this lecture of Bob Carter’s. I won’t try to exhaustively research and debunk everything that he said but will just give you the broad overview. His basic approach seems to be to take data out-of-context and to emphasize the very few papers that have appeared in the scientific literature recently that might support his point-of-view and ignore all the rest.

Specific things:

(1) He shows a lot of stuff from ice core temperature data arguing that it has been warmer in the past and also claiming to show that the temperature change has been as rapid in the past. Yes, it is true that on geological timescales, the earth has been warmer…In fact through much of its history. However, there have also been large changes in flora and fauna and huge changes in sea level over that time. The current flora and fauna and civilization are adapted to the current climate and sea level. Also, his comparisons of past temperatures and rates of change are complicated by an apples-and-oranges comparison that he makes. E.g., he takes ice core data from Greenland and Antarctica and uses it to determine rates of warming or cooling at that location and then compares them to global rates of warming that we are talking about now. There are at least a few…and probably more…problems with that. One is that at any given location, climate variations will be larger than they are over a larger region. Also, the ice cores near the poles are known to have larger temperature variations because the temperature variations are amplified as one goes to higher latitudes. Finally, he shows temperature changes and he shows rates but does not discuss how long these rates were sustained. E.g., if you project the sort of year-to-year change that you get in global temperature (on the order of 0.15 C, say), then that would be a rate of 15 C per century, but a sustained rate of that is another thing altogether. In fact, his claim that the rate of warming out of the ice age was something like 1.5 C per century is off from what I understand the sustained rate to be by more than a factor of 10.

(2) His argument about history showing that biodiversity did not suffer from such climate changes, in addition to the problems of mis-estimating the rates, also suffers from other problems. First of all, it neglects the fact that there are other human-caused stresses such as habitat fragmentation that are going to interact with the climate change and in particular will tend to limit the extent to which animals and plants can migrate relative to what they could in the past. Second, his statement about polar bears having survived a significantly warmer interglacial several hundred thousand years ago is simply wrong since the genetic and fossil evidence suggests that the polar bears around today have survived one previous interglacial at best (and even that seems to be questionable, see here for discussion).

(3) His claim that the last 8 years of flat temperatures while CO2 rose 4% disproves the hypothesis that CO2 is causing the current warming trend is incorrect as we have already discussed here.

(4) He makes a big deal about the fact that a minority report to a report on climate change mitigation prepared by the Australian Parliament argued that the scientific case for AGW wasn’t there and that this minority report was prepared by the only member of parliament on that committee that had scientific credentials. He also makes some vague statements about similar reports from the British House of Lords and the U.S. Senate. This whole argument seems rather strange to me since he is invoking these political bodies as authorities on science. In fact, he neglects to note that the important scientific bodies in Britain (the Royal Society), in the U.S. (the National Academy of Sciences), and in Australia (the Australian Academy of Science) have all strongly endorsed the views of the IPCC report. Why are we supposed to care that Sen. James Inhofe, and some people in the British House of Lords, and the Australian Parliament think that they know better (whether or not one of the members of that parliament has some unstated scientific training)? [And, by the way, for the record…although the sample size is almost certainly too small to draw any significant conclusion…the two physicists in Congress both are proponents of action on AGW (one is a liberal Democrat so that may not be so surprising but the other is a moderate Republican, Vern Ehlers, from what I have been told is a very conservative district in Michigan).]

(5) He makes a big deal about a few recent papers which, as I noted, are cherry-picked because they agree with his point-of-view and haven’t been around long enough to be subjected to significant scrutiny by scientists. In particular, I happen to know that the paper by Schwartz arguing for a low climate sensitivity has already generated at least one comment submitted for publication that pretty much seems to tear it apart.

(6) He makes several very confusing statements about the paper from the British group that discusses internally-generated natural variability and argues that the recent flattening of the warming is temporary and that warming will resume soon. It is hard to know where to begin to critique these but the basic problem is that he is confusing the idea of whether such variability is included in the models (which it is) with the idea of whether the models generally predict specifically how this variability will play out (which they don’t, since the realization of how it plays out is extremely sensitive to initial conditions and thus cannot be predicted very far in advance, although the British group claims…or at least contends…that some prediction is possible over time scales of a few years).

(7) He makes a big deal out of the recent Y2K bug found in the NASA reconstruction of U.S. temperatures and how it changed the conclusion about which year in the U.S. was the warmest without noting that the difference between the warmest and second warmest years was never…and still is not…statistically significant and that the correction of this bug made only a very tiny difference to the global temperature data (because the U.S. is only a small percentage of the entire surface area of the earth).

(8) He warns us of the threat of cooling without noting that most scientists in the community think that any cooling due to a less active sun will be pretty much overwhelmed by the greenhouse gas effects.

As I pointed out, they noted the dissenting papers and discussed them.

Not troubling at all. If you had to get everything precisely right in order to be able to do this then it would be troubling, as it would then show that the results are not robust. I am not at all surprised that a world with 59% cloud cover responds to climate forcings in nearly the same way as one with 69% cloud cover. A positive forcing is a positive forcing whether the cloud cover is 59% or 69%…The worst I would expect it to do is change the response by some small amount.

To get how much more increase you get for a tripling as compared to a doubling, for example, you take log(3)/log(2). [It doesn’t matter what base you use for the log as it cancels out.] To get how much more you get for a quadrupling as compared to a doubling, you take log(4)/log(2), which by the properties of the log function you can say is exactly 2.
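In code form, that scaling looks like the following (a minimal sketch, assuming the sensitivity per doubling itself stays fixed as warming proceeds, which is the caveat noted earlier):

```python
import math

def relative_warming(co2_multiple):
    """Warming for a given CO2 multiple, in units of 'one doubling's worth',
    assuming the logarithmic dependence discussed above."""
    return math.log(co2_multiple) / math.log(2)

for multiple in (2, 3, 4, 8):
    print(f"{multiple}x CO2 -> {relative_warming(multiple):.2f}x the warming of a doubling")
# -> 2x CO2 -> 1.00x the warming of a doubling
# -> 3x CO2 -> 1.58x the warming of a doubling
# -> 4x CO2 -> 2.00x the warming of a doubling
# -> 8x CO2 -> 3.00x the warming of a doubling
```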

Well, I didn’t really want to get into a discussion of policy here but more a discussion about how science should be used to inform public policy decisions. I agree that, in theory, you can have a problem that the science says is likely to be serious but you can conclude that your best approach is to do nothing or just “adapt” (if, e.g., the costs of doing something to prevent it is prohibitive). I don’t think that is true in this case…but that is beyond the scope of the current discussion.

And as I noted, they never spelled out why some papers were included and others excluded. One can make guesses as to why they did so, but that’s not the same thing.

If the IPCC is going to ignore (or downplay) part of the scientific literature, they need to explicitly say “We are ignoring (or downplaying) this because _____”

In my opinion, it’s not enough to point out a few criticisms and leave the reader to guess. You apparently feel differently.

Are you saying that you are confident that an across-the-board 15 percent increase in cloud cover would have little effect on global temps?

And that depends on large part on what exactly the policy decision is. In my opinion.

No, what I am saying is that the effect on the climate of rising greenhouse gases would likely be only modestly different in these two worlds with the different cloud cover. (Perhaps the most naive guess that one could make is that the difference in the climate sensitivity for these two worlds might be on the order of 15%…although it certainly could be more or it could be less.)

That is the important thing to understand…i.e., that what one is always looking at is the difference between the climates in an unforced run and a run with greenhouse gas forcings. So, you don’t have to get all the processes correct to very high accuracy…You just have to get them to good enough accuracy that any change in the processes due to the change in forcing (and resulting change in climate) is captured roughly by the models…or is too small to make a very large difference. I’m not saying that this is completely trivial to do, but it is certainly much, much easier!

jshore,
Thanks! Unfortunately, I’m out of time at the moment and can’t initiate any followup, but I wanted to let you know I appreciate you taking the time.

I understand now. But it seems to me the problem with the models goes deeper than that. What if there is a relationship between warming and cloud cover? In that case, pegging the cloud cover at 59% has the potential to foul everything up.

Also, like intention asks, how is it possible that a model that is so wrong in terms of cloud cover is able to accurately hindcast 20th century temperatures? Either cloud cover is unimportant or the model has been overtuned.