How many people take Global Warming totally on faith?

Ok, so you believe that there is essentially no benefit to testing a climate model ex ante versus ex post.

Why doesn’t it make sense?

Do you agree that climate modelers sometimes test their models against historical data? (Yes or no)

Do you agree that sometimes those tests are positive and sometimes those tests are negative? (yes or no)

Do you agree that all results should be disclosed whether positive or negative? (yes or no)
Three extremely simple yes or no questions about climate models.

Before you tell people that they have no idea what they are talking about, you might learn about it yourself. Regarding the GISSE model, the programmers say (emphasis mine):

In fact, one of the things that shows that the models don’t represent reality is the fact that as resolution increases, the skill of the model decreases.

Look, I understand that you have faith in the models … but it’s just faith. From the description of the GISSE model:

In the GISSE global circulation model, the modeled radiative forcing in the tropics (40% of the world, and where the majority of the radiation is absorbed) is off by a whopping 20 W/m2; cloud cover (arguably the most important factor in climate) is estimated by the model at 58% whereas in fact it’s 69% (about a 35 W/m2 error in albedo); the model has never been subjected to V&V or SQA; the potential energy of water vapor/condensate is neglected; the angular momentum due to drag and pressure torques at the solid land surface is ignored; the change in ocean radiation emissivity due to wind velocity is not calculated … and yet you actually think the model can be trusted to reliably forecast the effect of a 3.7 W/m2 change in forcing??? The error in the clouds alone is ten times that large.

Like I said, it’s lucky you have faith, it gives a warm, fuzzy feeling … I’m just overjoyed that you aren’t in charge of approving the software that controls airplanes or subways. In mission critical software, faith is not sufficient.

w.

Fine…So, there are exceptions to the rule and it is a little more complicated than what I say. Gavin’s point that resolution increases alone are not the be-all and end-all is well-taken. Still, he is not saying that resolution increases are not overall a good thing…you just want to make sure that you improve the physics in the parameterizations too. And, since the parameterizations are approximations, they sometimes will give a worse result when the model is run at higher resolution.

For completeness and because I think remarks such as this always make more sense when seen in the fuller context of what was being talked about, here is the entire paragraph from which you quoted:

And, you know, if quantum electrodynamics had controversial policy ramifications, we’d have people yelling about how one can possibly trust a theory in which you have to subtract two diverging quantities in order to get a real physical quantity like the mass of a particle!!! Welcome to science at the forefront, my friend, where the theories and models are never perfect and where one single piece of evidence is seldom clear-cut when viewed all by itself.

The fact is that when at work I model the spectrum of a device, I will often use it to predict changes in the spectrum with device structure that are considerably smaller than the error between what my model predicts for the spectrum of a device and what the experimentally measured spectrum looks like. And, in fact, my predictions will almost invariably be correct.

One doesn’t have to get all the radiative processes down to lower than the 3.7 W/m2 change you are looking for. One just has to have reasonable confidence that the errors in those quantities don’t change by very large amounts as the perturbation we are studying (in this case, increasing CO2) is turned on. The fact that a variety of models (and generations of an individual model) with a variety of different parametrization schemes and a variety of different strengths and deficiencies all predict similar climate sensitivities (within an admittedly fairly broad range) is one piece of evidence that we can have confidence that this is so.
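A minimal sketch of the argument in the previous paragraph, using entirely made-up numbers (the functional forms, the 288/283 baselines, and the 0.8 slope are my own illustrative assumptions, not anything from a real climate model): a model can carry a large constant error in its absolute output and still predict a *change* accurately, provided that error stays roughly fixed as the perturbation is turned on.

```python
# Toy illustration, NOT a climate model: the "true" system and the "model"
# differ by a large constant bias, but share the same response slope.

def true_response(forcing):
    # hypothetical real-world response (made-up linear form)
    return 288.0 + 0.8 * forcing

def model_response(forcing):
    # same slope, but a constant bias of 5 units in the absolute value
    return 283.0 + 0.8 * forcing

absolute_error = abs(model_response(0.0) - true_response(0.0))  # 5.0 (large)
predicted_change = model_response(3.7) - model_response(0.0)    # bias cancels
actual_change = true_response(3.7) - true_response(0.0)
```

Because the bias is the same in both evaluations, it cancels in the difference, so the predicted change matches the actual change even though the absolute error is far larger than the perturbation being studied.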

You may call that faith. Those of us who are actually trained scientists who have worked in other fields (where there is not so much of a politically-charged atmosphere) call it science.

Well, I never claimed that climate modeling was so completely on the extreme that it couldn’t benefit at all from “ex ante” testing to give us more confidence, but I don’t think such a distinction is vital…and I think that there is a lot of “ex post” stuff you can do (such as looking at the Mt. Pinatubo eruption) that is really essentially “ex ante”, because although the data may have existed when the model was being developed or updated, it hadn’t in any way been used in developing the model.

Yes.

Most likely…although I think it is seldom completely black-and-white.

As I have said before, I think it is good to discuss both the strengths and deficiencies of the model in reproducing historical results as well as just basic present-day climatology. This is what Gavin has done in describing the GISS model, and even though it gives people with an agenda an opportunity to try to score debating points, I still think it is the right thing to do.

I also think it is useful to describe things like, “When we tried this other parametrization scheme…” or “When we varied the parameter over this range…we saw these results instead.”

However, as I have also pointed out, it is not like the situation in purely statistical studies, where including or excluding the negative results can radically change what one would conclude. That is, there is nothing in this sort of modeling akin to the danger of running twenty studies on a drug and reporting only the one study where the drug did statistically significantly better than the placebo (a result that would no longer be statistically significant if you included all the failed-study data too).
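The drug-trial danger described above is just the multiple-comparisons arithmetic; a quick check (standard textbook calculation, nothing from the thread itself) shows why reporting only the best of twenty trials is so misleading:

```python
# With a 5% false-positive rate per trial, the chance that at least one of
# twenty independent trials of a useless drug looks "significant" is large.
alpha = 0.05       # conventional significance threshold
n_trials = 20
p_at_least_one = 1 - (1 - alpha) ** n_trials   # ≈ 0.64
```

So roughly two times out of three, twenty null trials will yield at least one spuriously “significant” result to report, which is exactly why selective disclosure ruins purely statistical studies.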

Ok, but the only benefit is psychological?

That’s unfortunate for various reasons that I will not get into.

I’m a little confused. Is that a yes or a no?

If a model is tested and fails, should the failure be disclosed or not?

Or, if you prefer not to see things as “black and white,” let me ask it this way:

If a modeler tests a model, should the results of that test be disclosed?

Yes or no?

Edited to add: My questions refer only to climate models of the type we have been discussing so far. So there’s no need to address other types of models. I realize you would apply different standards to different types of models. My question deals only with climate models.

I’m not sure I’d say that is 100% completely true, but I think for all intents and purposes, I’d pretty much agree with that.

Well, it would be nice if you would actually state things that you believe rather than engaging exclusively in this kind of Socratic questioning.

Well, I think in general, yes. But I don’t think it is something to be legislated or anything like that. I will leave it in the hands of the individual researcher to decide whether he thinks the results are important and noteworthy enough to warrant discussion, and how best to discuss them. (The referees also have a hand here…and as a referee I have myself suggested tests that I thought would be useful.) A good scientist is always looking for ways to test the limitations of his/her model or theory.

I’m happy to tell you why if you are curious.

Why?

Yes. I am.

A little more complicated than you say? A little more? I love the “it’s all simple, it’s just basic physics” tack that people take about climate science. The climate is the largest, most complex, and most poorly understood system that we have ever tried to model. No, it’s not “a little more complicated” than what you say, it’s immensely more complicated than our tinkertoy models.

Nor, as my citation showed, can the problem be fixed simply by improving the parameterization as you and Gavin claim. It is inherent in the models. Read the freakin’ citation. Turbulent systems are notoriously difficult to model, and resolution is one of the bugaboos of doing it.

And your evidence that gives you “reasonable confidence” that the cloud coverage doesn’t change by even 1% as the temperature changes would be … ???

jshore, a 1% change in clouds gives a change in radiative forcing equal to a doubling of CO2. If you have evidence that there is no feedback that has a 1% effect on the clouds, bring it on. Richard Lindzen’s “Iris Hypothesis” has recently been supported by this paper. Is his hypothesis true? We don’t know. I know you don’t like to say that, but it’s the truth – we don’t know, just as we don’t know the size and in some cases the sign of a number of climate feedbacks.
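The “1% of clouds ≈ a CO2 doubling” claim follows from the numbers quoted earlier in the thread (the 58% vs. 69% cloud-cover error and the ~35 W/m2 albedo error attributed to it); this is just that back-of-envelope division, shown for clarity:

```python
# Back-of-envelope check using figures quoted earlier in this thread:
# an 11-percentage-point cloud-cover error was said to correspond to
# roughly 35 W/m^2 of albedo error.
albedo_error_w_m2 = 35.0
cloud_error_points = 69.0 - 58.0                       # 11 percentage points
forcing_per_point = albedo_error_w_m2 / cloud_error_points  # ~3.2 W/m^2
co2_doubling_forcing = 3.7                             # W/m^2, standard figure
```

So on these numbers, a 1-percentage-point shift in cloud cover is worth about 3.2 W/m2, the same order as the 3.7 W/m2 usually assigned to a doubling of CO2.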

And not knowing about this or a host of other feedbacks, your foolish assumption that the whole climate, and not just the clouds, behaves like the device whose spectrum you model (that it just sits stably, or reacts predictably enough to ignore the huge model errors) is quite touching … but it is not science.

The models get the clouds wrong precisely because we don’t understand all the feedbacks, forcings, and resonances that affect the clouds. The idea that we can hold the clouds constant and thus derive the effect of increasing CO2 is laughable.

Yes, as you say, in a simple stable system which is well understood, it’s not necessary to get the details exactly correct. But the climate is a poorly understood, complex, chaotic, driven, constructal, optimally turbulent, multi-stable system, with dozens of forcings, feedbacks, and internal resonances, both known and unknown. It is made up of five subsystems (ocean, atmosphere, lithosphere, biosphere, and cryosphere) which all interact with all of the others in unknown ways. Correct me if I’m wrong, but I doubt very much if the same is true of the devices for which you are calculating spectra. You are analyzing a simple device built by humans, not an incredibly complex natural system. My guess would be that there is no significant turbulence in your system, for example, to take one of the many differences between machines and nature.

You would not fly in an airplane whose computer system had not been subjected to V&V and SQA. You would not ride in a subway whose computer system regularly had huge errors. But you want me to spend billions of dollars based on untested, unreliable computer climate models … thanks, but I’ll pass until you subject the models to the normal scientific testing procedure that every other piece of mission-critical software has to pass. That’s science, and we use it on everything from space probes to submarines.

Until the models are tested and verified, it’s not science, it’s just faith. Sorry to break the news, but your being a “trained scientist” doesn’t make you any less susceptible to faith than anyone else. Science is a way of looking at the world, not the result of a course of study. The fact that a person has a PhD in science does not necessarily make them a scientist.

w.

PS - the fact that a variety of models give us similar results for climate sensitivity only shows that they are tuned to reproduce the past climate record. Given the assumption that the temperature changes are due to CO2, even the simplest model will give that result. Here’s an extremely simple model – we assume that the post-1980 warming of 0.25°C is all due to CO2. Given the change in CO2, that works out to about 2°C per doubling. Does that prove anything about climate sensitivity? No, because we are assuming that all of the warming is due to CO2.
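For concreteness, here is that back-of-envelope calculation written out. The CO2 concentrations are my own illustrative assumptions (roughly 1980 and mid-2000s values; the original post gives no numbers), and the result depends heavily on them; the point is only that attributing all the warming to CO2 mechanically produces a sensitivity figure, not that the figure means anything.

```python
import math

dT = 0.25              # assumed post-1980 warming attributed entirely to CO2 (deg C)
c0, c1 = 338.0, 383.0  # assumed CO2 concentrations in ppm (~1980 and ~2007)

# Linear extrapolation: warming per ppm, scaled up to a doubling of c0.
linear_sensitivity = dT * c0 / (c1 - c0)                 # ~1.9 deg C per doubling

# Using the standard logarithmic forcing relation instead:
log_sensitivity = dT * math.log(2) / math.log(c1 / c0)   # ~1.4 deg C per doubling
```

Either way, the “sensitivity” pops out of the attribution assumption alone, which is the post’s point: the agreement between such calculations and the tuned models proves nothing by itself.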

Because, ideally in a scientific test, the line between success and failure should be drawn before the test is done.

Now could you please answer my question:

Why do you think that a modeler should disclose the results of all tests, whether good or bad? What’s wrong with just disclosing good results and throwing the bad results down the memory hole?

The more I look, the more I see that they are indeed tested.

http://www.iop.org/activity/policy/Publications/file_4147.pdf

  • Professor Thorpe (currently the Director of the Natural Environment Research Council)

GIGObuster, thanks for the citation. If you are convinced that

[approximately replicating the historical trend] = [testing a model]

there’s not much we can do to help you. However, on the off chance that you’d like to learn something rather than continue with your error, you might start with Discussions of Application of Verification, Validation, and Quality Assurance Procedures to Climate Modeling Software. For a discussion of some of the mathematical problems with models which I alluded to in the discussion with jshore above, you could see Time Step Sensitivity of Nonlinear Atmospheric Models: Numerical Convergence, Truncation Error Growth, and Ensemble Design, along with part 1 and part 2 of a discussion of that paper.

Finally, there is a bibliography about Verification and Validation (V&V) and Software Quality Assurance (SQA) here.

The short version is that replicating the past is not testing of a model. There are plenty of stock market models that can replicate the past quite well … but all of them are useless at predicting the future. As the stock brokers famously say, “Past performance is no guarantee of future results” … and the same is true in all chaotic systems, including the climate.
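The stock-market analogy above can be made concrete with a toy example (entirely my own construction, not from the thread): a “model” that reproduces the past perfectly, by simply memorizing it, has zero hindcast error and yet no forecasting skill at all on a random walk.

```python
import random

# Toy illustration: perfect replication of the past is not predictive skill.
random.seed(0)
steps = [random.choice([-1, 1]) for _ in range(30)]          # random walk steps
prices = [sum(steps[:i + 1]) for i in range(30)]             # "historical" series

train, test = prices[:20], prices[20:]

memorized = list(train)                                      # the "model": memorize history
hindcast_error = sum(abs(p - m) for p, m in zip(train, memorized))  # 0 by construction

forecast = [train[-1]] * len(test)                           # persistence forecast
forecast_error = sum(abs(p - f) for p, f in zip(test, forecast))
```

The hindcast error is exactly zero while the forecast error is not (each step moves the walk by ±1, so the very first out-of-sample point already misses), which is the sense in which “past performance is no guarantee of future results.”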

My best to all,

w.

This is particularly so if models that do not pass the “hindcast” test can be quietly discarded, and if source code is not 100% disclosed at the time of publication so that popular models can be tested against future data 5, 10, or 20 years after they are published.

Loss of Arctic ice leaves experts stunned.

Old news. Or I should say, incomplete news.

The long version is that you are demonstrating that you do not know anything about what NASA is doing with V&V, and no, since physics is the main thrust of the modeling, the testing is not just approximately replicating the historical trend.

And you seem to have missed the New Scientist’s article already posted:

http://environment.newscientist.com/channel/earth/climate-change/dn11649

Are you saying that this is evidence of AGW?

There’s really no way to verify that this is true. Because obviously if somebody has developed a computer model that predicts the stock market well enough to make money, they will never ever disclose how the model works.

By now, I would have expected skeptics to have built a computer model of their own, one that follows the physics (or, I should say, the physical properties) that the early researchers used in their models, and to have shown a reproducible pattern demonstrating to everyone that the applied physics and formulas were mistaken.

One of the scientists quoted in the article says so. At any rate, it definitely is evidence of GW, A or not.