How many people take Global Warming totally on faith?

And, here we see the crux of the problem:

(1) You are setting standards for what you believe to be correct practice in science that are not the standards that actually exist. Maybe they are the standards that you think ought to exist…but then, who are you to decide?

(2) You are listening to only one side when complaints arise about people refusing to provide data or code or whatever.

As one example of the first issue above: You have arbitrarily decided that you will only accept evidence of testing on future data, which conveniently means we will have to wait a long time for verification since the future is…well…the future. And even intuition will tell you that, because of fluctuations, it takes more than a decade’s worth of data…probably more like two…to do any sort of meaningful comparison.
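The point about fluctuations can be made quantitative. A minimal sketch, assuming independent month-to-month noise of 0.1 °C (an illustrative figure, not a measured value; real climate noise is autocorrelated, which makes the uncertainties even larger): the standard error of an ordinary-least-squares trend shrinks rapidly as the record lengthens.

```python
# Sketch: how the uncertainty of an OLS trend through noisy monthly data
# depends on record length. The noise level (0.1 degC) is an assumption
# chosen for illustration only.

import math

def trend_se(n_months, sigma):
    """Standard error of an OLS slope (per month) with i.i.d. noise sigma."""
    t = list(range(n_months))
    tbar = sum(t) / n_months
    sxx = sum((ti - tbar) ** 2 for ti in t)
    return sigma / math.sqrt(sxx)

for years in (5, 10, 20):
    se = trend_se(12 * years, 0.1) * 120  # convert per-month to degC/decade
    print(f"{years:2d} years: trend uncertainty ~ +/-{se:.3f} degC/decade")
```

Under these assumptions, doubling the record from 10 to 20 years cuts the trend uncertainty by roughly a factor of three, which is why a decade or less of data is a poor basis for comparing model and observation.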

And, when it is pointed out that there has been enough time to see how Hansen’s predictions from 20 years ago fared, you come up with excuses not to believe the results. (E.g., you demand that the original code be re-run rather than looking at the results that were produced back then and picking the emissions scenario that came closest to the actual forcings that materialized.)

As a second example, you have determined that everyone must release their source code even though this is simply not standard practice in most fields. Replication in the physical sciences means that you follow the procedures that the original paper has outlined and see if you get similar results. It has never, to my knowledge, meant that you check the other person’s code line-by-line and/or require that the procedure be duplicated with mathematical precision. [As it turns out, I believe that Mann has now released his code, although the NSF was very clear in stating that there was absolutely no requirement on him whatsoever to do so.]

One other point I should make here since you keep making references to what I assume to be Michael Mann’s behavior: It is worth noting that Mann’s temperature reconstruction is not what we are (or at least I am) talking about when we refer to climate modeling. It is not a climate model; it is a proxy temperature reconstruction. As such, it is much more of a statistical process of fitting to data and issues such as tests of robustness to having certain data included or not included do become much more important. (However, despite what others have claimed, Mann et al. have always discussed these issues, even going back to this paper.)

Here’s an article on what’s known as the “Scientific Method,” with a few excerpts:

So you see, the basic concepts of prediction, disclosure, and reproducibility are not stuff that I just made up.

If there is another side to the story, I am happy to listen to it.

Please show me which post of mine says that.

It’s far from clear that Scenario “B” was the closest.

(As a side note, anyone who is asking the world to spend gigadollars had damn well better release every last shred of information, including source code.)

But please give me a cite that release of source code is not standard practice in physical science modeling outside of climate modeling.

Please give me a few links then.

Does that mean 3 examples will suffice? 5?

I never claimed that you made up the basic concepts. What I think you have made up is how these concepts are translated into actual practice in the real world.

Just out of curiosity, do you accept the Big Bang Theory, which makes claims about the past rather than predictions about the future…or evolution, which also doesn’t make predictions about the future that are testable on reasonable timescales? (Yes, one can see evolution of drug resistance in bacteria and such, but as creationists like to point out, there is still a big jump from that kind of small-scale evolution to the development of entirely new species and families and classes of plants and animals.)

Say what? Both the Big Bang theory and evolution make testable predictions. The Big Bang theory predicts the cosmic background radiation, and the theory of evolution predicts observed phenomena such as color changes in moths, divergence in DNA over millennia, different beaks in Darwin’s finches, and (as you mention) drug resistance in bacteria.

Einstein’s theory predicted aspects of the orbit of Mercury which had never been observed, but which had been going on since Mercury started orbiting. Was this a prediction about the future, since it had never been observed, or about the past, since it had always been there? Makes no difference. The issue is not past or future. It is testable predictions, either about phenomena which have been or will be observed.

It is worth noting, however, that hindcasting the past with a model tuned to hindcast the past is neither a test nor a testable prediction …

w.

No, how the concepts have been translated into actual practice is the PROBLEM. And I wish I could claim credit for being the first to notice.

I accept the Big Bang Theory with some skepticism. I wouldn’t be shocked if it turned out to be dead wrong.

I am more confident in Evolution. Actually, evolution is a good example of a theory that is widely accepted by reasonable people even though it has not been thoroughly tested. To me, that’s because it is simple, powerful in its ability to explain the facts, and most importantly, there is no other reasonable explanation that I am aware of.

See, contrary to the strawman you have set up, my position is not that prediction is an absolute necessity to do science.

http://www.physicstoday.org/vol-60/iss-1/72_1.html

http://celebrating200years.noaa.gov/breakthroughs/climate_model/welcome.html#descendents

http://gristmill.grist.org/story/2006/11/19/51921/827

Hindcasting comes later, as another way to test the models; it is not the main way to test them.

GIGObuster, your habit of quoting press releases and miscellaneous web sites leads to the following kinds of foolishness:

I thought you guys were claiming that the models weren’t “exhaustively tuned” …

No, it has not. This claim is wrong on two counts. First, the models predict, not that the troposphere will warm, but that it will warm faster than the surface. Second, both the HadCRUT3 and GISS datasets show the surface as warming faster than the troposphere, using either the RSS or UAH troposphere data.

As I have pointed out numerous times, both our measurements and our models of outgoing infrared radiation are far too inaccurate to indicate any such imbalance.

The warming of the surface has not been continuing; there has been no warming in recent years. And there is absolutely no evidence that the warming has been accelerating.

Think I’m wrong? Provide scientific citations for your unsubstantiated claims.

w.

Where were all these predictions published? Why does this article mention predictions about the Arctic, but not about the Antarctic?

GIGObuster, you say the models have been tested, so let’s see how well the models do at hindcasting. Here’s a comparison of the control runs for sixteen climate models.

Forget about trends, forget about what the temperature will be; they don’t even agree about what today’s global temperature actually is. The difference between the high and low models is 9°F!

This is a result of the tuning problem. If your model is not correct and you have to tune it to correct it, something’s gotta give. As I showed above, if you tune for albedo and outgoing radiation, the clouds are wrong. If you tune for the trend, the absolute temperature is wrong. You can’t get all of it right.

As the graph shows, they can’t get today’s temperature right within 9°F, but you believe that they can predict a 1° temperature rise a century from now? Get real. Put your name down on the list of those who take global warming on faith.

w.

As you well know, the U.S. Climate Change Science Program’s first report concluded that the temperatures had, to within their error bars, been reconciled on a global scale. There is still a discrepancy remaining in the tropics…although they believe it is quite likely due to problems with the data:

Note that the U.S. Climate Change Science Program was created under the Bush Administration.

I seem to recall that there have been significant problems identified with the moth research, the finch beak stuff was presumably some of what went into Darwin’s formulation of the theory and as such is not an after-the-fact test, and as I noted the drug resistance in bacteria just shows that microevolution can occur within species. It is a far cry from showing how algae can evolve into humans!

So, what about all the detection and attribution work that has looked at the warming that has occurred and compared it to the fingerprint predicted by the models? Even if some of this data existed while the models were being formulated, there is no evidence whatsoever that it was used to “tune” the models. I think you have different standards for different areas of science depending on whether or not you are inclined to believe or disbelieve the conclusions reached.

Here.

You: “There is plenty of evidence in this very thread. Evidence that climate models are tested on historical (as opposed to future) data…”

It is clear that the radiative forcings due to the added greenhouse gases in fact aligned quite well with Scenario B. Since it is this radiative forcing that is relevant for a well-mixed greenhouse gas, that is the relevant thing to compare to. It is also worth noting that Hansen apparently noted that Scenario B seemed the most likely but then purposely had two other less likely scenarios bracketing it.

Well, the National Science Foundation seems to disagree with you on issues of intellectual property regarding this source code. (See the e-mail from Dr. David Verardo that Mann quotes in the letter I linked to above.) Perhaps it is because they recognize that there is little to be gained and a lot to be lost by asking scientists to do this.

Here is an article that discusses the issue in a general context. In fact, it notes the obvious fact that many employers don’t allow their scientists to release their codes. I have written papers in the refereed physics literature in the past few years using a model that I could (and probably would) be fired by my corporate employer for releasing. While government labs and universities have somewhat less onerous policies, they are not that much less onerous, as the article I linked to makes clear.

Like the author of that article, I am personally in favor of trying to facilitate more openness when possible…but there are good reasons why we can’t go to the extreme of having scientists forfeit their intellectual property completely.

I don’t see how you get from that plot that the Southern Hemisphere cooling is more than the Northern. Neither of them is very pronounced, but if you forced me to say which one was more pronounced, I don’t think I’d choose the Southern. In fact, Hansen’s Northern Hemisphere data is the one with the most pronounced cooling of all the data sets…and since it is skeptics who have harped on this cooling for so long, I am interested that you are now doubting this data.

The “myriad differences”?!!? Actually, those data sets look pretty damn close to me. This is one of the most significant differences between them that I can see, and it is in a feature that was pretty subtle even in the data set where it was most pronounced.

Did I say they weren’t reconciled within the error bars? The trends during the overlap period, January 1979 to July 2007, are as follows:

HadCRUT3: 0.24 ±0.05°C/decade
GISS: 0.19 ±0.04°C/decade
MSU RSS: 0.18 ±0.07°C/decade
MSU UAH: 0.14 ±0.07°C/decade

As you can see, due to the shortness of the record, the uncertainties are quite large. Because of this, the temperatures are reconciled within each other’s error bars … and I didn’t say otherwise.

However, as I did say, the models predict the troposphere warming faster than the surface. Despite GIGObuster’s claim, there is no evidence that this is so. Both tropospheric trends are lower than both surface trends.
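As a sanity check on the “reconciled within each other’s error bars” claim, here is a minimal sketch (not the original analysis) that treats each quoted trend as an interval of value ± uncertainty and tests every pair for overlap, using the °C/decade figures listed above:

```python
# Sketch: check pairwise consistency of trend estimates quoted as
# value +/- uncertainty (degC/decade, January 1979 to July 2007).

def intervals_overlap(a, da, b, db):
    """True if [a-da, a+da] and [b-db, b+db] intersect."""
    return abs(a - b) <= da + db

trends = {
    "HadCRUT3": (0.24, 0.05),
    "GISS":     (0.19, 0.04),
    "MSU RSS":  (0.18, 0.07),
    "MSU UAH":  (0.14, 0.07),
}

for name_a, (a, da) in trends.items():
    for name_b, (b, db) in trends.items():
        if name_a < name_b:  # each pair once
            print(f"{name_a} vs {name_b}: "
                  f"consistent = {intervals_overlap(a, da, b, db)}")
```

With the values quoted above, every pair overlaps, consistent with the statement that the datasets are reconciled within each other’s error bars, even though both tropospheric central values sit below both surface central values.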

w.

http://www.marshall.org/article.php?id=312

The makers of that had this to say (and it seems the report is from the year 2000):

http://www-pcmdi.llnl.gov/projects/cmip/overview_ms/ms_text.php

Of course, all that sounds to me like testing, but I guess you have to continue to deny this is taking place.

What I see is that they are constantly verifying the accuracy of the models, and you are confusing the testing (that damn word again!) with the discrediting of them. What I can deduce is that this shows efforts to identify the less reliable models and to phase out models that had to adjust for data that was not available and had to be deduced by other means.

Incidentally, the new NASA missions that used IV&V are going up precisely to get the data that modelers had more trouble getting, and the new models will get the data from those missions plugged in. What is clear to me is that more evidence is coming soon that will make the lives of skeptics even harder.

My point remains. The aerosols were almost entirely in the Northern Hemisphere. Accordingly, we should see the NH cooling, but not the SH. We don’t see that.

Regarding Hansen, his data checking skills have recently been shown to be inadequate. He has refused to reveal the code used to calculate the global averages. Each time that he has revised the data set, the temperature drop from the 1940s has grown smaller, and the overall trend has increased.

It takes a real leap of faith to blindly trust the data of a man who has done all that.

Your comment illustrates perfectly the difference between science and faith. Faith looks at a couple of graphs, and says “looks pretty damn close to me”.

Science, on the other hand, analyzes the datasets. The GISS dataset is here, and the HadCRUT3 dataset is here. Take the anomalies about a common period (I used 1970-1999), take the differences, and analyze them. Don’t just squint at them across the room and say “looks pretty damn good”; analyze them.

When you do that, you’ll find the following regarding the differences (HadCRUT3 - GISS) in the monthly data:



STATISTICS REGARDING THE DIFFERENCE BETWEEN DATASETS

Average:  -0.04
Std. Dev.:  0.16
Max:  0.65
Min:  -0.65
Jarque-Bera Statistic:  179.82
Skew:  -0.33
Kurtosis:  0.97

STATISTICS REGARDING THE DATASETS THEMSELVES

HadCRUT3 Trend:  0.071°/decade ±0.004
GISS Trend:  0.060°/decade ±0.003
R^2 (coefficient of correlation):  0.84

Since both of these datasets are calculated from the identical raw data, they should be identical. They are far from identical. Among other differences, individual months differ by as much as 0.65°. That is as large as the total temperature change for the last century, and we see it month to month in the difference between the datasets.

Now, we’d expect random differences between the datasets. If these were just random differences, with 1,530 months in each dataset, the differences would follow a normal distribution, and they would average out.

But as the Jarque-Bera statistic shows, the data is wildly non-normal. It is heavily skewed, with large kurtosis. Thus, the differences will not average out.

Finally, their trends are significantly different (p=0.01), and the overall correlation between the two is not impressive (0.84). These come from the same raw data; the correlation should be in the high nineties.

That’s what a scientist means when he says “myriad differences”.
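The analysis described above can be sketched in code. This is an illustration on synthetic data (the real GISS and HadCRUT3 monthly series are not reproduced here, and the synthetic noise levels are arbitrary): given two monthly anomaly series on a common baseline, take their difference and compute the summary statistics quoted, including the Jarque-Bera statistic, where JB = n/6 · (skew² + kurtosis²/4) and large values indicate non-normality.

```python
# Sketch of the dataset-difference analysis, on synthetic data.
# Mean, standard deviation, skew, excess kurtosis, and Jarque-Bera
# are computed from scratch so the definitions are explicit.

import math
import random

def moments(x):
    """Return (mean, std dev, skew, excess kurtosis) of a sequence."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    sd = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in x) / (n * sd ** 3)
    kurt = sum((v - mean) ** 4 for v in x) / (n * sd ** 4) - 3.0  # excess
    return mean, sd, skew, kurt

def jarque_bera(x):
    """JB = n/6 * (skew^2 + kurtosis^2 / 4); large values reject normality."""
    n = len(x)
    _, _, s, k = moments(x)
    return n / 6.0 * (s ** 2 + k ** 2 / 4.0)

random.seed(0)
n = 1530  # months in each dataset, as above
a = [0.01 * t / 120 + random.gauss(0, 0.1) for t in range(n)]  # series 1
b = [v + random.gauss(0, 0.05) for v in a]                     # series 2
diff = [x - y for x, y in zip(a, b)]

mean, sd, skew, kurt = moments(diff)
print(f"mean={mean:.3f} sd={sd:.3f} skew={skew:.2f} kurt={kurt:.2f}")
print(f"Jarque-Bera={jarque_bera(diff):.1f}")
```

For a genuinely normal difference series the JB statistic stays small; the quoted value of 179.82 for the real HadCRUT3 - GISS differences is what motivates the “wildly non-normal” conclusion above.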

w.