Over a very long timescale (tens of thousands of years) I’d agree it’s impossible to predict if AGW is beneficial, detrimental or neutral. Over short timescales (centuries) it’s very likely to be a serious risk to us. The population of the planet is rapidly increasing, a large percentage of our cities are in coastal areas, and water resources are under stress in many countries. It’s the combination of these factors that makes climate change (natural or not) such a threat. A change which may be neutral overall (in terms of how much biomass the planet can support) can be a big problem for us, as our resources are already stretched.
As a starting point, it would be a very sensible bit of hedging to introduce carbon-reducing measures which have minimal economic impact (things like mandating the use of energy-saving lightbulbs, and sliding-scale charges for electricity usage: the more you use, the more you pay for each kilowatt-hour).
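A sliding-scale tariff like that is easy to make concrete. Here is a minimal sketch in Python; the bracket sizes and rates are invented purely for illustration, not taken from any real tariff:

```python
# Sliding-scale electricity tariff: the more you use, the more each
# kilowatt-hour costs. Bracket sizes and rates are illustrative only.
BRACKETS = [
    (100, 0.10),           # first 100 kWh at $0.10/kWh
    (200, 0.15),           # next 200 kWh at $0.15/kWh
    (float("inf"), 0.25),  # everything beyond that at $0.25/kWh
]

def bill(kwh):
    """Total charge for a month's usage under the tiered tariff."""
    total = 0.0
    for size, rate in BRACKETS:
        used = min(kwh, size)
        total += used * rate
        kwh -= used
        if kwh <= 0:
            break
    return total

print(bill(50))   # light user: 50 kWh, all in the cheapest bracket
print(bill(400))  # heavy user: pays a much higher average rate per kWh
```

With these illustrative rates, the heavy user’s average price per kilowatt-hour is well above the light user’s, which is the whole point of the scheme.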
You are right that there is a lot of uncertainty in the climate models, but that doesn’t mean they are worthless. A number of positive feedback mechanisms have been identified, where increased greenhouse gas levels lead to further warming (e.g. loss of ice -> less sunlight reflected into space). More investigation is required, but negative feedback mechanisms sufficient to cancel these effects out have not been found. It’s not just a case of tweaking the values.
Extrapolation is fundamentally less accurate than interpolating data, but that doesn’t make it impossible to identify a trend. The further we extrapolate the models from known data, the greater the uncertainty, but again that doesn’t make the process worthless.
This is why the models are run with a range of parameters. Even the more conservative values lead to predictions of significant warming. If you can model the most salient features of a system you can use it to make useful predictions, even if it’s not as useful as one which is more detailed. By your argument, we’d never be able to build a useful model, as there would always be something omitted.
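The point about running models with a range of parameters can be illustrated with a toy zero-dimensional energy balance, where equilibrium warming is the sensitivity parameter times the radiative forcing. The sensitivity values swept here are illustrative stand-ins, not a real model ensemble; 3.7 W/m² is the standard estimate for the forcing from doubled CO2:

```python
# Toy parameter sweep: equilibrium warming = lambda * forcing, where
# lambda (climate sensitivity, K per W/m^2) is the uncertain parameter.
F_2XCO2 = 3.7  # radiative forcing from a doubling of CO2, W/m^2

# Sweep lambda from a conservative value to a high one (illustrative values).
sensitivities = [0.5, 0.7, 0.9, 1.1]

projections = {lam: lam * F_2XCO2 for lam in sensitivities}
for lam, dT in projections.items():
    print(f"lambda = {lam:.1f} -> equilibrium warming = {dT:.2f} K")
```

Even the most conservative value in the sweep still yields significant warming; the parameter range changes the magnitude of the projection, not its sign.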
The best information we have at the moment is that AGW is significant. The evidence is strong although not overwhelming. If the probability is high that it’s going to happen shouldn’t we be planning for it now?
Okay…So, I assume that you also want to do away with the patent system? Why the hell should one person/company have a monopoly for a certain amount of time? If they want to keep ahead of the rest, there is nothing stopping them from innovating further!
Everything is a balance. And, in science one needs to find the balance between intellectual property protection and open access to information. From what I can tell (from the general lack of push to change things), most scientists seem to think that the balance is in about the right place. However, maybe you think the balance is badly set at the moment; but given that it is the prevailing situation throughout the sciences, it seems it is up to you to make the case for a change from the current scientific conventions.
What is intellectually dishonest is to pretend that the general philosophy within science is that you have to have access to a person’s computer code in order to perform the sort of replication that science relies upon…and that thus it is just this one field of science that is going against a general practice of scientists regularly making their code available to all. The NSF has made it abundantly clear where they come down on this issue for the science that they fund.
intention, I have been thinking about this comment since you posted it yesterday. I find it interesting that you are invoking the example of the tobacco companies since, in fact, at least a few of the most prominent contrarians on climate change played a similar role in the argument about the dangers of tobacco (see [url=http://sourcewatch.org/index.php?title=Steven_J._Milloy]here[/url], for example).
And, in fact, the tactics used are very similar (see, e.g., here for a discussion)…in particular, the attempt to create the impression that science usually operates in an arena of certainty but in this case there are lots of uncertainties. It is thus a two-pronged approach: One is to miscommunicate how science usually operates. (A good book that explains the issue of science and uncertainty is here.) The second is to cherry-pick and magnify the uncertainties that do exist to create the impression that they are so overwhelming that we really can’t conclude anything about the issue…and ignore the larger context into which this all fits.
Of course, the best antidote to this is to have scientific organizations such as the National Academy of Sciences, AAAS, and the IPCC weigh in with their summary and opinion of the state of the science…which is why the contrarians spend so much time trying to discredit or simply ignore what these organizations are saying.
You seem to be saying that there’s something inherently wrong with one scientist using another scientist’s work to benefit himself or herself.
Absolutely not. Actually, patents are a useful analogy because one of the fundamental principles underlying the patent system is that we want to encourage people to disclose their inventions. We give people a limited period of exclusivity in exchange for disclosing their invention.
Indeed, there’s only one form of intellectual property that rests on nondisclosure, and that is trade secrets. In my opinion, “trade secrets” are fundamentally incompatible with science.
Actually there does seem to be a push to change things. This is from a few posts back:
Fine, which is what I and others are doing.
I don’t see why. Just because you disagree with an argument doesn’t make it intellectually dishonest.
Personally, I believe that if the general philosophy of the scientific method is applied to the question of source code disclosure, there should be disclosure.
Nothing dishonest about that.
You apparently support the status quo. In your view, the NSF is the ultimate arbiter of the question of disclosure. It is infallible like the Pope, and its pronouncements reflect how the general philosophy of science should apply to individual specific issues. Fine, you are entitled to your opinion.
was way too broad, and what you are saying now is still too broad. Building on other scientists’ work is what science is all about. However, simply taking code that someone has worked long and hard on and doing some trivial extension of it, or applying it to another problem, results in an unfair situation: the person who invested a lot of time doesn’t get to enjoy much of the benefit, while someone who invested only a little gets a lot of it.
Just like we want to use the patent system to encourage people to disclose their inventions, I think we want to use the scientific publishing system to encourage people to publish their results and to do so promptly. If they knew that when they published their work, they immediately had to release their whole computer code, this would be quite discouraging…and would not really do much to further the cause of science anyway since it is usually better served by having people perform the replications as independently as possible rather than having them just take the other person’s code and pore through it and critique it (or, perhaps worse yet, use it warts and all and then not really even do an independent test).
Well, what I object to is the implication made that a certain approach is unscientific or incompatible with scientific principles when it is in fact the norm in many fields of science. If you were up-front in stating, “Yes…the way science is practiced in most fields around the world now is not the way I think it should be,” then that is fine. But, when you try to imply that by not releasing their code, these scientists are going against some norm or that replication in the physical sciences has traditionally meant disclosing information down to the code itself that would enable others to exactly replicate every single step in the calculation exactly as they did it, then I don’t think you are being intellectually honest.
And, while I don’t think that the opinions of the NSF are infallible, I do think they are more likely to be based on a sound understanding of science, scientific principles, and how to balance various competing interests within the scientific process than the opinions of a random person on a messageboard.
jshore, as usual, a nuanced and interesting reply. While in most instances I would agree with you that the NAS or someone equivalent would be useful, in climate science it is not. This is because (in no particular order):
For many people, including scientists, the issues revolve around belief rather than evidence.
There have been a lot of bogus studies and irreproducible results that have been passed off as valid climate science and believed by scientists and laypeople alike. The most famous, of course, is the Hockeystick. Data is hidden, untested statistical methods are used, code is concealed, and the science is fudged at levels unseen in any other scientific discipline.
Passions are running high. The term “deniers” has been coined by the AGW crowd to invoke the Holocaust, threats against scientists have been made, and jobs have been lost. For many, including scientists, it has become a religious crusade rather than a scientific question.
The peer review system works very poorly when there is a division of scientific opinion into two camps. The camp in charge of any given journal chooses reviewers from their own camp, with predictable results.
Observational data is woefully deficient. Temperature datasets are short and filled with problems (inaccuracies, scribal variations, station location changes, instrumentation changes, UHI, microsite changes, gaps in the records, etc.). 3D information for the oceans and the atmosphere is very scarce. Satellite records are very short, and many are under dispute. Measurements of rainfall, relative humidity, and other important variables are spotty and usually inconsistent over time.
Some scientists have become nothing but cheerleaders for one side or the other.
Climate science is extremely broad, including aerodynamics, chemistry, geology, oceanography, hydrography, and a host of other specialities. This makes it difficult for any one scientist to understand the other issues that may affect their own specialty.
Climate is incredibly complex. I have said it before, but it bears repeating. The climate system is a chaotic, multi-stable, driven, constructal, optimally turbulent, terawatt-scale, planetary-sized heat engine. It contains a host of forcings, feedbacks, resonances, and semi-stable states, both known and unknown. It is composed of five major subsystems (ocean, atmosphere, lithosphere, cryosphere, and biosphere). Each subsystem has its own forcings, feedbacks, resonances, and semi-stable states, and each subsystem both forces and feeds back to and from the other four subsystems. None of the subsystems is understood in adequate detail. All of them contain important processes at all scales from molecular to planetary.
Climate is the most complex system we have ever tried to model, and we’ve only been working on it for a few decades. While they get better every year, our models are still very poor, and are unable to model a wide range of features of the real world.
Feedbacks are crucial to semi-stable systems such as the climate. For example, a tiny cloud feedback which creates a 1% change will have the same effect as a doubling of CO2. Many feedbacks are either very poorly understood or unknown, and the interactions between the feedbacks are nearly a complete mystery. Adding to the problems of lack of knowledge and understanding, we are trying to model a massively parallel system with sequential computers. This leads to unavoidable and poorly understood problems with things like modeling feedbacks. It is not generally appreciated that the calculation of feedback depends on the order in which the feedbacks are applied: if we calculate first the water vapor feedback and then the ice albedo feedback, we get a different answer than if we calculate them the other way around. In the real world, of course, they both happen at once and affect each other, which gives yet a different answer. An example is the recent NASA study which showed that, contrary to what the models had all agreed would happen, the reduced albedo from decreasing snow and ice in the Arctic was almost completely counterbalanced by increasing cloud albedo. None of the climate models forecast this, nor could they. Because they are not based on physical first principles, they can only forecast interactions that we know about and have programmed them to forecast.
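The order-dependence claim can be demonstrated with a toy calculation: when at least one feedback is nonlinear, applying the feedbacks one after the other gives different answers in different orders, and iterating them together gives yet a third answer. The two feedback functions and all the numbers here are invented for illustration only, not taken from any real model:

```python
# Toy demonstration that the order of applying nonlinear feedbacks matters.
# T0 is the direct (no-feedback) warming; two illustrative feedbacks follow:
# a linear "water vapor" term and a nonlinear "ice albedo" term.
T0 = 1.0
water_vapor = lambda T: 0.4 * T       # linear feedback (illustrative)
ice_albedo  = lambda T: 0.05 * T**2   # nonlinear feedback (illustrative)

def one_pass(first, second):
    """Apply each feedback once, in the given order."""
    T = T0 + first(T0)
    return T + second(T)

def simultaneous(tol=1e-9):
    """Iterate both feedbacks together to a fixed point (they interact)."""
    T = T0
    while True:
        T_new = T0 + water_vapor(T) + ice_albedo(T)
        if abs(T_new - T) < tol:
            return T_new
        T = T_new

print(one_pass(water_vapor, ice_albedo))  # ~1.498: water vapor first
print(one_pass(ice_albedo, water_vapor))  # ~1.470: ice albedo first
print(simultaneous())                     # ~2.000: both applied together
```

Three different answers from the same two feedbacks and the same initial warming, purely from the order and manner of application.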
Because of computer limitations, we are very limited in the resolution of our models. Climate phenomena, on the other hand, happen at all scales from the microscopic to planet-wide. This forces us to use approximations and parameterizations on an extensive scale. While the models contain some physical first principles, they are pasted together with a pastiche of parameterizations and approximations, either because the scale dictates it or because we simply don’t understand the physical processes we are trying to model. Clouds, which are arguably the most important climate phenomenon, are doubly parameterized, for both of these reasons. In addition, the unphysical viscosity currently used to keep the models from going off the rails means that increasing the models’ resolution decreases the accuracy.
Ask any coastal processes engineer, and they will tell you that turbulence is one of the most intractable things to model. Unfortunately, climate is a constructal system, which re-arranges itself constantly to maximize turbulence. Both the turbulence, and the constructal processes which govern it on all scales from local to planetwide, are poorly understood and nearly impossible to model.
The models have not been subjected to V&V and SQR, so we have no idea if they contain internal errors (either theoretical or implementation errors).
While we have a Theory of Evolution, and a Theory of Relativity, and String Theory, and Quantum Chromodynamics Theory, and a host of other scientific theories that explain natural phenomena, we have no Theory of Climate. This leaves us without the ability to make very many falsifiable statements, which is the essence of science.
So … that’s why I don’t trust the scientific bodies. If the scientists in these scientific bodies were being honest, they would simply say “we don’t know” in answer to many of the questions which they are posed. Instead, they rely on their beliefs, their fears, and on the flawed models, which represent the best guesses of the programmers but are presented as being far more reliable and accurate than they actually are.
In addition, the scientific bodies are increasingly politicized, as evidenced by the Royal Society’s attempt to dictate the allocation of funding for climate research. This repugnant attempt is totally contrary to the nature of science, where anyone is free to either investigate or fund the investigation of any phenomenon they choose. And the IPCC, which is supposed to be scientific in nature, has a policy that the scientists and politicians get together to write the “Summary for Policymakers”, and then they change the conclusions in the scientific sections to agree with the summary. Disgraceful.
My solution to this has been simple – learn about it myself. I go to the original data and do the analyses, to see whether the claims are true or not. I read the studies and see whether they are being open with their data and their methods. I ask the hard questions, I investigate apparent inconsistencies. I compare the results of the models with the observational data. I don’t accept anyone’s claim without looking hard and long at the basis for their claim.
And sadly, far too many times I have found that the claims are baseless, or the data is hidden, the code is concealed, the assumptions are invalid, there are no ex ante rules for data selection, the error bars are minimized or absent, the model results are treated as reality, the simplifications are unjustified, the statistical methods are inadequate, the effects of autocorrelation are ignored, or a host of other errors, tunnel visions, or outright chicanery.
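The autocorrelation point, at least, is easy to make concrete. A standard rule of thumb is that a series of n points with lag-1 autocorrelation r contains only about n(1−r)/(1+r) effectively independent samples, which widens the error bars on any trend fitted to it. The series length and r values below are illustrative:

```python
# Why ignoring autocorrelation matters: with lag-1 autocorrelation r, the
# effective number of independent samples in a series of length n shrinks
# to roughly n*(1-r)/(1+r), so naive error bars are too narrow.
def effective_sample_size(n, r):
    """Rule-of-thumb effective sample size for an AR(1)-like series."""
    return n * (1 - r) / (1 + r)

n = 100  # e.g. a century of annual values (illustrative)
for r in (0.0, 0.3, 0.6, 0.9):
    print(f"r = {r:.1f} -> n_eff ≈ {effective_sample_size(n, r):.1f}")
```

At r = 0.9, a hundred data points carry the statistical weight of only about five independent ones; treating them as a hundred makes a trend look far more significant than it is.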
In short, the current state of climate science is a shame and a disgrace, and there are very few climate scientists brave enough to say that the Emperor has no clothes when his Royal Supreme National Association of Clothing Scientists are loudly claiming that he is dressed in only the finest robes, and that anyone who can’t see the robes is a denier and a heretic who is unworthy of funding …
If it’s just a trivial extension, then who would ever publish it?
You implicitly assume that there is a finite pie of “benefits”; that if another scientist benefits from the first one’s work, the first one is necessarily harmed. That’s a silly assumption in my opinion.
Anyway, from what I hear about Hansen’s source code, I gather it’s too much like a bowl of spaghetti for anyone to make serious use of it.
I disagree. We want to encourage some people to publish but not others. If somebody is finagling, fudging, cooking the books, doing sloppy science, or whatever, we don’t want to encourage them to publish their results.
You are the one who is claiming that in physical sciences, it is standard not to release source code. I have made no claim on this point.
You are entitled to your opinion. However, stating repeatedly that the NSF disagrees with me is not going to convince me of anything, particularly since you haven’t posted any NSF reasoning or analysis that addresses the points I have been discussing.
By the way, do you agree with me about the reasons for disclosing failures?
That’s not what the study says. It merely suggests that an increase in cloud cover might cancel out the effects of the ice loss. It doesn’t say this will actually happen.
So you say, but here in this thread I have seen that the biggest success from your side is an attempt to discredit the coding. (And that is really it.)
Unfortunately even that is not a 100% victory; as I said before, finding a flaw in the quality of the coding does not mean the program does not work.
Every big, pompous, so-sure-of-himself climate scientist who claims to be 100% sure of something does not deserve to be taken seriously (that goes for the monsoon modelers), just as I do not take seriously the so-sure-of-themselves deniers.
One has to point out here, though, that in a previous post, for example, the climate scientist said “most” models, yet you are happy to dishonestly imply that he said “all of them”.
And what is that bit about “heretic who is unworthy of funding”? The missions NASA has now are designed precisely to check whether the best criticisms the deniers have come up with are valid. The information that comes back will support one case better than the other.
IANA climate scientist, so that does not apply; it is ironic only if you ignore probability. And it is especially you who has to recognize that while we can discuss until kingdom come whether the evidence is reliable or not, the fact is that there is evidence.
I can indeed say that I’m not 100% sure, and yet, since the subject of this thread is whether we take Global Warming on faith, it is clear that no, it is not by faith, because there is evidence. You lose this one. If you want to discuss the validity of the evidence, a new thread is needed IMO.
Now, how badly will Global Warming affect humanity in the future? That is what I do not know. So we continue testing.
The reality remains that physics is the basis of many climate models. Checking whether a model predicts past weather is a test that the University of Oxford ran on the way to predicting what will take place:
I see the efforts so far here as concentrating too much on one facet of AGW, or GW for that matter. Computer models are a tool, and they are not perfect; that is why they are constantly checked against actual readings.
As should be painfully obvious by now, the usefulness of a program does not depend on the quality of how it was written.
So far, some of the deniers strike me like a “six sigma” guy who comes into a shop that already does the job well and congratulates himself because quality was improved by 5% (which is not bad). Now, what does that have to do with the product? Very little; it just made the system more effective at producing it.
This is how I see the efforts to discredit the models so far (once again, an effort that is not bad for quality), just not to the point of discrediting GW or AGW.
GIGObuster, this is my last post to you. I’m getting tired of your vague, generalized claims and accusations like:
Say what? What climate science? What models? Where did I say “all of them”?
and
What are you talking about? What flaw? What coding?
If at some time you want to discuss the science, with proper links, citations, and specific examples, I’m happy to do so. Read jshore’s posts for some good examples. Until then, I’m not willing to deal with your vague, unclear, faith-based claims.
Read it again: I’m not saying you said that, only that you implied it. You can expect that and more when you suddenly go from pointing out researchers who say “some” models (a moderated and humble position) to declaring that “the Emperor has no clothes”.
That is what I’m expecting from the critics who got the code from Hansen, and because other countries have similar results with other models, I have to deduce that the flaws will not affect the results much, in part because it is only one model.
As I see that you skipped a couple of links here, I think you are the one relying more on faith; when you launch a whopper like “we have no Theory of Climate”, it is really no wonder that you have to hope and pray that no readers will notice.
Yes, and I have been involved in electronics testing, and several times I remember seeing horrid code at the testing terminals. And yes, it was infuriating to see that those programs still did the job well.