I suspect that everyone who feels strongly enough about CAGW to debate it has the feeling that their own background is a good one to wade into climate science.
That said, I would speculate that a good background is one that gives you some familiarity with modeling, forecasting, and dealing with uncertainty (statistics, industrial engineering, economics).
With the right background, you would have been instantly suspicious of an estimate like this. You would have looked up the details in the IPCC report, and seen that it was highly suspect, to put it nicely.
Perhaps a good background for wading into climate science is experience in having people try to deceive you: lawyer, policeman, or collections agent, for example.
It HAS bounced around a wide range; it just tends to reach a metastable state, and stay there until something pushes it into a new state. Such as, for example, pumping enough CO2 into the air to overwhelm that negative feedback. Warm up the planet enough, and it’ll likely enter a new, warmer metastable state; a warmer climate that could last indefinitely, and be at least as hard to change as our present climate has been.
The red flag is that one could know such a number down to 100th of a percent.
Then when you actually look at the WGIII report, what do you see? Well, this for starters:
And another caveat:
Not only that, but you will note that the IPCC calculation apparently doesn’t make use of error or uncertainty ranges from its (cherry-picked) set of papers. Instead, it looks as though the IPCC just took the study with the highest estimate and used that as a maximum. I’m not an expert on meta-analysis, but that seems very sketchy to me. If you want to combine 5 or 6 studies that estimate a quantity, and each study has its own error range, I don’t see how it’s kosher to simply ignore the error ranges and construct a new range running from the highest-estimate study to the lowest-estimate study.
Anyway, these are the serious problems that can be seen just from a quick perusal of the IPCC report. Any one of them calls this 0.12 figure into serious question. And I bet if you looked further under the rock, you’d find a lot more worms.
This statement reflects some mathematical confusion on your part. Let me give you an example to help explain. Say that the government proposed raising the fee to enter National Parks and estimated that this would bring in $120 million in revenues. Someone could then challenge this number by saying, “But that is 0.005% of the total government revenues (~$2.5 trillion). How could you possibly know that number down to an accuracy of 1000th of a percent?”
The correct way to look at this is that they have given a number to two significant figures and, since the first digit is a “1”, it is barely more than one significant figure. It is reasonable not to give it to any more accuracy than that, because there are indeed some uncertainties involved.
Yes, there are uncertainties in the calculations…but where do you get this quotation from so that we can see what other parts you might have left out? Looking at the Summary for Policymakers for the WG-III report, I find the following quote:
Note that this says some of the same things as yours does but also includes uncertainties / assumptions that go the other way (which makes me wonder if you “cherry-picked” which uncertainties/assumptions you quoted in the quotations that you gave).
In addition to the ones that obviously go the other way, it is worth noting, as is stated very clearly in the Third Assessment Report, that the “top-down modeling” that they employ here generally gets a higher cost number than is obtained by “bottom-up modeling”. Part of the reason for this is that the top-down approach generally assumes that you start off at an optimal point where there are no zero- or negative-cost solutions, in contrast to what is found in the real world, e.g., by BP when they made a Kyoto-size cut in their emissions and ended up saving money:
(The IPCC report does say “Bottom-up and top-down models have become more similar since the TAR as top-down models have incorporated more technological mitigation options and bottom-up models have incorporated more macroeconomic and market feedbacks as well as adopting barrier analysis into their model structures” so the difference is apparently not as great as it used to be.)
Since you claim that the papers are “cherry-picked”, could you let us know some that you left out?
Again, I know that there are considerable uncertainties and assumptions in economic estimates of this sort. However, as I noted in a previous post, the historical record shows that in the past even the cost estimates of the regulating agency (such as the EPA) have tended to be overestimates of the actual costs of environmental regulations. Needless to say, the counterestimates of the industries being regulated have tended to be even more out-to-lunch.
Well, the point is that over a long-term, the climate under a significant forcing is expected to be dominated by the magnitude of the forcing and the sensitivity of the response to the forcing. While we don’t know either of these to extreme accuracy, we do know both of them with enough accuracy to make projections, albeit ones with still fairly large error bars. And, of course, there is always the danger in a complex nonlinear system that we could pass a tipping point and have something much more dramatic happen…so that is a sense in which the IPCC projections could actually turn out to be too conservative.
Well, your statement sounded almost word-for-word like how you had summarized the Santer paper in the past. What paper(s) are you referring to now?
Yes, I think you should toughen up. If you are going to viciously attack highly respected scientists on the basis of your interpretation of what they say, and to demand a very high standard with respect to being honest and not exaggerating, then you should expect to be held to that standard yourself, should you not? People rightfully tend to hold such critics to high standards. Just look at what happened to Eliot Spitzer: a lot of people have noted that someone who had not been so exacting with other people would have been able to weather such a scandal, but Spitzer’s whole ethos was built on demanding high standards from others.
And, in your case, there is an additional reason to demand a high standard from you and that is because your whole case here rests very strongly on your own credibility because you are saying things that are completely at odds with what is being said by scientific organizations that have a very strong reputation for being very credible.
Well, that’s great but then how do you decide what is a true scenario and what is an untrue scenario? You seem to have decided that your scenario is a true one and yet you have yet to cite any reputable studies to back it up as even a possible scenario, let alone any sort of likely one.
The answer that I would give is NO if they are really trading off honesty in trying to be more effective. However, if you have only 50 words to explain something that would actually take several thousand words at least, you clearly have to make some choices about what you say and what you leave out. One should do that in a way that is not dishonest, but I hope you can at least understand the pressure that puts one under. Look, how about this: why don’t you try to summarize your position on climate change in 50 words, and then we can debate how effectively you have balanced honesty and effectiveness in making your case?
Where do you get your estimates for what Kyoto has cost? Was Kyoto ever meant to already result in a cut in worldwide emissions or was it meant to start on the path toward stabilizing emissions and toward pushing the market to develop the technologies to reduce emissions? Was this all supposed to happen overnight?
Well, it is an unfortunate fact that the poor bear the brunt of most things…including the likely effects of climate change, which is why most poor nations have supported the Kyoto and larger UNFCCC (United Nations Framework Convention on Climate Change) process. At any rate, one of the reasons why Kyoto did not put any emissions restrictions on the developing countries was in order to try to avoid putting costs on countries that are not in a good position to afford it (and, frankly, generally don’t have the technological advancement to come up with the needed technologies anyway). Yet, many people who oppose action on climate change complain bitterly about this. It seems to me that people who oppose action on climate change will never be happy and are more than happy to embrace both arguments based on how much it will hurt poor people and how it is all a socialist plot to redistribute money from the wealthy nations to the poor ones.
Well, like I pointed out above, could those of you opposing action to mitigate climate change decide whether it is an anti-Robin Hood idea or whether it is a socialist idea foisted on the United States by the poor countries to try to steal our wealth, because, frankly, I am getting tired of having to argue against both complaints.
No, the onus is on you to explain why you have the right to put stuff into the atmosphere, which you don’t own, that is affecting everyone else and not pay the costs associated with this.
I agree that eventually China and India have to be a part of it…and they accept that too. However, for the reasons you note regarding the poor tending to bear the brunt of any costs of any sort (and, also, out of basic considerations of fairness regarding who has been responsible for most of the increase in CO2 levels in the atmosphere and practicality regarding who has the technological capabilities to best develop the new technologies), I think it is reasonable for the developed countries to go first. (Although I believe Kyoto also has some flexibility whereby the developed countries can earn some credits by supporting emission-reducing projects in developing countries, which is also a good idea.)
I agree with you that the process is not an easy one. This is certainly one of the largest collective challenges that human society has faced and it is a large problem that cannot be solved all at once. It will be an on-going process. However, getting off fossil fuels is something that will have to be done eventually anyway, as their supplies are limited. So, the real question is do we start the process now so that it can be done gradually and soon enough that we do not dump the entire store of carbon built up over hundreds of millions of years into the atmosphere within a few centuries or do we just sit back and wait, only going through the process after we have done what may be considerable irreversible damage to our environment, both through altering the climate and acidifying the oceans (and, frankly, perhaps causing other problems that we haven’t even anticipated)?
Oh really? Well let’s extend your example a bit. Suppose somebody told you that raising the fee to enter national parks would increase government revenue by 0.0048% over the next 50 years. Would you be impressed by that estimate? Of course not. Because factoring in future government revenue adds in a whole new area of uncertainty.
The fact is that we don’t know what government revenue will be 5 years from now, let alone 10 years from now. Not down to the nearest hundred million dollars, anyway.
Lol. It’s right there in WG III, Chapter 3.
Ahhhh . . . I just thought of another education background that would be useful in the global warming controversy: Accounting. When you read the IPCC report, it’s like studying the financial statements of a company that might be the next Enron. In other words, you can’t just look at the summaries and read what the company wants you to read. You have to study the footnotes too.
And if there are assumptions that “go the other way” as you say, it only adds to the uncertainty, making the 0.12 percent figure even less credible. Because errors don’t necessarily cancel out.
It says right in the report that they selected papers on a case by case basis.
But that is irrelevant to knowing approximately how much difference this particular cost will make. In your example, I don’t see anything wrong with saying that it will increase government revenue by that amount. Now, if you were to use that to conclude that government revenue would be exactly $5.025743 trillion in 50 years, then yes, I would say that this is too many significant figures to carry. However, it doesn’t stop us from having a reasonable estimate of the difference in revenues between the current fees and the new fees.
The point is that the effects would go in the opposite direction. I’m not saying they exactly cancel…but they should at least partially do so. And, yes, the 0.12 percent figure is an estimate…It is not an exact number.
You claimed that they “cherry-picked” the papers, implying that they left some out and, in particular, I assume your implication is that they left out ones with higher estimates. However, here is the full quotation from Chapter 3 of the WG-III report where they use the phrase “case-by-case basis”:
So, as I understand it, what they are saying is that for each of the different scenarios (i.e., levels of stabilization), they looked to see which studies satisfied the criterion of presenting a comprehensive mitigation analysis. So, it is not saying that they “cherry-picked” them; the “case-by-case” part just refers to the fact that for each scenario, they checked which of the studies did a comprehensive mitigation analysis for that particular scenario, and included the study’s result if it did so. Then, to produce the figure, they plotted the gray band representing the 10–90% results and also plotted a few representative results spanning the range from high to low estimates.
Of course it’s relevant. If you are describing cost as a percentage of GDP, then any uncertainty – whether in the numerator or denominator – will be transmitted to your final figure.
I don’t know how to make this any clearer.
So what? It won’t cancel out the uncertainty. Even partially.
Sure, and the IPCC has put lipstick on the pig by offering a number that seems like a precise estimate – but essentially amounts to a wild-ass guess.
That’s right, and I admit that my assumption is based on the IPCC’s lack of credibility combined with the fact that they gave themselves discretion to pick and choose.
And I’m pretty confident that if I had the time, I could find papers that give bigger estimates which the IPCC left out. Just like they left out Lindzen’s paper on sensitivity estimates.
However, for the sake of argument let’s assume that the IPCC didn’t cherry pick. It still doesn’t change my conclusion that the 0.12 estimate is highly suspect – to put things nicely.
You can’t make it clearer because you don’t know what you are talking about. Say that the GDP is only known to 10%; this would introduce a 10% uncertainty in the denominator, so that the 0.12% would really be between 0.108% and 0.132%.
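The arithmetic here is ordinary first-order error propagation; a minimal sketch with the same illustrative numbers (the 0.12% figure and 10% uncertainty are the ones from this exchange):

```python
# First-order error propagation for a ratio: a relative uncertainty in the
# denominator (GDP) carries through to roughly the same relative uncertainty
# in the quotient (cost as a percent of GDP). Illustrative numbers only.
ratio = 0.12            # cost as percent of GDP
gdp_uncertainty = 0.10  # 10% relative uncertainty in the denominator

low = ratio * (1 - gdp_uncertainty)   # -> 0.108
high = ratio * (1 + gdp_uncertainty)  # -> 0.132

print(round(low, 3), round(high, 3))
```

Note this changes the range of the estimate, not its central value, which is the point being argued.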
Look, the point is that you are simply wrong to be saying, “The red flag is that one could know such a number down to 100th of a percent.” The fact is that the IPCC gave a figure to two significant figures…and barely that since the first digit is a 1. You may not understand this, but I trust that the other people reading this thread understand.
They never claimed it was a precise value…but neither is it a “wild-ass guess”.
The IPCC enjoys great credibility with most of the nations of the world, including the U.S. government, and the scientific community in the world. The fact that it doesn’t have much credibility with you perhaps says more about you than them.
They didn’t leave out Lindzen’s paper; they discussed it. What they focused on were papers that derived, with some attempt at rigor, a probability distribution for the sensitivity (e.g., using Bayesian methods). Lindzen’s paper, in addition to its many other faults, didn’t do that; it just drew the vague conclusion that the results seemed to fit best with a sensitivity under 1 C.
Well, the “less than 0.12% of GDP growth per year” is the best estimate that we have. And, as I have noted before, the history of these estimates for the costs of environmental regulation is that they have in the past been too high because they have failed to take into account the market’s ability to find the least-cost solution (and have failed to account for the non-optimality of the initial state and so forth). These economic models are apparently improving, so maybe their overestimation of the costs is not so much of an issue anymore, I don’t know.
Right. As I’ve said repeatedly, uncertainty about GDP is transmitted to uncertainty about the estimate. This is very basic mathematics here.
That’s why the figure is suspect on its face. We can’t know next year’s GDP growth to one hundredth of a percent. So how can we know the effect of mitigation on GDP to one hundredth of a percent?
Now, I realize that there is an explanation for this – that the 50-year estimate is something like 5.5 percent of total GDP, and that equates to a 0.12 percent per year difference. But that only raises another question: how can we know that the cost of that level of CO2 mitigation will be 5.5% of GDP in 50 years? The 5.5% figure is ridiculous on its face. There’s no way we could know the figure that precisely.
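For what it’s worth, the per-year / 50-year conversion mentioned here is simple compounding; a quick sketch, treating the drag as exactly 0.12 percentage points per year (which slightly overshoots the ~5.5% total, consistent with the report’s figure being “less than” 0.12%):

```python
# Compound a 0.12-percentage-point-per-year drag on GDP growth over 50 years.
# For small rates the cumulative loss is roughly 50 * 0.12% = 6%, in the same
# ballpark as the ~5.5% total figure discussed above.
annual_drag = 0.0012
years = 50

cumulative_loss = 1 - (1 - annual_drag) ** years
print(round(cumulative_loss * 100, 1))  # ~5.8 (percent)
```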
One would hope that the study in question gave a range for its estimate, but the IPCC simply ignored that range, if it ever existed. A point that you haven’t even bothered to deny. And this is an organization that has great credibility? Give me a break.
Of course not. They simply implied it, without actually claiming it. A classic alarmist ploy.
It says something about everyone, actually.
Whatever. They omitted it from their chart. As I recall, they never explained what the inclusion criteria were. Your explanation is just speculation.
So what? If it’s a lousy estimate, it’s a lousy estimate. I bet if you took the study it’s based on and did a simple sensitivity analysis on the range, you’d end up with a huge honking range.
I don’t have a dog in this fight, I’m not taking sides on the underlying question.
But two significant figures is two significant figures, whether the first digit is a 1 or a 9 … so I fear I don’t understand this. What difference does the value of the leading digit make, whether it is 1, 5, or 9? Is a 9 somehow more significant than a 1?
tim314 (tim pi?), you finished an interesting post up above with:
You are correct to focus on the basics. The greenhouse effect exists. No question. CO2 changes cause a change in radiative “greenhouse” forcing. No question.
How much warming will result from that forcing? Unknown at this time. This number is known as the “climate sensitivity”. It is expressed either as the amount of warming from the change in forcing due to a doubling of CO2 (°C/doubling), or alternatively in °C per W/m2 of forcing. I prefer the latter measure, because the change in forcing from a doubling of CO2 has a range of definitions from about 3.6 to 4.1 W/m2, making it uncertain.
Scientific estimates of climate sensitivity range from near zero to around 2°C per W/m2. The IPCC considers the range to be 0.4 to 1.2°C per W/m2, with a most probable value around 0.8. If it is 2, then there will be climate catastrophe, a 7.4°C swing.
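The two conventions convert into each other by multiplying by the forcing per doubling; a quick sketch using 3.7 W/m2 for that forcing (my choice of a round value within the 3.6–4.1 range mentioned above):

```python
# Convert climate sensitivity from degC per (W/m^2) to degC per doubling of
# CO2, assuming ~3.7 W/m^2 of forcing per doubling (a common round value
# within the 3.6-4.1 range noted in the post).
FORCING_PER_DOUBLING = 3.7  # W/m^2

def per_doubling(sensitivity_per_wm2):
    """Sensitivity in degC per W/m^2 -> degC per CO2 doubling."""
    return sensitivity_per_wm2 * FORCING_PER_DOUBLING

print(per_doubling(0.8))  # ~3.0 degC per doubling (the central value above)
print(per_doubling(2.0))  # 7.4 degC, the "catastrophe" case quoted above
```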
My own estimate of the climate sensitivity is about zero. Let me try to explain why.
Most people have an image of the climate as being in some kind of delicate and precise balance. In this view, a slight adjustment of one of the variables (say CO2 or solar input) causes a corresponding change in equilibrium temperature.
In fact, the climate is a huge natural heat engine. The heat comes in at the tropics. Some of that heat is transported by the ocean and the atmosphere to the cold end of the heat engine, doing work along the way. How much heat is transported?
Suppose we take a one million watt (one megawatt) power plant for comparison. It will run a small town. So let’s scale up to a one thousand megawatt plant. Eight of these giants will run New York City. Suppose we start placing them on the equator, to represent the heat carried away from the equator by natural circulation. How many of these huge thousand-megawatt power plants would it take to equal the heat circulated in the climate heat engine?
The answer is, we’d have to place one of these giant power plants every twenty feet (six metres) all the way around the world to equal the natural heat circulation. It would take over seven million of them to generate the same amount of energy that is constantly being circulated polewards by the ocean and the atmosphere.
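These round numbers can be sanity-checked; the sketch below assumes the ~40,075 km equatorial circumference and simply computes the total power implied by the stated spacing, rather than taking any poleward-transport figure on faith:

```python
# Sanity check of the "one 1000 MW plant every ~6 m around the equator"
# picture: how many plants is that, and how much total power?
# (Equatorial circumference ~40,075 km is an assumed round number.)
circumference_m = 40_075_000
spacing_m = 6
plant_watts = 1_000e6  # one thousand megawatts

n_plants = circumference_m / spacing_m
total_watts = n_plants * plant_watts

# About 6.7 million plants and ~6.7e15 W total, i.e. several petawatts --
# the same order of magnitude as the "over seven million" claim above.
print(f"{n_plants:.2e} plants, {total_watts:.1e} W")
```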
Flow systems such as the climate are governed by the Constructal Law (see Bejan). This states that the climate system self-organizes in such a manner as to maximize the power created and dissipated in the heat engine. Bejan has shown that we can derive the size and flow of the Hadley cell circulation from the Constructal Law.
This means that the earth has an equilibrium state, which is that state where the most power is dissipated in the circulation of the ocean and the atmosphere from the tropics poleward and back again. It is not at some hypothetical delicate balance, easily moved by a small forcing. It is at a constantly maintained equilibrium state of maximum power dissipation, in a huge, colossal heat engine moving unimaginable amounts of energy.
The parameters of this equilibrium climate state are set by the locations of the continents and the unchanging physics of wind and water — the rate of cloud formation, the amount of energy transported at a certain speed compared to the turbulent power dissipation at that speed, the relative weights of air (28.8) and water vapor (18), the Clausius-Clapeyron evaporation equation, and the like.
As observational evidence for this equilibrium state, consider that over the last half billion years, the strength of the sun has increased by about 17 W/m2. If the IPCC central forcing estimate is correct and the sensitivity is about 1°C per W/m2, the earth should have warmed by 17°C … but that didn’t happen.
In fact, the Earth has seen an ever-warming sun, a number of asteroid strikes, massive volcanism, shifting continents, and a host of other huge changes … but the climate has merrily gone along at about 290 K ±2% for half a billion years, and ±1% for much of that time.
That’s why I say that the climate sensitivity is effectively zero. Not because there is no change in forcing. But because the equilibrium state of the planet is determined by maximum power production/dissipation, not by the forcings. If the climate didn’t change from a huge change in the solar “constant” over the millennia, why should a much smaller change in CO2 forcing cause a large perturbation?
Further information on this concept is available here; it is peer reviewed, so you don’t think this is just my idea.
intention, your most recent post contains some very interesting food for thought. Indeed, there are some interesting questions about how the earth’s climate has been regulated within a certain general range throughout its entire history. Unfortunately, I worry that your hypothesis may in some sense explain too much…and, in particular, that it may explain periods far back in time for which the actual conditions (in terms of climate, atmospheric composition, continental and ocean geology) are poorly constrained by the more limited experimental evidence at the expense of our understanding of more recent periods for which we have considerably more detailed information.
In particular, I was wondering how this theory is compatible with the fairly dramatic swings in climate that we have seen, e.g., during the current period of glacial-interglacial oscillations over the last couple million years (with the most recent swing being only about 10,000 years ago or so) in response to what seem to be fairly small changes in forcing. Secondly, I will point out that if you look at this Wikipedia page, they discuss the current thinking on the faint-young-sun paradox. Indeed, they do propose that there are some negative feedbacks to account for this; unfortunately, however, the evidence suggests that the negative feedbacks involve greenhouse gases such as CO2 and methane and operate on considerably longer geologic timescales than the one that is of interest to us with the current accumulation of CO2 in the atmosphere. Here is an extensive quote from that page about this:
Another question that I had is whether the hypothesis that you present is compatible with the known effects of greenhouse gases in warming the planet. I.e., it is a simple radiation calculation to determine that the earth’s surface is some 33 C warmer than it would be with its current albedo in the absence of greenhouse gases. So, it is clear that greenhouse gases have a significant effect on climate, although it is admittedly unclear to me what numerical constraints this would actually put on climate sensitivity.
So, in summary, as we currently understand more recent paleoclimate events, it seems that the earth’s climate is actually quite sensitive to changes in forcing. When we look on the much larger timescales that you noted, there is evidence that negative feedbacks seem to come into play; however, the experimental evidence during these time periods is considerably more limited and, from what we do know, it seems that the negative feedbacks are likely to involve greenhouse gases in a way that only operates on timescales that are much longer than the current timescale of interest.
jshore, your comments as always are interesting and provocative.
I am aware of the alternative GHG hypotheses for the solution of the “Faint Early Sun” paradox, although they are given somewhat short shrift in Wikipedia. However, if we are to ascribe the additional heating to CO2/methane, several conditions have to be met:
The sun is thought to have warmed some 30% in the last three billion years or so. This is a change of about 80 W/m2. To counterbalance this, CO2 would have to double, not merely increase but double, about 80/3.7 = 21 times. Minor problem: 21 doublings of the current 380 ppmv yields an atmosphere with more than 100% CO2. This means that methane must have been involved as well … but then we have to assume two greenhouse gases in delicate balance with temperature, not just one.
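The arithmetic in this condition is easy to verify (the 3.7 W/m2 per doubling and 380 ppmv figures are the ones used in the post):

```python
# Doublings of CO2 needed to offset ~80 W/m^2 of extra solar forcing at
# ~3.7 W/m^2 per doubling, and the concentration that would imply starting
# from 380 ppmv. A pure-CO2 atmosphere is 1,000,000 ppmv by definition.
doublings = 80 / 3.7
print(round(doublings, 1))  # ~21.6

implied_ppmv = 380 * 2 ** 21
print(implied_ppmv)  # vastly more than 1,000,000 ppmv, i.e. impossible
```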
Since the sun has warmed gradually over this period, in order for this plan to work, the CO2/methane have to very gradually decrease, from a proposed high some three billion years ago, to the present much lower state. There is no evidence that this has happened.
In order for this plan to work, as the temperature slowly increases, the CO2/methane levels would have to decrease. I know of no evidence, either observational or theoretical, which shows a long-term negative correlation between temperature changes and GHG changes.
In order for this plan to work, the CO2/methane could not decrease linearly. Since the forcing is logarithmic, this would require a very exact exponential decay of the GHGs to stay in line with the increasing forcing.
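The exponential-decay requirement in this last condition does follow from the logarithmic forcing law; a small sketch, assuming the standard simplified formula dF = 5.35·ln(C/C0) W/m2 (the 5.35 coefficient is my assumption, not from the post):

```python
import math

# If GHG forcing follows dF = 5.35 * ln(C/C0) (simplified CO2 forcing law),
# then cancelling a solar forcing that grows linearly at rate alpha requires
# C(t) = C0 * exp(-alpha * t / 5.35): an exact exponential decay, as claimed.
C0 = 380.0   # ppmv, arbitrary starting concentration
alpha = 0.5  # W/m^2 per unit time, arbitrary linear growth rate

for t in (0.0, 1.0, 2.0, 3.0):
    C = C0 * math.exp(-alpha * t / 5.35)
    ghg_forcing = 5.35 * math.log(C / C0)
    # The GHG forcing exactly offsets the +alpha*t solar increase at every t:
    print(t, math.isclose(ghg_forcing, -alpha * t, abs_tol=1e-12))
```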
In the words of the immortal IPCC, I find it “highly unlikely” (<5% chance) that all of those conditions have obtained over the entire period of the last three billion years.
This question, of whether the world has an equilibrium temperature, is to my mind the great unanswered question in the field. Unfortunately, it cannot be answered with GCMs, at least in their present form. To do that, we need models like Wu’s model in my citation above.
w.
PS - I just noticed that the IPCC’s list of terms, viz:
doesn’t contain any term for a couple of the slots. There’s no term “Less likely than not <50%” … hmmm. And there’s no term for “Really, really unlikely < 1%”.
Well, the rule on using significant figures is sort of fuzzy, but as I understand it, it says roughly that you should quote only as many figures as you can justify: the last digit you quote may be somewhat wrong, but not wildly so (e.g., if you say “0.19”, the correct result could be “0.21”, say). Let’s just arbitrarily simplify this by saying that the last digit quoted can be off by at most 3. Then “0.12” corresponds to 0.12 ± 0.03, and so it implies knowledge of the value to within ±25%. However, “0.92” corresponds to 0.92 ± 0.03 and thus implies knowledge to within ±3%.
That’s all I meant…namely the observation that if you have a number quoted to 2 digits and the leftmost digit is a 1, then being off by one in the rightmost digit is an error on the order of ten percent, whereas if you have a 2-digit number and the leftmost digit is a 9, then being off by 1 in the rightmost digit is only an error of order one percent.
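The observation can be made concrete with a line of arithmetic (a sketch; the ±0.03 convention is just the simplification adopted above):

```python
# Relative error implied by an absolute uncertainty of +/-0.03 in the last
# digit of a two-significant-figure number: ~25% when the leading digit is
# a 1, only ~3% when it is a 9 (the 0.12 and 0.92 examples from above).
def relative_error_pct(value, absolute_error=0.03):
    return absolute_error / value * 100

print(round(relative_error_pct(0.12)))  # 25 (percent)
print(round(relative_error_pct(0.92)))  # 3 (percent)
```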