How will global warming play out?

jshore, first and foremost, have a very wonderful traveling holiday, and the best of the season.

Second, yes, I had looked at that citation of yours previously. Both you and Mann seem to be confused about the distinction between replication and similar studies. In the citation you provided, he only lists one replication of his results (Wahl and Amman). The rest are all various similar studies, which are meaningless as replications.

In the very post you are replying to, I have already explained, with several citations and a list of 21 specific unreplicated items, why Mann’s claim about Wahl and Amman is false. Even after the Corrigendum admitting his prior withholding of data, there still wasn’t enough information to duplicate his results, and Wahl and Amman were unsuccessful.

So, Wahl and Amman is just more Mann persiflage … and in any case, your claim was that “other groups” replicated Mann’s work, so … name me one.

w.

PS - in the letter you cite, Mann makes the outrageous claim that “Whether I make available my computer programs is irrelevant to whether our results can be reproduced.” See my post above, and the links therein, to see why this is nonsense. Several groups tried to replicate his work without his code and failed. His code contained a variety of undocumented methods and errors which actively prevented replication based on his description of his methods.

Use a safe Credit Card, be a little wary - and enjoy yourself enormously

Hi, intention, and I hope you had a nice holiday! I actually spent the plane ride down reading the Santer paper all the way through, including the supporting online material, so I am much better able to comment on it now. Here are the points that I would make:

(1) In the supporting online material they say that they used version 5.1 of the UAH data set. (For the RSS data set they used version 1.3.) So, indeed, with the tropical trend going from 0 to ~0.06 C per decade, this pushes the UAH data into better agreement with the model results, although certainly not all the way there yet…maybe about 35-40% of the way (judging from Fig. 4).

(2) I can now say with more confidence than I did before that if you think you should be looking at Figures 1 and 2 of the paper to get the main point, then you indeed misunderstood the paper. The whole point is that although the models do predict a wide range of warmings over this 20 year period, there is a quantity that all the models and theory agree on quite closely, and that is what is plotted in Figure 4 (and in a different way in Fig. 3), i.e., the amplification of the temperature fluctuations / trends as you go up in the troposphere.

Furthermore, even all the satellite and radiosonde data are in good agreement with this amplification factor if you look at fluctuations on shorter (e.g., monthly) timescales. It is only when you look at trends on decadal timescales that 3 of the 4 data sets diverge significantly from the results predicted by models and theory (with the RSS satellite data set being the one that continues to agree…or at least disagrees only modestly).

So, it is on the basis of these figures that Santer et al. argue that they believe the models are right and 3 out of the 4 data sets wrong (over decadal timescales). They note that there is no understood physical reason why the amplification should hold on monthly timescales and then fail to hold over these longer timescales. On the other hand, there are lots of quite good reasons to believe that the fluctuations in the data are reliable over the shorter timescales but that the trends over the longer decadal timescales are not.

(3) They note in the supporting online materials that there is another satellite analysis of T2 from U of Maryland that shows even stronger warming than the RSS analysis (and thus one might imagine would be in closer agreement with the model results). However, this dataset is available only as a global mean and thus they could not use it to derive tropical temperature changes.

(4) Just to reiterate, it is Figures 3 and 4 that Santer et al. use to draw their conclusions. I.e., their conclusions are not based on comparing the amount of warming seen in the various data sets with that predicted by the various models. Rather, they are based on comparing the amplification factor seen in the various data sets with that predicted by the various models (making this comparison both over shorter timescales, where everything agrees, and over longer timescales, where 3 of the 4 data sets diverge significantly). Note that the amplification factor is a quantity that all the models agree on quite closely even when their predictions for the amount of warming over this 20 year period vary significantly.

(5) I suppose that you might still be bothered by the fact that the model predictions for warming over this 20 year period shown in Figure 2 vary so significantly across the models. As this is a tangent to the main focus of the paper, they do not really address this. However, I imagine it is because, as you have noted previously, 20 years is a short period over which to get a very accurate trend. I.e., it is sensitive to the fluctuations that are seen, e.g., due to the ENSO (El Nino, La Nina) oscillation and such. Of course, the models are not able to predict these jiggles accurately as they will vary even from one run to the next when the initial conditions are perturbed. In that context, it is not surprising that the data sets are in closer agreement with one another than the models are, since the data sets are all looking at the same “run” of the atmosphere, e.g., one where there was a large El Nino in 1998, whereas the models are not. Note also that the models varied quite a bit in terms of what forcings they included, as is shown in the supporting online material. A few of the models didn’t even include things like volcanic aerosols or sulfate aerosol indirect effects. This presumably added some additional spread amongst their results over this 20 year period.

This statement makes me wonder if part of the problem here is a difference in definition of what “replication” means. For example, I (and many, many other people) have published papers based on Monte Carlo simulations, which means that we used random number generators. I have never published (nor have I ever seen published) enough details of these simulations that they could be reproduced exactly. To do that, one would have to say what random number generator(s) were used, what seed it was given, and probably even details regarding the bit allocation on the machine it was run on. One would also have to give much more detail about the algorithm…an almost line-by-line description of the code in order to ensure that the random numbers were used in exactly the same sequence. If you want this to be the new standard for publishing scientific papers then what you are proposing would certainly be a revolution in physics…and I imagine in many other fields. Hell, when I referee a paper, I am usually happy enough if the algorithm is explained in enough detail to at least allow me to understand what the authors did at a basic level (and judging by what I see published, or even what I see fellow referees of some of these manuscripts say, I would say I am somewhat of a stickler in this regard compared to most). [In experimental work, it would be essentially impossible to control and report all the necessary variables to allow replication in this very strict sense.]
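Just to make that distinction concrete, here is a throwaway sketch (modern Python/NumPy, purely illustrative, nothing to do with Mann's code or mine): bit-for-bit reproduction requires pinning down the generator and the seed, while a different seed gives a different number even though the science is unchanged.

```python
import numpy as np

def mc_mean(seed, n=100_000):
    """Monte Carlo estimate of the mean of a standard normal."""
    rng = np.random.default_rng(seed)  # generator and seed fully specified
    return rng.standard_normal(n).mean()

# Same generator, same seed: the result is reproduced bit-for-bit.
assert mc_mean(seed=42) == mc_mean(seed=42)

# Different seed: the estimate differs in its digits, even though it is
# statistically just as good.
print(mc_mean(seed=42), mc_mean(seed=43))
```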

Glad you had a good holiday, jshore, and welcome back. I had said:

You replied:

Since Mann did not use random number generators, and since he did not describe his methods in enough detail for any of the several groups who tried to replicate his work to do so, and since he made errors in his mathematics which were not explainable without his code, I fail to see what your post has to do with Mann’s work.

There is only one definition of replication. Replication means that somebody is able to take the same data, apply your methods, and get the same results. Depending on the nature of what has been done, this may be somewhat trivial, or may require detailed explanations, supplementary material, or computer code. But whether the methods are trivial or complex, the point is simple — can somebody else get the same results?

With Monte Carlo experiments, this generally means specifying the nature and details of the random process (white noise, red noise, random walk, ARMA(1,1) process, etc.) used. By their very nature, Monte Carlo results should be replicable without duplicating the random number seed, generator, or bit allocation. But even with Monte Carlo methods, you must give enough information that your Monte Carlo test can be run by somebody in Argentina and will give the same results that you got. You can’t just say “I did a Monte Carlo test” and leave it at that, or people will not be able to duplicate your results.
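To illustrate what “enough information” means in practice, here is a toy sketch (Python, with made-up parameter values, not anything from the papers under discussion): if the paper states that the noise model is red noise, AR(1) with lag-one autocorrelation 0.3 and unit innovation variance, then anyone can rerun the test with their own seed and get the same answer to within sampling error.

```python
import numpy as np

def ar1_surrogates(n_series, length, phi, sigma, seed):
    """Red-noise (AR(1)) surrogate series: x[t] = phi * x[t-1] + eps[t]."""
    rng = np.random.default_rng(seed)
    x = np.zeros((n_series, length))
    for t in range(1, length):
        x[:, t] = phi * x[:, t - 1] + rng.normal(0.0, sigma, n_series)
    return x

# Two labs, two different seeds, same published noise model (phi = 0.3, sigma = 1.0).
a = ar1_surrogates(5000, 600, phi=0.3, sigma=1.0, seed=1)
b = ar1_surrogates(5000, 600, phi=0.3, sigma=1.0, seed=2)

# The Monte Carlo benchmark (here, the 95th percentile of the per-series variance)
# agrees to within sampling error, even though no individual series matches.
print(np.percentile(a.var(axis=1), 95), np.percentile(b.var(axis=1), 95))
```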

While your experiences are interesting, I fear you have not addressed the central issue. Nobody could replicate Mann’s results, and he refused to reveal his methods in sufficient detail to allow that to happen.

w.

Let me see if I follow the gist of your argument. The model results disagree with the data in Figure 1 and Figure 2, but we should ignore that because the record is too short for good results (even though it is the same length in all the figures).

The model results disagree with both the data and the theory in Figure 3A, with data and theory being the outliers, in which case the models are right and data and theory are wrong. In Figure 3B, the models disagree with the theory as in 3A, and in addition, theory and models are quite different. In this case the models are right, the theory is kind of right, and the data is wrong.

The models agree with the data in 4A and 4B, in which case the models and the data are right. Finally, the models agree with one of four datasets in 4C and 4D, in which case the one dataset is right and the others, which are all quite close to each other, are wrong.

Am I correctly stating your claims?

Additionally, from my own research I can tell you that the models do a horrible job at reproducing the surface temperature variations. Take a look at Figure 1 for confirmation. In general, their excursions are larger, faster, and longer than those of the real world. This is not, as you claim, from things like El Nino, as this would give the opposite effect. The real ocean temperature is characterized by long stable regimes (e.g. Fig. 1C, 1990-1997) which are absent from the models.

I mentioned before that at a minimum, the models should conserve energy. You’d think that this would be a no-brainer … but you’d be wrong.

You’d think that the model results would output physically possible occurrences … but you’d be wrong. For example, in “Present-Day Atmospheric Simulations Using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data,” the GISS NASA modelers (James Hansen, Gavin Schmidt et al.) say:

It is “common” in their GISS ModelE (which is one of the best ones) to end up with “negative masses”, so they limit the advection globally to half the mass? … Perhaps you could comment on the accepted range of uncertainty of that 0.5 parameter, and on the accuracy of the gravity-wave parameters that lead to those physically impossible results.

In short, the models are incapable of doing what they claim to do - model the world. You are welcome to believe them without testing if you wish … me, I’ll pass until they can pass at least some bozo simple tests, like the conservation of energy and producing physically possible results …

w.

intention: However, my whole point is that you and the other folks over at ClimateAudit have set yourselves up as judge and jury in deciding what constitutes replication…and seem to have adopted a definition that is at odds with the usual definition that I have seen in the physical sciences. Amman and Wahl seem to believe that they have replicated the results and, judging by the figure here, I would agree with them that they have (at least over the 1400-2000 time period that is shown in the graph).

Mind if I just interject a quick question? You’re obviously a knowledgeable bunch (I only took enough stats to cover the bell curve, so you’re way over my head here).

My husband, who is a scientist but not a climatologist, has this theory.

Do you think that the downward dip that started in around 1950, which seems to have arrested an upward trend, was simply due to the extensive nuclear testing we were doing at that time? Wouldn’t that have sent a lot of dust (and Og knows what else) into the upper atmosphere? Might that effect have worn off years later, leading to the acceleration we’ve seen?

I remember the concern in the 1970s about the planet cooling off and a “coming ice age.” Isn’t that when we ceased nuclear testing?

I also seem to recall that when Mt. St. Helens erupted, it sent temps down a bit for a while.

Is man contributing to global warming? I hope so. There is so much evidence that it is happening and of what the results could be. If we are not responsible, we are in for some ugly times. If we do contribute, we can cut it down or maybe stop it.
Personally, I think it wise to take it very seriously. We can cut down our contribution, clean the air and water, and make this rock a better place to live. If we get off oil and improve wind and solar technology, it will be better for all of us.
Arguing that it is not proven is a diversion. Argue instead that we should not cut down our oil dependency and smog-polluting technology. That would be tougher.

I am not qualified to answer your question, which seems to imply a mini-nuclear winter. I can only suggest that the amount of dust injected into the atmosphere by occasional nuclear tests in the 1950s probably pales in comparison to the pollution caused by the massive oil fires after the first Gulf War. There were predictions of dire consequences from the fires, but none of them seem to have come true.

But I would like to call your attention to the warmer planet around the Middle Ages, which made Greenland habitable for the Vikings, and the reverse, the Maunder Minimum, a cold 70-year spell around 1700 CE. Neither of these is likely to have been caused by humans, but both correlate with sunspot activity.

So we know our planet is capable of significant climate changes without human interaction.

My only point is that while humans may be a factor today, the climate seems to have a mind of its own, and just because an average temp goes up or down for a few decades doesn’t intrinsically imply human causes.

fessie: It is probably technically true that nuclear tests would have kicked up some small particles into the atmosphere and that the net effect would have been some cooling. However, the question is how large that effect would be in comparison to other effects of this sort, such as volcanoes and general industrial emissions of particles (especially sulfate aerosols). I don’t know if anybody has tried to estimate the relative magnitudes, but my impression is that the dip in temperatures from the 1940s to about 1970 is generally attributed to an upsurge in volcanism and in human emissions of sulfate aerosols from industry, with nuclear testing playing at best a small role.

You are right that large volcanic eruptions cause temporary cooling. For example, the eruption of Mt. Pinatubo in the early 1990s can be seen clearly in the climate record and has in fact been used to test the climate models.

As for the idea that there were concerns about a coming ice age: while there were a few articles in the popular press (one, in particular, in Newsweek) in the 1970s about this, there was never a widespread consensus on it in the scientific community. In fact, at the same time that some worried we were naturally due for an ice age at some point in the not-too-distant future, and some worried about the cooling effects of sulfate aerosols in the atmosphere, there was already a strong recognition of the warming role played by increasing greenhouse gas levels. A National Academy of Sciences study around 1975 concluded that while future climate change was definitely a concern and more study of climate was needed, the science was still too immature to support predictions about future climate.

It is also important to note that the 1970s marked the passage of the Clean Air Act, which was probably more important than any change in nuclear testing in terms of reducing the emission of aerosols into the atmosphere. It made it clear for the first time that we, as a society, would not continue putting more and more of these pollutants into the atmosphere. (As it turns out, because the lifetime of CO2 in the atmosphere is longer than that for these aerosol particles, the CO2 and its warming effects tend to win in the end as long as the aerosol emissions don’t continue to grow rapidly.)

Interesting, thanks for your replies. My recollection of the '70s ice age fears is based on some articles in Scholastic magazine, which was handed out in the schools (and which I think I still have around here somewhere).

My husband said he thought sulfates were probably a stronger contributing factor, too; I was just hoping a simple, “crackpot” theory might hold some promise. Acid rain v. global warming, what a lovely choice.

My husband was thinking that nanotechnology might hold a stopgap cure; he says some people are working on possible solutions (based on whatever database it is that scientists use when researching papers). I hope it plays out sooner rather than later.

There’s a lengthy piece in The Independent concerning “feedback” – sounds like the methane in Russia’s thawing tundra may have dwarfed CO2 emissions this year. My husband also mentioned a recent article (sorry, no cite) stating that the beef industry is more damaging, in total, than all of transportation combined, because methane is so awful.

Yeah…That article in the Independent is pretty depressing!

And, I think I know what it is that your husband is referring to. There was a recent UN report about the amount of methane released by cattle and other livestock that was quite worrying. The positive feedbacks such as release of methane from the tundra are also worrying.

It gets a little confusing comparing the effects of methane and CO2. Methane is a much more potent greenhouse gas molecule-for-molecule, but it also has a shorter lifetime in the atmosphere than CO2. So, dealing with methane is very important and could help buy us some time, but it won’t be a substitute for dealing with CO2 because, unfortunately, as long as those levels keep rising, we’re in trouble.

Wahl and Amman are not disinterested parties. They are friends and co-authors of Mann’s, so the fact that they claim they have replicated the results means nothing.

jshore, I thought you said you were a scientist. I sent you a list of 21 different things that Wahl and Amman didn’t replicate … and you reply that, by looking at one figure, you think their results kinda look the same as Mann’s. Well yes, they’re similar, but even a cursory look shows that they are significantly different in various years. Does a figure that’s kinda similar constitute replication in your world? And if so, what branch of science are you in?

In the world of mathematics, replication means you get the same result. Not something similar to the result. The same. We are talking about mathematical operations on data. Not Monte Carlo simulations. Not random numbers. Not measuring physical constants. Mathematical operations on data.

As a scientist you must know that if you get different mathematical results with the same data, you’re not doing the same operations, and thus you haven’t replicated the experiment.

If you want to deal with those 21 issues, point by point, we can continue this discussion. If you want to claim that a similar graph means something in the world of mathematics, let’s take up another topic.

w.

fessie, thanks for your post. A quick reality check. Methane levels in the atmosphere have not increased in the last ten years, so if there is increased methane in Russia (which we don’t know), it’s not affecting the atmosphere.

Also, the idea of runaway global warming is nonsense. The article says:

So the theory is if it gets warmer, positive feedbacks will make it warmer yet, which will make it warmer, which will make it warmer … riiiiight.

If that were possible, it would have happened before, when the earth was warmer. The fact that the earth’s temperature has stayed within a fairly narrow range for billions of years shows that the balance of feedbacks cannot be positive. If it were, the earth would have spiraled into heat death long ago. The ice cores clearly show that the earth has been a couple of degrees warmer within the last 10,000 years, and we didn’t spin into heatstroke.

And in any case, methane levels are stable, so clearly the warming in the last decades has not detonated the dreaded feedback bomb.

w.

This statement is incorrect for two reasons:

(1) It is possible for a positive feedback to cause a magnification of an effect and not a complete runaway. Let’s take the water vapor feedback as a simple example and suppose that for each 1 deg of warming due to the direct effect of CO2, there is an additional 0.5 deg of warming due to the increase in water vapor. Then this 0.5 deg of warming will feed back on itself to produce a further increase in water vapor that warms things by another 0.25 deg, which then produces an additional 0.125 deg of warming, and so on. This is a convergent geometric series that leads to a total warming of 2 deg…i.e., it magnifies the warming by a factor of 2.
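Written out, with a feedback fraction f = 0.5 acting on each successive increment, the arithmetic is just a geometric series:

```latex
\Delta T_{\mathrm{total}} = 1 + 0.5 + 0.25 + 0.125 + \cdots
  = \sum_{k=0}^{\infty} f^{\,k}
  = \frac{1}{1-f}
  = \frac{1}{1-0.5} = 2\ \mathrm{deg}
```

The series converges (i.e., no runaway) for any f < 1; the amplification only blows up as f approaches 1.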

(2) It is also theoretically possible to be in a regime where the feedbacks for the climate system are sufficiently positive that it is linearly unstable but stabilization is eventually provided by higher-order terms. Admittedly, one could argue, on the basis of the fact that the climate has been fairly stable over the last ~10,000 years, that the climate system was not initially in such a state before we began our “experiment”. However, we cannot rule out that our forcings will push it to a state of instability that causes a rapid change to a different state. Yes, it won’t be a complete runaway…because eventually higher-order terms come in to stabilize it, but it will still be a linear instability.
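As a toy illustration of that second possibility (a generic textbook form, not any actual climate model), consider a temperature perturbation T' that is linearly unstable but stabilized by a cubic term:

```latex
\frac{dT'}{dt} = \lambda T' - b\,T'^{3}, \qquad \lambda > 0,\ b > 0
```

Small perturbations initially grow exponentially, but the cubic term halts the growth near T' = ±(λ/b)^{1/2}: not a literal runaway, just a rapid transition to a different state.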

Look, intention, I agree with you that if you do the same exact mathematical operations on a set of data then, to the limits of machine precision, you will get the same result and this will be the sort of exact mathematical replication that you are speaking of.

However, and this is the important point, which is why I am bolding it: In the physical sciences, when we talk about researchers having to give enough detail for others to replicate their results, this is not the standard that is generally demanded. I know of absolutely no journal in the physical sciences that requires authors to give enough details of their work to allow this standard of replication. It may be that you feel that this should be the standard. Then, by all means, it is your prerogative to try to fight for a major sea change in the way that research in the physical sciences is done. However, to demand that Mann et al. live up to a standard that is not in fact the standard expected of him by the journals he publishes in or by his peers in the field is simply not realistic.

Look, I gave you an example of Monte Carlo simulation but in fact I could give you an example with no random numbers involved: The code that I use at work has a step where we take experimental data and interpolate it for further use in the modeling. The interpolation routine is complicated…in fact, iterative. To allow people to replicate my calculations to the degree that you are demanding would require me practically publishing the interpolation routine code line-for-line. This is so far from anything that any journal in the field would ever demand from me as to be frankly amusing to contemplate!

Now, in my own work with my own computer codes, I do sometimes like to make sure that a change I make to the code still gives the same results down to this level of accuracy. Other times, I have made changes to the code that I know may actually cause changes in the results that are not physically significant but do not constitute exact replication in this strict mathematical sense. For my own edification, when I am recording modifications to my code, I try to make a special note to myself when I make a change that I think can produce small mathematical changes in the results. However, as I will repeat again for emphasis, when scientists talk about other scientists having to give enough details to allow replication of results, they are not talking about it having to be done to the level of detail necessary for this to be true in a strict mathematical sense. The sense of replication that is meant is the sense of getting the same basic physical result. This is clearly true in experimental work, where it would be impossible to have a higher standard. However, even in numerical work such as Mann’s, where it is at least in theory possible to demand this higher standard of replication, that is not the standard that is demanded.
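To make the interpolation example concrete, here is a hypothetical sketch (not my actual work code): two perfectly defensible interpolation choices will not replicate each other bit-for-bit, yet a physically meaningful quantity derived from them agrees to well within the uncertainty of the underlying data.

```python
import numpy as np
from scipy.interpolate import CubicSpline, interp1d

# A made-up "experimental" profile that gets interpolated before use in a model.
x = np.linspace(0.0, 10.0, 25)
y = np.exp(-x / 3.0) * np.cos(x)

x_fine = np.linspace(0.0, 10.0, 1000)
linear = interp1d(x, y, kind="linear")(x_fine)
spline = CubicSpline(x, y)(x_fine)

def integral(f, grid):
    """Trapezoid-rule integral of f sampled on grid."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid)))

# Pointwise, the two routines disagree (no exact mathematical replication)...
print("max pointwise difference:", np.max(np.abs(linear - spline)))
# ...but a derived, physically meaningful quantity agrees closely.
print("integrals:", integral(linear, x_fine), integral(spline, x_fine))
```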

Thanks for your post, intention. Unfortunately, I would have to say that your summary is a very poor caricature of what I wrote. Frankly, if you don’t understand what I wrote or what Santer et al. wrote, I don’t know how to explain it any better. However, Santer et al.'s work and its importance were clearly recognized by the reviewers of that paper (otherwise it wouldn’t have been published in Science) and its main conclusions were closely echoed by those of the report from the U.S. Climate Science Program. And, for what it’s worth (and, frankly, I don’t expect others to think it is worth much!!), I have now read the paper in detail and I understand it and its importance.

When I am not an expert in a field and the experts seem to think a paper is correct and important and I don’t understand why they feel this way, I have enough humility to accept that this could be a defect of mine and not of the paper. Frankly, you do not seem to suffer from such humility of this sort. As I noted before, you are of course welcome to your own opinion and welcome to express that opinion. However, I have a very hard time understanding why you expect us to take that opinion so seriously.

Have you ever heard the cliche “All models are wrong, but some models are useful”? The point is that no model of a physical system is without deficiencies. There is a large variation in the pressure (and thus density) as you go up in the atmosphere. Even the bottom of the stratosphere has densities almost an order of magnitude lower than the density at the surface. Thus, it does not horribly surprise me that when the densities get quite small high up in the atmosphere, the models can make large fractional errors in the values for these densities. This does not make the models useless.
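Rough numbers behind that statement, using the standard barometric estimate with an assumed scale height of about 7.5 km (illustrative only, not taken from the paper):

```latex
\rho(z) \approx \rho_0\, e^{-z/H}, \qquad H \approx 7.5\ \mathrm{km}
\quad\Rightarrow\quad
\frac{\rho(16\ \mathrm{km})}{\rho_0} \approx e^{-16/7.5} \approx 0.12
```

i.e., by the tropical tropopause the density is already down by nearly an order of magnitude, and it keeps falling rapidly above that.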

And, as I have noted before, we have a lot more to go on besides the models. We have the basic physics. We have past climatic changes that provide estimates for the climate sensitivity as well as tests for the models. Etc. Etc.

However, even if we did not have all this, it would not relieve us of the responsibility to make decisions in the face of uncertainty. If you believed in never taking action until you were sure that a threat is very serious, you would have to be against a lot more than any limit on greenhouse gases. For one thing, as I have noted before, you would have been adamantly against the Iraq war, where the evidence was actually completely pathetic and we were told to take things essentially entirely on faith.

I don’t understand how a point of view based on exaggerating the uncertainties, as yours does, leads to the policy prescription that we must oppose any actions that are not endorsed by Exxon-Mobil and Western Fuels Association (and in fact that we must oppose some actions that are endorsed by BP, Shell, and many of the power companies in the U.S.). Your views are so extreme that they are rapidly becoming irrelevant in the real world, as they should.

This is a correct statement as far as it goes. However, the theory of anthropogenic climate change is not just saying, “The temperature is rising so it must be humans that are causing it.” Rather, there is a whole field called “detection and attribution” that is dedicated to detecting the change in the climate and attributing it to possible causes.

Also, independent of this, from the basic radiative properties of CO2, we have a good idea of what sort of perturbation we are putting on the climate system by increasing the levels of CO2 in the atmosphere. This is usually summarized as a “radiative forcing” expressed in W/m^2 and while it is an approximation that the effect of CO2 levels can be summarized by a single number, it is likely to be a not-too-unreasonable one. And, by studying the past changes in climate, we actually get an idea of how sensitive the climate system has been to past perturbations and this gives us a good idea of how much warming we expect to occur in response to a given change in CO2 levels.
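For reference, the commonly used simplified expression for the CO2 radiative forcing (a standard approximation from the literature, not something specific to this thread) is

```latex
\Delta F \approx 5.35\,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}
```

so a doubling of CO2 corresponds to roughly 5.35 × ln 2 ≈ 3.7 W/m^2, and multiplying that by an empirically estimated climate sensitivity (in deg C per W/m^2) gives the expected equilibrium warming.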

This post of yours goes back a ways and I had meant to respond to it earlier: In fact, BP has been releasing lots of information about their sustainability practices…and has had these statements audited by Ernst & Young.