How will global warming play out?

As I’ve said before, don’t trust anybody, particularly in climate “science”. My beloved grandma used to say “You can believe half of what you see … a quarter of what you hear … and an eighth of what you say …”

I brought up the issue of “appeal to authority” regarding your use of the well-known climate authority Sherwood Boehlert, who as we all know is totally unbiased about politics … plus which, he’s so dumb, he thought that the NSF had cleared Mann of the charge that he was refusing to share data, while in fact Mann was publicly boasting to the newspaper about refusing to share data. Reading the paper you linked to, my read is that his (Boehlert’s) real beef was that he thought Barton’s claim should have been handled by his committee, and he was pissed off that someone else was getting the limelight.

One beauty of the internet is that in many cases, we no longer need to depend on authority. I have pointed out several times that the data is available for this. Do the analysis yourself, and see what you get. I am not advising you to trust me, or Sherwood, or anyone else. Do I know the facts about Mann? I think I do, but don’t trust that either. Do the research yourself.

That’s what I did with the Santer paper, I looked at the model results, and found them to be … well, laughable. But don’t trust me, do the analysis yourself.
Depending on anyone in the politicized, polarized field of climate science is a mistake. I hold my views because I’ve done the math myself, run the numbers myself, and done the analysis myself.

Certain things are so obvious they don’t require much scholarship. You wouldn’t believe an experiment claiming cold fusion unless other scientists were able to replicate it. If the scientists who claimed to show cold fusion refused to share their data and methods, you’d write it off as crank science.

Yet you defend Michael Mann, who not only did the same thing, he not only wouldn’t reveal his data and methods, he went so far as to claim that to ask him for data and methods was “intimidation” … I’m asking you simply to apply common sense, not to believe anybody in particular, particularly in the field of climate science. You’re used to scientists in the field of physics, who most of the time you can believe. Climate “science”, on the other hand, is filled to the brim with charlatans, opportunists, and people writing statistics-based papers who have no clue about statistics. One of the best overviews of the poor state of climate science is here … read’m and weep …

w.

PS - Mann’s exact quote, boasting of his egregious violation of normal scientific practice, from the Wall Street Journal, Feb. 14, 2005, was (my emphasis):

A scientist refuses to say how he got his results, and you defend him … would you even consider doing this if the subject were cold fusion?

Maybe it was because the NSF had actually said that Mann’s interpretation of his obligations was correct and Mann was not boasting about refusing to share data but just stating his belief that he should not be intimidated into sharing stuff (such as the actual computer code) that is legally his own intellectual property:

Look, the point is that the scientific obligation is to publish enough details of the data and method used to allow someone else to try to replicate the results. This does not mean that you are obligated to give out your entire computer code. That is your intellectual property. Even in science, some respect for intellectual property must be maintained. Otherwise, what is to stop me from doing research simply by stealing and using other people’s computer codes that they have invested a long time in creating?

I am not saying that Mann’s choice of what to release or not release was absolutely optimal. (And, in fact, I believe that he has changed his personal policy for the future.) However, he is well within his rights to choose not to release the computer code to M&M. And, in fact, I did watch the web video of Mann before the committee and as I recall, by the time Mann got before them, they had pretty much dropped this issue of his release of information…and were focussing instead on the scientific arguments (and maybe to a much lesser degree whether the NSF policies on sharing were the optimal ones).

I just want to interject that these attacks on Mann, if all he did was withhold the actual code, while releasing the data that he used, are childish.
If the data is there, and you have the results as well, then replicating - or not - the results shouldn’t be that big a deal. There are some data sets out there publicly available on the Internet for historical temps for various timeframes. I’ve run the numbers myself and been able to see, after smoothing out using either moving averages or regressions and then graphing, the Little Ice Age and the famed Medieval Warm Period, and been able to see as well that current temps are running ahead of anything seen prior, and that the trend is accelerating very rapidly to the upside.
Took me a few hours one lazy Saturday. Not exactly rocket science.
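For what it’s worth, that lazy-Saturday exercise can be sketched in a few lines of Python. The series below is made-up data, not any real temperature record; it just shows the mechanics of smoothing with a centered moving average before graphing:

```python
import numpy as np

# Fabricated annual "anomaly" series: a slow trend plus noise.
rng = np.random.default_rng(42)
years = np.arange(1850, 2000)
anoms = 0.005 * (years - 1850) + rng.normal(0.0, 0.2, years.size)

window = 11                        # 11-year centered moving average
kernel = np.ones(window) / window
smoothed = np.convolve(anoms, kernel, mode="valid")
mid_years = years[window // 2 : -(window // 2)]  # years the smoothed values align with

# The smoothed curve has far less year-to-year jitter but keeps the trend,
# which is what makes features like long warm or cold periods visible.
```

Plotting `mid_years` against `smoothed` is the "few hours on a Saturday" version; a regression fit would serve the same purpose.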
Now, if you want to dispute the validity of those data sets, since they’re based indirectly on interpretations of tree rings and gases trapped in ice cores or when harvests took place (which I thought was clever, since any one year’s timing might be affected by random chance, but over many years you could definitely see a trend) or, at the very best, readings taken on primitive thermometer-type instruments, that might get you a little farther, but you’d have to prove that peer-reviewed data is incorrect, and your interpretation just happens to be the right one.
I wouldn’t lay odds on that, but some might.
Carry on.

jshore, thank you for your post.

I used the HadCRUT3 dataset, which I have cited previously in this thread. This is Phil Jones’s dataset, and is the most widely used.

I have not tried to reconcile my results with theirs, other than to note that my results basically agreed with theirs, but that they had not accounted for autocorrelation in their calculations (although they mentioned it further on in the paper, they did not account for it.)

As they point out, you need to adjust for autocorrelation (which they call “persistence”) in the calculation of confidence intervals. However, they did not do so. I used the standard method of Nychka to do so, which involves reducing the degrees of freedom based on the autocorrelation. I used annual temperatures, with the monthly anomalies removed.
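For concreteness, here is a minimal sketch of that style of adjustment: fit an OLS trend, estimate the lag-1 autocorrelation of the residuals, and shrink the effective sample size before computing the confidence interval. This is an illustration of the general Nychka-style approach, not the exact code used in the analysis being discussed:

```python
import numpy as np

def trend_ci_autocorr(y, z=1.96):
    """OLS trend of an annual series, with the confidence interval widened
    for lag-1 autocorrelation by reducing the effective degrees of freedom
    (a Nychka-style adjustment; illustrative sketch only)."""
    y = np.asarray(y, dtype=float)
    n = y.size
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    r1 = max(r1, 0.0)                              # ignore negative r1
    n_eff = n * (1.0 - r1) / (1.0 + r1)            # effective sample size
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean()) ** 2))
    se_adj = se * np.sqrt(n / n_eff)               # inflate the standard error
    return slope, slope - z * se_adj, slope + z * se_adj
```

With strongly autocorrelated residuals, `n_eff` can be a small fraction of `n`, which is why ignoring persistence makes trends look far more significant than they are.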

Also, to make my calculations parallel theirs, I did not allow for the underlying uncertainty in the temperature measurements, which is large. What I have quoted is only the statistical uncertainty. The underlying uncertainty in the data is about the same size as the statistical uncertainty, but they ignore that completely. Assuming the errors add in quadrature, this increases the uncertainty in the trends by sqrt(2), which makes the difference even less significant. The data errors in the instrumental dataset are detailed in the citation I gave above.
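The quadrature point is simple arithmetic. With hypothetical, equal-sized independent uncertainties (the numbers below are made up for illustration):

```python
import math

sigma_stat = 0.05  # hypothetical statistical uncertainty in a trend, °C/decade
sigma_data = 0.05  # hypothetical measurement uncertainty, same size

# Independent errors add in quadrature, so when the two components are
# equal the combined uncertainty grows by a factor of sqrt(2).
sigma_total = math.sqrt(sigma_stat**2 + sigma_data**2)
print(sigma_total / sigma_stat)  # → 1.4142..., i.e. sqrt(2)
```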

As you point out, there is nothing magical about a 95% confidence interval … but it is the lowest one in common use. I have never seen a study that proposed using a lower one, and many studies use a 99% confidence interval.

Nor can you use an “approximation” that the century long trend is exact, as you propose. Let’s stick to the real math, and stop talking about lowering the bar and using approximations, shall we? I thought you were a scientist. Would you accept an 86% CI in your work, as you propose here?

The differences are not significant even without including the uncertainty in the data. Including that, they are much further from significance. This casting of statistically meaningless events as important is unfortunately all too common in climate science.

w.

PS- never did get an answer to my question about the fact that there are six 20-year periods in the record that are outside the 95%CI, but they are in the 1920s-1940s … what is your explanation for those years?

You are 100% correct, replicating Mann’s methods shouldn’t be a big deal. But it was. The problem was that he hadn’t described what he had actually done. Several investigators reported that they couldn’t replicate his work, and asked him for the code. At that point, a scientist would say “sure, here it is”, after all, it’s not like it’s a business secret or something … but not Mann.

Nor did he just “release” his data, he doled it out grudgingly in bits and pieces after claiming he had “lost” it. Some of it he never released, in particular the research showing that his results were not “robust”, as he had claimed. These were discovered in a file called “CENSORED” on his hard drive …

In fact, his research was so bad that he had to issue a “Corrigendum” to Nature magazine to try to plaster over his errors … and that alone should tell you something about the magnitude of the errors in his paper, as Nature is very loath to publish those, and will not do so for minor errors. There is a good discussion of the issues surrounding the Corrigendum here.

Unfortunately, even the Corrigendum did not allow researchers to replicate Mann’s work, and that’s why he ended up getting an invitation from Congress.

w.

You are right that under the NSF rules in force at the time, Mann was not obliged to reveal his codes. However, we were discussing scientific ethics, not NSF rules (which have since been changed, by the way).

Nor were we discussing whether his choice was “absolutely optimal”. The point was, without the codes, his results were not replicable. In other words, he had not met the standard that you describe above as

The problem for him was, once the code was revealed, it would also be revealed that he had lied about whether he had calculated the R^2 significance of the results, along with a host of other problems, some of which he was aware of.

In fact, it’s doubtful whether the codes were Mann’s intellectual property, because as an employee of the University of Virginia, the work he did belonged to them … but I digress.

I listed above a series of places where the NRC Panel specifically stated that Mann made errors in his work. I note that you have not responded to these. Let me state them again.

Since you have not commented on these, I assume that you agree that the “Hockeystick” has been discredited. His PC method, whose errors were disclosed and noted in the NRC report, “mines” for hockeysticks, and several researchers have noted that using his method with random “red noise” pseudoproxies regularly generates hockeysticks … I can see why Mann wanted to hide his code, because without it, we would never have known the kind of mistakes he made, and what the effects of these mistakes were.

Like I said, the point here is science. His “science” could not be replicated without his codes. He ended up with a dilemma (more accurately a “trilemma”) — either show his hand and expose his lies and mistakes, or repudiate the “Hockeystick” entirely since it couldn’t be replicated, or hide behind legalities. He tried the latter path, and was called on it by Congress. I’m not happy with that outcome, I’d have greatly preferred it if Mann had just come clean, but Mann brought it on himself — I won’t waste any tears on him. His work was flawed, he tried to hide that fact, and he got caught.

w.

@JShore

Thank you for replying to my post. The questions were slightly intrusive, but I was genuinely interested in trying to work out where you are coming from.

I don’t have any problems flying, I’ve done more than most, although the only time I ever felt queasy in an aircraft was when I was in the cockpit of an A320 and the pilot pointed above his head and said: ‘there are three computers up there’
I said: ‘what happens if they disagree ?’
He said: ‘They vote on it’
I did not feel at all well for a few minutes.

I don’t trust doctors, I have some who are personal friends, and have met quite a few.
My beef is that they have very poor diagnostic abilities, the attention span of a dragonfly, and a naive belief in drugs.

I do actually know a fair bit about medical testing; where I live we have a few drug companies and government research centres, and I enjoy picking people’s brains.

Regarding Iraq, I believed the WMD stuff at first, and tentatively supported the UK going along on the basis that we could restrain the USA. Obviously I knew that Saddam would have nothing to do with Al Qaeda - the idea is ludicrous. I also knew from 1991 that Iraq was like Yugoslavia. From the beginning I believed and still believe that we should have hijacked the Ba’athist party - I expect I can dig up a NG post where I excoriated someone for deriding the idea.

When I was at Uni, rather a long time ago, we ‘liberal studies’ guys tended to deride the views of people doing pure sciences. Not very nice, but those of us doing Politics, Philosophy and Economics were carefully preselected to ensure that we were cynics - or, more accurately, that the majority of us were cynics.

You come from an area where things are observable and testable, you would probably be very shocked if you sat in on a meeting with the senior actuaries of a major insurance company - actually you would probably be shocked by quite a lot of things.

That, in a way, is a compliment, unlike me, you do not have a crooked mind that is kept in check by a set of ethics.

You are extrapolating on figures that you trust, I am looking at the ‘angles’, and the scum bags that are exploiting the FUD.

You, I and Intention are looking at things from different angles, you see a model, Intention sees faulty methodology, I see a monumental scam.

Well, what I see is field of science which, while it still has lots of remaining uncertainties to be resolved, is well-developed enough to tell us that our emissions of greenhouse gases constitute a real problem.

And, when you say you see a monumental scam, you have not explained how such a monumental conspiracy could be carried out. How could such a huge array of scientists either be perpetrating or be taken in by this scam? And the same is true of politicians, including ones like John McCain and Arnold Schwarzenegger who are bucking many in their own party. And there are even energy companies like BP and Shell and a growing number of power companies who are part of this.

And, you seem to fail to see the possibility of the scam on the other side, which is strange given that the doubts about climate change seem to be emanating from a small cadre of scientists, many of whom are funded by fossil fuel industries or are closely allied with conservative or libertarian think-tanks (many of which also get funding from Exxon, for example).

So, you are someone who is looking at a landscape where one side includes not only left-wing and environmental groups but also the mainstream scientific organizations and many center- or even right-of-center politicians and now even many energy and power companies, while the other side includes almost exclusively right-wing or libertarian groups, right-wing politicians, and some energy companies. And you are concluding that the first group must be perpetrating a scam on us (or maybe on some of their own cohorts)? Do you see why this might not be the most logical conclusion even if you are a cynic by nature?

…we start seeing very violent and strange effects - like no more winter snows in Canada, scorching summers in Florida, trees colonizing the arctic tundras, hurricanes in northern latitudes: would this sway public opinion? Or would people decide to accept it? Suppose Canada were to have a climate like present-day Florida; it is conceivable to me that a fair number of Canadians would like this!

Thanks for your posts, intention!

My comments here would be:

(1) NSF rules are meant to support what they see as ethical obligations, among other things. You claim that the NSF has changed its rules since. Do you have a cite for this? I would be quite surprised if they had changed it in a way that would obligate scientists to release their computer codes.

(2) Mann has claimed that other people were able to replicate his results. Do you dispute this claim?

This statement strikes me as bizarre for two reasons:

(1) I doubt that it is correct. I don’t think universities generally make such strong claims on the intellectual property of their professors. For example, I have never heard of a university insisting that a professor could not take his or her computer codes with them when they moved to another university.

(2) Even if you were correct, I don’t see how it would help your argument. In fact, it might well mean that Mann would not even have the right to release his code without university approval. Let me give you an example of how things work in industry where companies do make strong claims on the intellectual property developed by their employees: I had a paper published in a physics journal last year. If someone requested that I send them the computer code that I used to do these calculations and I did so, I could very well be fired by my company. In fact, not only could I, but in this particular instance I would venture to guess that I would be fired or at least severely reprimanded. (They would also probably insist that the person I sent the code to return it to us or destroy the copy they have.)

You seem to have overlooked my post #92 that commented on some of these and also noted passages from the executive summary of the report. There is nothing there that I can see that would say that Mann’s work has been discredited. And some of your statements, e.g., in regards to bristlecone pines, I could not find support for in the report in the way that you phrased them. Yes, it was found that the Mann et al. study was not perfect…but they pointed out that his main conclusion has been supported by subsequent studies and that, while they feel the uncertainties are great enough that they are less confident in the proxy data going back a full millennium, they nonetheless find the Mann et al. claim over the full millennium “plausible” given his work and the subsequent work of others.

On this particular point, the NRC report is quite clear:

So, in other words, while the method they used could in principle cause problems, it did not in practice do so. It is not uncommon in science to get the “right answer” by a method that is not entirely robust and could fail in some instances. (In practice, most good scientists will play with the data in a variety of ways and then, while choosing one method to write up for publication, will have convinced themselves that the result is not strongly dependent on the details of the method. I don’t know for a fact that Mann et al. did this…It could be that they were just lucky…but I would guess that they probably did.)

For someone who in a previous post had chided me for making inferences about motivations (in that case of energy and power companies who support caps on greenhouse gas emissions), you seem quite willing to engage in such speculation yourself.

Sounds interesting. And, in all sincerity, I recommend that you try to get it published in a journal. This is the way to (1) have it go through the referee process so at least any basic errors might be detected and (2) get it out into the scientific literature so others can see it, comment on it, and even try to replicate it.

Well, as I understand it, the accepted explanation of the rise in temperatures during that period is that it is due to a combination of effects…an increase in solar luminosity, a lack of major volcanic eruptions, and a small contribution from increasing greenhouse gas levels. There is no law of nature that I know of that rules out having global temperature trends during a 20-year period due primarily to natural causes that have a trend different (in a statistically-significant manner) from the trend over the entire century.

The reason the late 20th century trend is particularly interesting is because no one has been able to figure out how it can be attributed to natural causes, and its signal is compatible in various ways with what is expected from increases in greenhouse gases.

Well, the point is that the scientific enterprise is set up in a way that scientists monitor each other. And, while it may not always be perfect, I think it has worked very well on the whole…and better than any alternative that I know of.

The basic point is that the sort of software you are talking about has certain very narrowly-defined missions to serve and well-defined regimes that it is supposed to be able to do its task under. Models in the physical sciences, on the other hand, serve many tasks and there are many different ways that one could conceivably test them to see how well they model reality.

Well, the cost-effective way to cut CO2 emissions is to put a price tag on those emissions so that the market recognizes the costs associated with those emissions and develops the most efficient solution. That is what a carbon tax or emissions cap does. This isn’t a problem that has some magic-bullet solution. However, the first step is admitting that there is a problem…and there clearly is a problem with the market currently believing that there is no price associated with using the atmosphere as a free sewer.

And, by the way, as I have noted previously, because of the way markets work, the cost of environmental regulations tends to be significantly overestimated not only by the economic doomsayers associated with the affected industries but also by the more optimistic estimators at, say, the EPA. In the case of Kyoto-size cuts in emissions, BP made such cuts 8 years ahead of schedule and claims that they are saving it money in net:

In other words, it would have been in their best interests to make such cuts even if greenhouse gases were not a problem! Talk about low-hanging fruit!

So, are you making the claim that even with the latest version of their code, the UAH group gets a negative sign for the temperature trend if they look over the same period that they did before? This seems rather unlikely to me…as it would imply a strong acceleration of the warming over time over the last 10 years and my impression is that the rate of warming has been fairly constant over the last 30 years or so. So, yes, I understand that some of the change in trend in time may be due to the lengthening data set but I don’t think most of it is…I think most of it is due to the correction of the various errors they had in their data set. And, alas, it seems that all the major errors that they had changed the trend in the same direction.

First of all, the difference between v5.1 and v5.2 only shows the correction due to the last major error that has been found in the UAH data sets. There were many errors before this and, as their file documents, there have been some smaller corrections since then too. However, it is strange that you call that correction minor. It may look small the way that things are plotted, but note that making the change actually reduces the difference in the trend line between UAH and RSS by ~35% from what it was with UAH v5.1.

First of all, the Santer paper appeared at about the same time as that UAH correction was being dealt with. I believe that it used UAH v5.1. (Correct me if I am wrong.) More importantly, however, I noted to you the distinction between the global temperature record where the data sets are essentially in agreement within error bars and the tropical temperatures only where there is still some statistically-significant discrepancy between the data sets and the models (although the U.S. Climate Science Program report believes that the more likely explanation for this is problems with the data and not the models, just as Santer et al. also tried to make arguments for). Note that the Santer et al. paper deals with the tropics only.

jshore, thank you for the tone of your posts and your attention to detail. I did in fact miss your response to my questions in your post #92, and will deal with it. I am breaking your long and interesting posts into parts, to focus the discussion.

Not true. They specifically said don’t use them. On page 52, the report says “… ‘strip-bark’ [bristlecone] samples should be avoided for temperature reconstructions …”.

The reason is that bristlecone (stripbark) proxies diverge widely from local temperatures during the last 150 years, indicating much higher temperatures than actually observed. As a result, they erroneously impart a “hockeystick” shape to the reconstruction.

And since we only have local temperature going back 150 years (much less in many areas), it is impossible to calibrate them properly so that we could convert the ring widths to temperature.

A paper by Biondi and Hughes (one of Mann’s co-authors on the “Hockeystick” paper) describes the problem clearly (emphasis mine):

Since bristlecones are not reliable proxies for the past 150 years, they should not be used in a paleoclimate reconstruction, as noted by the NRC (NAS) panel report. You say that the Osborn/Briffa study confirms Mann’s results …

The problem with the O/B study is that they used the bristlecones as well, and thus their study, like Mann’s, is fatally flawed. This is particularly true for the Medieval Warm Period results, as these are strongly depressed by the bristlecones. And this does not even begin to list the problems with the O/B study. Among the other problems, they, like Mann, refused to reveal their data. Despite repeated requests, they won’t reveal what data they used for the Yamal, Tornetrask, Taimyr and Alberta proxies… what’s up with these guys?

Note that this is not code, it is data. Without it, their study cannot be replicated, and even you would have to agree that it is useless.

w.

Certainly, the exact choice of tests done during V&V and SQA depends on the exact type of software being tested. But V&V and SQA are not dependent on the goal of the software; they can be applied to any type of software. Nor are we attempting (yet) to find out how well the GCMs “model reality”. V&V and SQA start by looking at much lower-order questions. Let’s start with the most bozo one - are there errors in the code? Does it call the right subroutines at the right times? Are all subroutines called? Do the equations converge under all conditions? Are the proper equations used in the proper places? How does it handle boundary conditions, and are those methods appropriate? We don’t know the answers to even these lowest-level questions.

Once we know that the underlying code does not contain bugs or obvious errors, then we can start to compare them to real world constraints. Are the various parameters and flux adjustments physically reasonable? Are the equations used the right equations for the physical conditions? Are the equations unconditionally stable for any time step, positive definite, and exactly mass conserving? Are the discrete equations and numerical solution methods consistent and stable?

Only after we have answered these and a host of other questions will we move to the final step of, as you have noted, seeing how well they “model reality”.
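To make the lowest-level kind of check concrete, here is a toy example of the sort of unit test V&V would demand. It is a one-dimensional diffusion step on a periodic grid, nothing to do with any actual GCM; the point is only to show how a property like exact mass conservation can be verified mechanically:

```python
import numpy as np

def diffuse_step(u, k=0.1):
    """One explicit finite-difference diffusion step on a periodic grid.
    Written in flux form: whatever leaves one cell enters its neighbor,
    so total mass is conserved by construction (toy example)."""
    flux = k * (np.roll(u, -1) - u)      # flux across each right-hand cell face
    return u + flux - np.roll(flux, 1)   # gain from one face, loss to the other

# A V&V-style unit test: total mass must be unchanged to machine precision.
u = np.random.default_rng(1).random(100)
u_next = diffuse_step(u)
assert abs(u_next.sum() - u.sum()) < 1e-12
```

Checks like this (conservation, stability for the chosen time step, boundary handling) are cheap to automate, which is exactly why their absence from a published test suite is worth asking about.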

Perhaps you are willing to just assume that the modelers involved have done all of this. Given the widespread disregard for scientific norms in the climate science community, I have no faith in that assumption at all. In either case, it would be incredibly foolish to spend billions, perhaps trillions of dollars based on an untested assumption that it has been done.

w.

By no means. As I said before, until you supply some time frame for the negative trend that you are talking about, I can say nothing other than that there are negative trends at times in both the RSS and UAH datasets.

w.

And we know this how? jshore, have you suddenly started trusting the unsupported statements of oil company executives? Gotta confess, that surprises me … did he open up the books so his statement could be verified? Did he detail how they claim to have done it?

Didn’t think so …

The hard truth is that countries have already spent billions and billions of dollars trying to reach Kyoto goals and failing. Perhaps you can explain how that didn’t cost money.

w.

I think the Santer paper probably used the 5.1 dataset, they don’t say, but it fits the 5.1 data much better than the 5.2. However, look at Figures 1 and 2 in the Santer paper. The difference between trends in the models themselves is huge. The difference between the different datasets is tiny.

The 5.1 version tropical lower troposphere (T2LT) temperature trend was about zero. The current trend in 5.2 is 0.06°/decade. This, as we’d expect, is larger than the global trend of 0.035°C/decade.

But there is another twist in the tale. Since the Santer paper, in addition to the error RSS found in the UAH work, RSS have found an error in their own work and issued a new version, 3.0. In the tropics, this amounts to a difference of +0.38°/decade, which leaves the difference between the two versions nearly as large as before both errors were discovered.

(Note that I am not saying that the RSS data now can’t be trusted, as the discovery and correction is a normal part of the scientific process.)

However, the difference between the RSS and UAH figures (which is smaller but not erased after the two error corrections) pales in comparison to the differences between the models. Some show huge warming, some slight cooling, and some everywhere in between. Given that the disagreements between the models are an order of magnitude larger than the differences between the four instrumental datasets, why do you think we should choose the models over the data?

w.

PS - I note that the correction of the error in the UAH dataset makes it agree with the two radiosonde datasets, while the correction in the RSS dataset puts it further from the other three instrumental datasets …

I also note that the RSS algorithms for calculating temperature are based on GCMs, while the UAH algorithms are based on physical principles. This makes it much less surprising that the RSS dataset should agree better with the models than the other three instrumental datasets.

Name one such group. Never happened. Your habit of making claims without citations is very distracting, because there are likely people out there who believe you.

Name one. From here , I offer you a list of the issues that needed to be overcome:

Cubasch couldn’t replicate MBH98.

Wahl and Amman couldn’t do it, even with the help of the Corrigendum.

Name one.

w.

That or something else is going on, or possibly we are misinterpreting the data.

  • also can we really do anything about it ?

I do not see it as a conspiracy - it looks to me like a bandwagon.

If I were advising Shell and/or BP I would recommend that they take an ‘environmentally sound’ approach.

What do you really think about wind power ?
Do you really think that ‘carbon taxes’ will do anything to benefit the environment ?

Do you really think that the UK Conservative party really wants us riding around on bicycles ?

This is a bandwagon; there is no downside to getting on it, and considerable downside for saying: ‘I see no clothes on the king’.

I smell a myriad of spontaneous scams.

Thanks for your posts, intention and FRDE.

In post #110 I did give you a cite to Mann’s claim of people who had replicated it and asked if you disagreed with his claims there. See in particular his footnote #3 on p. 4 of that letter.

I am traveling for the holidays…So I will probably be pretty silent for a while. Happy holidays to all!