The Straight Dope

  #51  
Old 09-13-2007, 12:48 PM
jshore
Charter Member
 
Join Date: May 2000
Posts: 6,460
Quote:
Originally Posted by erie774
Today, I saw this press release about published evidence from peer reviewed journals that disputes many of the aspects of AGW and claims that our climate change is all part of a naturally recurring cycle of warming and cooling. I realize that the release came from the Hudson Institute , a “think tank” that is slightly to the right of Pat Buchanan (hey, they had Bork, Haig and “Scooter” Libby associated with them).
By the way, the actual list of scientists that supposedly have published on some aspect disputing the AGW consensus is available from the Hudson Institute here. Note that it is a list of scientists as they promised and does not actually give any references to the papers that supposedly dispute the consensus. Furthermore, you might have noticed that the press release itself says:

Quote:
"Not all of these researchers would describe themselves as global warming skeptics," said Avery, "but the evidence in their studies is there for all to see."
This statement about how the researchers might or might not describe themselves seems like a bit of an understatement, as their list includes at least three regular contributors to the RealClimate.org website (Gavin Schmidt, Michael Mann [yes...the very same one whom the contrarians pillory over the "hockey stick" graph!!!], and Stefan Rahmstorf)! As many will know, RealClimate is a website set up by climate scientists to expound on the subject and is, to put it mildly, no friend of the AGW doubters!

Another name I recognized on their list is Brian Soden, one of my personal favorite climate scientists. He has done very good work showing that the climate models seem to be handling the water vapor feedback essentially correctly: a full climate model does a good job of reproducing the dip in temperatures that occurred after Mt. Pinatubo erupted, but a model with the water vapor feedback (which amplifies climate change) turned off does not. This, in my view, is some of the most elegant work suggesting that the climate models are getting the climate sensitivity calculation at least roughly correct.

And, then there is William Ruddiman, probably listed because some contrarians like his controversial hypothesis that man might have put enough greenhouse gases into the atmosphere over the last 8000 years since agriculture developed (due to clearing land and releasing methane through, e.g., rice cultivation) that we stopped the climate from going into another ice age. Clearly, this hypothesis has some appeal to them since it argues that our influence on the climate has been a good thing. However, unfortunately for them, it does not argue that we should now be putting way more greenhouse gases into the atmosphere...and in fact it seems to require a climate sensitivity to greenhouse gases at or above the upper end of the range the IPCC estimates as most probable, implying that AGW will be at or beyond the high end of the IPCC projections!

In summary, I think that we can conclude that this list is pretty much completely bogus. The only semi-interesting question is what fraction of those listed are scientists who really do challenge the consensus.

Last edited by jshore; 09-13-2007 at 12:52 PM..
  #52  
Old 09-13-2007, 12:53 PM
Lightnin'
Guest
 
Join Date: Jan 2001
Quote:
"Not all of these researchers would describe themselves as global warming skeptics," said Avery, "but the evidence in their studies is there for all to see."
Wow... they might've just as well said, "Yeah, we're makin' this shit up."
  #53  
Old 09-13-2007, 03:12 PM
LilShieste
Guest
 
Join Date: Dec 2001
Quote:
Originally Posted by brazil84
But are they really oversights? Is it possible for two papers that have made it through peer review and been published to reach contradictory conclusions?
Of course it's possible - there are humans involved. In the words of my Logic professor, though:
Quote:
There are only two chances of this happening: fat and slim.
In fact, I can't think of a single scenario in which two completely contradictory papers would get simultaneously published in the same journal (or similar journals).

The peer-review process doesn't just examine the data that has been collected, the equations used, etc., it also examines the conclusions that are drawn in the paper. If these two scientists have truly conflicting results/conclusions, then at least one of them has some problems with their research and will be called on it.

Quote:
Originally Posted by brazil84
I would imagine that it's not always possible to know whether the judgment is incorrect at the time of publication.
I'm not sure what you mean by this. Can you provide an example of some sort (even a fictional one), so I can better understand?

Quote:
Originally Posted by brazil84
No, that's not how I used it. Another poster tried to score rhetorical points based on the fact that I didn't know the answer to somebody's question. I was using intention's quote to basically show that there's no shame in admitting that one doesn't know something.
The question that was asked, though, wasn't one that required any kind of specialized intelligence, so "I don't know" wasn't really a valid answer. It's like asking, "which is the larger amount: 50 cents, or 1 dollar?"

(Paraphrased, the question was, "Which has more scientific validity: a paper published in a peer reviewed journal, or research that is instead given straight to the mass media?")

And is anyone trying to debate that there is shame in admitting that one doesn't know the answer to something? If not, then it was a strawman.

Quote:
Originally Posted by brazil84
Unfortunately the question isn't totally resolved. Can "mistake" include errors in judgment that cannot immediately be seen as being incorrect, even when pointed out? In other words, do you agree that a situation can arise in which scientist A makes judgment call A, and scientist B makes judgment call B, leading to contradictory results, but it's not immediately clear which judgment call is correct?
If scientist A and scientist B each come to different conclusions, based on a single body of evidence, then problems should be identifiable in one or the other's (or both) papers. As I mentioned earlier in this post, though, I may just be misunderstanding what it is you're trying to claim here.

If you're asking if it's possible for two scientists to conduct independent research, and reach completely different conclusions, then the answer is "yes", and this type of thing would be caught in the review process. If both conclusions cannot be correct, then there are some errors to be found.

Quote:
Originally said by Avery, in the linked article:
Not all of these researchers would describe themselves as global warming skeptics, but the evidence in their studies is there for all to see.
I somehow missed this when I skimmed the article. Brilliant!


LilShieste
  #54  
Old 09-13-2007, 09:44 PM
jshore
Charter Member
 
Join Date: May 2000
Posts: 6,460
Quote:
Originally Posted by jshore
By the way, the actual list of scientists that supposedly have published on some aspect disputing the AGW consensus is available from the Hudson Institute here. Note that it is a list of scientists as they promised and does not actually give any references to the papers that supposedly dispute the consensus.
My bad on this...If you go to the full press release [PDF file] on the Hudson Institute website, they indeed list the relevant papers. Thus, we are able to see why certain scientists make the list.

Michael Mann and Gavin Schmidt made the list because of a paper entitled "Solar Forcing of Regional Climate Change during the Maunder Minimum". So apparently, a study that claims that a decrease in solar irradiance was the cause of the colder global temperatures (and even more notably, regional climate shifts) in the 1600s is evidence against AGW. Strangely enough, this study relied heavily on the very climate models that the AGW skeptics critique.

Stefan Rahmstorf made the list for a paper entitled "Possible solar origin of the 1,470-year glacial climate cycle demonstrated in a coupled model". (And, there are a total of 8 authors on this...So it increases their author count by 8.) Again, this is a paper that uses "evil" climate models to demonstrate their effect. It is also interesting that this would be classified as supporting Singer and Avery's hypothesis since the abstract very clearly says that it is an effect that operates only under glacial conditions and would not during the current interglacial period (Holocene):

Quote:
We attribute the robust 1,470-year response time to the superposition of the two shorter cycles, together with strongly nonlinear dynamics and the long characteristic timescale of the thermohaline circulation. For Holocene conditions, similar events do not occur. We conclude that the glacial 1,470-year climate cycles could have been triggered by solar forcing despite the absence of a 1,470-year solar cycle.
And, just in case that is not clear enough, they repeat this notion in their concluding paragraph, making particular note of the lack of any clear and pronounced observed 1470 year climate cycle during the Holocene:

Quote:
Our results indicate that the observed 1,470-year climate cycle could have originated from solar variability despite the lack of a 1,470-year spectral contribution in records of solar activity. Moreover, the 1,470-year climate response in the simulation is restricted to glacial climate and cannot be excited for substantially different (such as Holocene) boundary conditions; for these, the model response shows the frequencies of the applied forcing (86.5 and 210 years), as also documented in various climate archives. Thus, our mechanism for the glacial 1,470-year climate cycle is also consistent with the lack of a clear and pronounced 1,470-year cycle in Holocene climate archives.
It certainly takes a lot of creativity by Avery and Singer to conclude that either of these two papers supports their point of view!
  #55  
Old 09-14-2007, 06:19 AM
brazil84
Guest
 
Join Date: May 2007
Quote:
Originally Posted by LilShieste
I'm not sure what you mean by this. Can you provide an example of some sort (even a fictional one), so I can better understand?
You can start by looking at this paper:

http://eaps.mit.edu/faculty/lindzen/...01GL014360.pdf

Do you agree that it has passed peer review? Do you agree that it apparently contradicts, to some extent, the conclusions of other published work?

Frankly, I don't understand the paper 100% so I will try to either read it more carefully over the weekend or find a different example if this one is unsatisfactory.

Quote:
The question that was asked, though, wasn't one that required any kind of specialized intelligence, so "I don't know" wasn't really a valid answer. It's like asking, "which is the larger amount: 50 cents, or 1 dollar?"
If you ask that question to somebody who is not familiar with American currency, it would be reasonable for them to respond "I don't know." In my case, I don't know enough about peer review to answer the question. My only experience with peer review was when my spouse's dissertation was being reviewed for publication. Honestly, I wasn't too impressed. It seemed like the reviewers' comments said a lot more about the reviewers' personal agendas than it did about the merits of the dissertation.

Quote:
And is anyone trying to debate that there is shame in admitting that one doesn't know the answer to something? If not, then it was a strawman.
How about you explain what you understood this comment to mean:

Quote:
If that one's a stumper, the bigger ones are going to be really tricky.
  #56  
Old 09-14-2007, 06:37 AM
Hentor the Barbarian
Guest
 
Join Date: Jun 2002
Quote:
Originally Posted by brazil84
No, that's not how I used it. Another poster tried to score rhetorical points based on the fact that I didn't know the answer to somebody's question. I was using intention's quote to basically show that there's no shame in admitting that one doesn't know something.
Actually, there should sometimes be some shame in admitting to being ignorant of some things.

I would characterize an inability to fashion even a guess as to which of the two (a publication in a peer reviewed journal versus a press release) should be given greater credence as pretty shameful.

I am assuming, of course, that you are an adult.
  #57  
Old 09-14-2007, 11:27 AM
jshore
Charter Member
 
Join Date: May 2000
Posts: 6,460
I've been trying to stay out of this side discussion about peer-review since it is unclear to me what its purpose even is but I feel the need to weigh in with my view here as someone who has refereed some 75 papers in physics journals and has been an author on about 30.

Peer review is not meant to be a guarantee that a paper is correct. It is merely meant to be some check that there are not glaringly obvious errors, that the work is presented completely and clearly enough to allow people to understand what was done and in principle replicate it, and that its relation to previous work is presented accurately, and that the work itself meets the journal's criteria of being a significant enough advance over previous work.

It is an imperfect filter in that it does allow some bad papers to get through and does reject some good papers. However, in my experience, it works well enough to serve its function, which is to provide such a filter that significantly increases the signal-to-noise ratio above that of the non-peer-reviewed sources and thus allow science to advance. (Note also that peer-review is not supposed to weed out any paper that turns out to be wrong. It is fine to come up with a hypothesis that further work shows is not correct. Peer-review shouldn't be thought of as a stamp of correctness as much as an indication that the ideas and evidence presented are compelling enough, and presented clearly and completely enough, to be worthy of consideration by the scientific community.)

There will certainly be some cases when individual authors will feel "screwed" by the process (e.g., of not having a good paper accepted). Personally, I can identify only one case where I feel we sort of got screwed in this way.

On the flip side, there will again be papers that get in that have no business being there...or at least had glaring errors that should have been found and corrected in the refereeing process. I have twice written comments on papers that were published in very good physics journals (Physical Review Letters and Applied Physics Letters), once submitting it formally to be published (which it was) and once just sending it to the author who then did note my results when he gave a talk on his work at a meeting. In both these cases, the errors were pretty obvious to me on a first read of the paper and left me wondering why they weren't caught in the refereeing process...but such stuff happens and it is not realistic to prevent it completely. (I imagine I have probably let some things through when I have refereed that caused another scientist to wonder, "Who was the idiot who reviewed this paper and let that through?")

Still, while individual authors might get unjustly screwed or unjustly benefitted by such errors in the refereeing process, I don't think it really hurts the progress of science as a whole that much as long as the percentage of these is not too high.

As applied to the current issue at hand, the fact that Singer and Avery have chosen to present their conclusions by press release and a book for the masses rather than publishing in the peer-reviewed literature is a bad sign. And, the fact that just a little bit of investigation on my part has turned up serious problems makes it clear to me why they have chosen to do this...i.e., that their work is garbage.

As for the peer-reviewed papers that they reference, one would have to study them all in detail to draw quantitative conclusions, but I have already identified a couple of papers above that are probably fine papers but don't actually support their conclusions. I know there are a few papers that fall into the category of garbage papers that probably never should have been published. (Many of these are probably published in less prestigious journals that don't turn down many papers, or in multidisciplinary journals that would have a hard time finding referees really qualified to review them.) And, then there are presumably some papers that, whether they turn out to be right or wrong, are legitimate work. (I don't know about the particular paper of Lindzen's that brazil84 linked to above, but certainly there are many scientists who would say that Lindzen's papers in support of his "iris hypothesis" presented a legitimate enough hypothesis and argument that it was the correct decision to publish them for consideration by fellow scientists, even if they believe that this hypothesis had some serious strikes against it from the start and that subsequent evidence shows it does not seem to be correct.)
  #58  
Old 09-14-2007, 12:03 PM
Slypork
Guest
 
Join Date: Sep 2005
Thank you all for the feedback. I knew the press release was not in any way, shape or form going to stop the AGW debates. I was just hoping for reassurance about the objectivity of scientists in the face of potentially contradictory findings.
  #59  
Old 09-14-2007, 09:28 PM
jshore
Charter Member
 
Join Date: May 2000
Posts: 6,460
Just 'cause I'm kind of bored, I decided to look at a few more of the papers in Avery and Singer's list. For example, consider Nicolas Caillon et al., “Timing of Atmospheric CO2 and Antarctic Temperature Changes Across Termination III,” Science 299 (2003): 1728-31. This is a paper with 6 authors and so it adds another 6 scientists to their list.

What this paper discusses is the fact that a careful look at the timing of the rise of CO2 levels at the end of one of the previous ice ages shows that it lagged the start of the Antarctic glacial warming by ~800 years. Presumably, this is thought to cast doubt on the idea that CO2 causes warming rather than that warming causes outgassing of CO2. However, since at least the mid-1970s, it has in fact been understood that the trigger for glaciation and deglaciation is orbital oscillations and that the cooling or warming then triggers changes in CO2 levels that further magnify the effects. In fact, let's look at what the authors say in this paper in regards to its relevance to the current issue of anthropogenic warming:

Quote:
This sequence of events is still in full agreement with the idea that CO2 plays, through its greenhouse effect, a key role in amplifying the initial orbital forcing. First, the 800-year time lag is short in comparison with the total duration of the temperature and CO2 increases (~5000 years). Second, the CO2 increase clearly precedes the Northern Hemisphere deglaciation (Fig. 3)

...

Finally, the situation at Termination III differs from the recent anthropogenic CO2 increase. As recently noted by Kump (38), we should distinguish between internal influences (such as the deglacial CO2 increase) and external influences (such as the anthropogenic CO2 increase) on the climate system. Although the recent CO2 increase has clearly been imposed first, as a result of anthropogenic activities, it naturally takes, at Termination III, some time for CO2 to outgas from the ocean once it starts to react to a climate change that is first felt in the atmosphere. The sequence of events during this Termination is fully consistent with CO2 participating in the latter ~4200 years of the warming. The radiative forcing due to CO2 may serve as an amplifier of initial orbital forcing, which is then further amplified by fast atmospheric feedbacks (39) that are also at work for the present-day and future climate.
Then there is Gerald H. Haug, “Climate and the Collapse of Maya Civilization,” Science 299 (2003): 1731-1735, another 6 author paper. Here is the abstract:

Quote:
In the anoxic Cariaco Basin of the southern Caribbean, the bulk titanium content of undisturbed sediment reflects variations in riverine input and the hydrological cycle over northern tropical South America. A seasonally resolved record of titanium shows that the collapse of Maya civilization in the Terminal Classic Period occurred during an extended regional dry period, punctuated by more intense multiyear droughts centered at approximately 810, 860, and 910 A.D. These new data suggest that a century-scale decline in rainfall put a general strain on resources in the region, which was then exacerbated by abrupt drought events, contributing to the social stresses that led to the Maya demise.
Unlike the previous paper, this paper at least does not directly state any support for the consensus view of AGW...but it doesn't speak against it either. In fact, I am hard-pressed to find any relevance it has to the issue whatsoever. I guess Avery and Singer would argue that it shows that there were (at least regional) climate variations during this interglacial period. But I can hardly see how this claim is at all controversial...or in contradiction to the consensus regarding AGW. Its inclusion is frankly just sort of bizarre.
  #60  
Old 09-15-2007, 07:06 AM
intention
Guest
 
Join Date: Feb 2006
Quote:
Originally Posted by jshore
I've been trying to stay out of this side discussion about peer-review since it is unclear to me what its purpose even is but I feel the need to weigh in with my view here as someone who has refereed some 75 papers in physics journals and has been an author on about 30.

Peer review is not meant to be a guarantee that a paper is correct. It is merely meant to be some check that there are not glaringly obvious errors, that the work is presented completely and clearly enough to allow people to understand what was done and in principle replicate it, and that its relation to previous work is presented accurately, and that the work itself meets the journal's criteria of being a significant enough advance over previous work.

It is an imperfect filter in that it does allow some bad papers to get through and does reject some good papers. However, in my experience, it works well enough to serve its function, which is to provide such a filter that significantly increases the signal-to-noise ratio above that of the non-peer-reviewed sources and thus allow science to advance. (Note also that peer-review is not supposed to weed out any paper that turns out to be wrong. It is fine to come up with a hypothesis that further work shows is not correct. Peer-review shouldn't be thought of as a stamp of correctness as much as an indication that the ideas and evidence presented are compelling enough, and presented clearly and completely enough, to be worthy of consideration by the scientific community.)

There will certainly be some cases when individual authors will feel "screwed" by the process (e.g., of not having a good paper accepted). Personally, I can identify only one case where I feel we sort of got screwed in this way.

On the flip side, there will again be papers that get in that have no business being there...or at least had glaring errors that should have been found and corrected in the refereeing process. I have twice written comments on papers that were published in very good physics journals (Physical Review Letters and Applied Physics Letters), once submitting it formally to be published (which it was) and once just sending it to the author who then did note my results when he gave a talk on his work at a meeting. In both these cases, the errors were pretty obvious to me on a first read of the paper and left me wondering why they weren't caught in the refereeing process...but such stuff happens and it is not realistic to prevent it completely. (I imagine I have probably let some things through when I have refereed that caused another scientist to wonder, "Who was the idiot who reviewed this paper and let that through?")

Still, while individual authors might get unjustly screwed or unjustly benefitted by such errors in the refereeing process, I don't think it really hurts the progress of science as a whole that much as long as the percentage of these is not too high.

...
jshore, as always, an excellent post. I believe, however, that you underestimate the number of erroneous papers that are published by refereed journals:

Quote:
Most Science Studies Appear to Be Tainted By Sloppy Analysis

September 14, 2007; Page B1

We all make mistakes and, if you believe medical scholar John Ioannidis, scientists make more than their fair share. By his calculations, most published research findings are wrong.

Dr. Ioannidis is an epidemiologist who studies research methods at the University of Ioannina School of Medicine in Greece and Tufts University in Medford, Mass. In a series of influential analytical reports, he has documented how, in thousands of peer-reviewed research papers published every year, there may be so much less than meets the eye.

These flawed findings, for the most part, stem not from fraud or formal misconduct, but from more mundane misbehavior: miscalculation, poor study design or self-serving data analysis. "There is an increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims," Dr. Ioannidis said. "A new claim about a research finding is more likely to be false than true."

The hotter the field of research the more likely its published findings should be viewed skeptically, he determined.

...
SOURCE
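Ioannidis's headline claim rests on a simple probability calculation: the positive predictive value of a significant finding. A sketch of that formula, with illustrative numbers that are assumptions of mine, not figures from the article:

```python
# Positive predictive value (PPV): of the findings a field reports as
# statistically significant, what fraction are actually true?
#   PPV = power * R / (power * R + alpha)
# where R is the pre-study odds that a tested relationship is real,
# power is 1 - beta, and alpha is the significance threshold.

def ppv(R, power, alpha=0.05):
    """Fraction of significant findings that reflect true relationships."""
    return (power * R) / (power * R + alpha)

# Illustrative numbers (assumptions, not taken from the article):
print(round(ppv(R=0.1, power=0.8), 3))   # 0.615 -- well-powered studies
print(round(ppv(R=0.1, power=0.2), 3))   # 0.286 -- underpowered: most
                                         # "findings" are false
```

The point of the formula is that when a field tests many long-shot hypotheses with underpowered studies, most of what reaches significance is false even with no fraud at all.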

People often accuse me of thinking that the consensus AGW scientists are liars, or are engaged in some conspiracy. While there are some climate scientists who engage in falsifying or hiding data, results, and code, in the main my thoughts run more along the lines that Ioannidis refers to when he says that the erroneous scientific studies are attributable to "more mundane misbehavior: miscalculation, poor study design or self-serving data analysis."

I also agree with the article when it says "We need to pay more attention to the replication of published scientific results." This is crucial, but when groups like those of us at ClimateAudit engage in that activity, people say things like "oh, similar results are adequate, no need to replicate them exactly", or "forget about that study, we've moved on" ...

Me, I don't believe anything I read in the journals, refereed or not. I've seen too much self-serving garbage published even in the most prestigious journals to think that peer review is any kind of scientific imprimatur. The only test of science is replicability ... and unfortunately, far too many climate science studies have either failed that test, or never been subjected to it.

w.
  #61  
Old 09-15-2007, 09:00 AM
jshore
Charter Member
 
Join Date: May 2000
Posts: 6,460
Quote:
Originally Posted by intention
SOURCE

People often accuse me of thinking that the consensus AGW scientists are liars, or are engaged in some conspiracy. While there are some climate scientists who engage in falsifying or hiding data, results, and code, in the main my thoughts run more along the lines that Ioannidis refers to when he says that the erroneous scientific studies are attributable to "more mundane misbehavior: miscalculation, poor study design or self-serving data analysis."

I also agree with the article when it says "We need to pay more attention to the replication of published scientific results." This is crucial, but when groups like those of us at ClimateAudit engage in that activity, people say things like "oh, similar results are adequate, no need to replicate them exactly", or "forget about that study, we've moved on" ...
intention, thanks for the very interesting post. That is an interesting study to be sure. Not sure how applicable it is to the physical sciences though. Despite the fact that the author talks in general terms about "science", all of his examples seem to relate to the sort of purely statistical studies done in medicine that are very sensitive to issues of statistical significance.

It is also interesting that you take that statement on replication to be supporting you folks at ClimateAudit since the author's definition of replication is completely different from yours:

Quote:
As part of the scientific enterprise, we know that replication—the performance of another study statistically confirming the same hypothesis—is the cornerstone of science and replication of findings is very important before any causal inference can be drawn.
So, by replication, they are talking about exactly what I am talking about: Doing a similar study and finding similar results. They are very clearly not talking about taking the data generated from the first study and "auditing" it by re-doing all the statistics calculations that the first study did to be sure that the authors did them correctly. (Again, their focus is on very different sorts of studies than are done for the most part in the physical sciences, although work on temperature reconstructions, which does have a large statistical component, is at least closer to what they are talking about. So, it sounds like they would be more interested in looking at what other studies since Mann have concluded in regards to the reconstructed temperatures rather than in going over Mann's code line-by-line to see what he did.)
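Replication in that sense can be sketched in a few lines: a second, independent study of the same hypothesis, on its own fresh data. A toy one-sample test with invented numbers (not any actual climate analysis):

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

random.seed(42)

def one_study(true_effect, n=100, alpha=0.05):
    """An independent study: draw fresh data, test H0: effect == 0."""
    data = [random.gauss(true_effect, 1.0) for _ in range(n)]
    t = mean(data) / (stdev(data) / sqrt(n))      # ~z-statistic for n=100
    p_value = 2 * (1 - NormalDist().cdf(abs(t)))  # two-sided p-value
    return p_value < alpha                        # statistically confirmed?

# Replication here means running the whole study again on new data --
# not re-running the first study's arithmetic on the first study's data.
first = one_study(true_effect=1.0)
second = one_study(true_effect=1.0)
print(first, second)   # True True -- the finding replicates
```

A failed replication would be the second study coming back non-significant, which is a different question from whether the first study's calculations were performed correctly.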

Last edited by jshore; 09-15-2007 at 09:04 AM..
  #62  
Old 09-15-2007, 04:17 PM
intention
Guest
 
Join Date: Feb 2006
Quote:
Originally Posted by jshore
intention, thanks for the very interesting post. That is an interesting study to be sure. Not sure how applicable it is to the physical sciences though. Despite the fact that the author talks in general terms about "science", all of his examples seem to relate to the sort of purely statistical studies done in medicine that are very sensitive to issues of statistical significance.
How are the authors' examples different from the sort of purely statistical studies done in climate science that are very sensitive to issues of statistical significance? The signal we are looking for is so small (hundredths of a degree per year) and the data so poor and fragmentary that the overwhelming majority of climate studies contain a huge statistical component -- just like the studies that Ioannidis is discussing. The Hockeystick study (Mann, Bradley, Hughes '98) is nothing but statistics. They did not gather data, they did no experiments, there was no field work; it was 100% statistics.

Quote:
Originally Posted by jshore
It is also interesting that you take that statement on replication to be supporting you folks at ClimateAudit since the author's definition of replication is completely different from yours:
Quote:
As part of the scientific enterprise, we know that replication—the performance of another study statistically confirming the same hypothesis—is the cornerstone of science and replication of findings is very important before any causal inference can be drawn.
So, by replication, they are talking about exactly what I am talking about: Doing a similar study and finding similar results. They are very clearly not talking about taking the data generated from the first study and "auditing" it by re-doing all the statistics calculations that the first study did to be sure that the authors did them correctly. (Again, their focus is on very different sorts of studies than are done for the most part in the physical sciences, although work on temperature reconstructions, which does have a large statistical component, is at least closer to what they are talking about. So, it sounds like they would be more interested in looking at what other studies since Mann have concluded in regards to the reconstructed temperatures rather than in going over Mann's code line-by-line to see what he did.)
So you've "moved on" from the Mann study as well?

The truth is that both types of replication are necessary -- replicating the original procedures exactly is as important as replicating the procedure with similar data. You seem to believe that the first step in science is to perform an entirely new study which is "similar" to the original one, to see if you get "similar" answers.

In fact, that's the second step. The first step is to see if the original study contains any mistakes, by answering questions like:
• Did the authors use the right data?
• Did they have ex ante data selection criteria?
• Did they use the correct mathematical approach?
• Did they perform the selected mathematics correctly?
• Did they report back all findings, adverse as well as supportive?
• Did they calculate the error bars and confidence intervals for their results?
• Have they revealed all of their methods and code, so that a "similar" study can even be performed?
• Did they follow their own procedures as listed in their study?
• Does following their procedures lead to the results that they claim?
Only when we know that the first study is theoretically and practically sound, only when we can do what the authors did and get the answers that the authors got, can we move on to looking at "similar" studies. There is no point in doing "another study statistically confirming the same hypothesis" until we see if the hypothesis is statistically confirmed per the authors' claims in the first study. In fact, we can't even do a "similar study" to what Mann did until we know exactly what Mann did do ... which was tough, since he refused to reveal how he did it ...

Otherwise, we just end up with a whole string of studies, for example, that depend on bristlecone pines as temperature proxies ... sound familiar?

Yes, the other bristlecone based studies are "similar" to the Hockeystick, just like you want, and yes, they give "similar" results, which seems to be your gold standard ... but it is only because they are similarly flawed, and are based on similarly poor statistics and/or similarly biased data.

Again, jshore, as with your defense of not testing the climate models, I find your defense of not auditing and exactly replicating the climate studies to be absolutely incomprehensible. What kind of "scientist" doesn't want to test the climate models, and to verify that the climate studies are correct by replicating their results, before moving on to "similar" studies? What kind of "science" do they practice in your neck of the woods, where things don't get tested and verified before moving on?

In the Hockeystick study, Mann made an egregious statistical error, a real n00bie blunder, because as Mann himself said, "I am not a statistician". The result of that error is that his method "mines" for hockeysticks, finding them in purely random red noise. Unfortunately, Mann used his position in the IPCC to splash his blunder all over the world, making the Hockeystick the icon of the AGW movement.
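For anyone who wants to see the effect rather than take my word for it, here is a small illustrative simulation (my own sketch with made-up parameters -- 70 proxies, AR(1) persistence of 0.9 -- not Mann's code or data). It applies "short-centered" principal component analysis, i.e. centering each series on only the final "calibration" window instead of the whole record, to pure red noise, and compares the hockey-stick shape of the resulting PC1 against conventional full centering:

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, length, cal, trials = 70, 600, 100, 30
phi = 0.9  # lag-1 persistence of the red noise

def red_noise(shape):
    # Independent AR(1) "proxies" containing no climate signal at all.
    eps = rng.standard_normal(shape)
    x = np.zeros(shape)
    for t in range(1, shape[1]):
        x[:, t] = phi * x[:, t - 1] + eps[:, t]
    return x

def pc1(data, window):
    # Center each series on its mean over `window`, then take the leading
    # principal component (as a time pattern) via SVD.
    centered = data - data[:, window].mean(axis=1, keepdims=True)
    return np.linalg.svd(centered, full_matrices=False)[2][0]

def blade(v):
    # "Hockey-stick index": departure of the calibration-era mean of the
    # pattern from the earlier mean, in units of the earlier variability.
    return abs(v[-cal:].mean() - v[:-cal].mean()) / v[:-cal].std()

b_short, b_full = [], []
for _ in range(trials):
    X = red_noise((n_series, length))
    b_full.append(blade(pc1(X, slice(None))))         # conventional centering
    b_short.append(blade(pc1(X, slice(-cal, None))))  # "short" centering

print(np.mean(b_short), np.mean(b_full))
```

With conventional centering, PC1 of pure noise has no systematic blade. With short centering, series whose calibration-window mean happens to drift away from their long-term mean get heavily weighted, so PC1 shows a spurious departure in the calibration era even though there is no signal in the data.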

And he got away with it precisely because you, jshore, and others like you, seem to think that it is unimportant to make sure that the first experiment was performed correctly before moving on to the next, "similar" experiment.

w.
Reply With Quote
  #63  
Old 09-15-2007, 05:44 PM
GIGObuster GIGObuster is online now
Guest
 
Join Date: Jul 2001
Quote:
Originally Posted by intention
Again, jshore, as with your defense of not testing the climate models, I find your defense of not auditing and exactly replicating the climate studies to be absolutely incomprehensible. What kind of "scientist" doesn't want to test the climate models, and to verify that the climate studies are correct by replicating their results, before moving on to "similar" studies? What kind of "science" do they practice in your neck of the woods, where things don't get tested and verified before moving on?

In the Hockeystick study, Mann made an egregious statistical error, a real n00bie blunder, because as Mann himself said, "I am not a statistician". The result of that error is that his method "mines" for hockeysticks, finding them in purely random red noise. Unfortunately, Mann used his position in the IPCC to splash his blunder all over the world, making the Hockeystick the icon of the AGW movement.

And he got away with it precisely because you, jshore, and others like you, seem to think that it is unimportant to make sure that the first experiment was performed correctly before moving on to the next, "similar" experiment.

w.
He got away with it? As if no one took a look, or there were no meddling kids...

Quote:
Academy affirms hockey-stick graph
But it criticizes the way the controversial climate result was used.
Geoff Brumfiel

Washington DC - It's probably the most politicized graph in science — an icon of the case for climate change to some, and of flawed science in the service of that case to others — and it has coloured the climate-change debate for nearly a decade. Now the US National Academy of Sciences (NAS) has weighed in with a report on the 'hockey-stick' plot, which it hopes will finally lay the controversy to rest.

The graph purports to chart global temperatures over the past millennium; a sharp rise at the current end is the 'blade' that makes the otherwise flattish line look like a hockey stick. Climate groups have claimed it as evidence of dangerous global warming; sceptics, especially in the United States and Canada, have questioned the study's merit and statistical methodology.

...


"We roughly agree with the substance of their findings," says Gerald North, the committee's chair and a climate scientist at Texas A&M University in College Station. In particular, he says, the committee has a "high level of confidence" that the second half of the twentieth century was warmer than any other period in the past four centuries. But, he adds, claims for the earlier period covered by the study, from AD 900 to 1600, are less certain. This earlier period is particularly important because global-warming sceptics claim that the current warming trend is a rebound from a 'little ice age' around 1600. Overall, the committee thought the temperature reconstructions from that era had only a two-to-one chance of being right. *

...

Mann says that he is "very happy" with the committee's findings, and agrees with the core assertion that more must be done to reduce uncertainties in earlier periods. "We have very little long-term information on the Southern Hemisphere and large parts of the ocean," he says. As for the report's effect on the policy debate, Mann says: "Hopefully this is the beginning of us, as a community, putting that silliness behind us."
* One has to realize this also undermines those proposing that the rebound included a period that was as hot as, or hotter than, what we are experiencing now.
  #64  
Old 09-15-2007, 07:15 PM
Public Animal No. 9 Public Animal No. 9 is offline
Guest
 
Join Date: May 2007
In reading through the posts here, there seems to be a general misunderstanding of how research actually gets done and published. Concerning the replication issue raised by intention, for better or worse, replicating the precise study that someone else has actually done isn't going to get published. But good researchers will make sure that they can replicate, to the extent possible, those previously published results before moving on to any extension of that work. I say "to the extent possible" because it is indeed too often the case that the exact data set used by the original researchers is not itself published or made widely available. However, that can be a strength rather than a weakness. When different researchers independently reach the same general conclusion using different methods or even different data, then that lends weight to the conclusion. When that happens again and again, then the scientific community reaches a point where they say there is no more real value to more of the same - that conclusion has been generally accepted.

That's not to say that there are not contradictory conclusions that are published. Contradictions in general are what get papers published, not suppressed. A reviewer will look at a paper that says the same old thing and recommend not publishing because it's the same old thing. But a paper that identifies a discrepancy or a contradiction can point to flaws in the generally accepted understanding or a previously unconsidered mechanism. Those are the interesting ones, and it's no surprise that the climate literature is full of them. But these contradictions and discrepancies tend to be in the higher-order details, not in the primary theory. This is the type of debate that we see in the press about whether global warming has led to a greater number or intensity (or both) of tropical storms. There is still a lot of uncertainty about this particular issue, but much, much less uncertainty about the question of anthropogenically-driven global climate change. Uncertainty about the details does not imply uncertainty about the primary issue.

There were also questions raised about whether two peer-reviewed, published studies on the same topic could reach different and contradictory conclusions. Of course that can happen, but it does not mean that one of the two papers was based on falsified data. Different methods are often used to evaluate the available data, and there are always assumptions that are made that may be wrong, leading to different results. There are also instances where researchers reach conclusions that are broader than the data would support, even though the results can make it through the review process (as well described by jshore). In these cases, advocates tend to jump on published conclusions that support their position, either by pointing to the conclusion or to the fact that the conclusions are in error. But it needs to be pointed out that poor research is a far cry from falsification.

Regarding the discussion about why scientists might do this, people need to understand that the real currency of a researcher is reputation, not money. Although reputation can be tied to grant funding, it's more closely tied to being right and being in the lead. A researcher that could conclusively demonstrate that global warming was a completely natural phenomenon that had nothing to do with emissions of greenhouse gases would seal his reputation as a great scientist for the ages. Even if old-timers would find it hard to publish something that contradicted their entire life's work prior to that finding, the scientific community is full of up-and-coming researchers whose career goal is to make a name for themselves. In the completely open information exchange we have these days, there is no way that results of that magnitude could be suppressed.
  #65  
Old 09-16-2007, 08:58 PM
jshore jshore is offline
Charter Member
 
Join Date: May 2000
Posts: 6,460
Quote:
Originally Posted by intention
How are the authors' examples different from the sort of purely statistical studies done in climate science that are very sensitive to issues of statistical significance? The signal we are looking for is so small (hundredths of a degree per year) and the data is so poor and fragmentary, that the overwhelming majority of climate studies contain a huge statistical component -- just like the studies that Ioannidis is discussing. The Hockeystick study (Mann, Bradley & Hughes '98) is nothing but statistics. They did not gather data, they did no experiments, there was no field work, it was 100% statistics.
The fact that they did not gather data or do experiments is irrelevant. I am talking about the basis of their studies. In medicine, studies are purely statistical...e.g., if you are trying to figure out if there is a correlation between heart attacks and several factors like obesity, lack of exercise, high fat diet, ..., you do a study and look whether correlations exist.

As near as I understand it without investing a large amount of time, one point of that paper the Wall Street Journal wrote about is that if you actually look at 20 possible different factors, then purely by chance an average of one of them will be correlated with heart attacks at a 95% confidence level. I don't see how such concerns apply to the climate science field.
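To make that multiple-comparisons point concrete, here is a quick simulation (a sketch with arbitrary sample sizes, not taken from the Ioannidis paper itself): 20 candidate "risk factors" that are pure noise, tested against an outcome that is also pure noise, still yield about one nominally "significant" correlation per study at the 95% level:

```python
import numpy as np

rng = np.random.default_rng(42)
n_patients, n_factors, n_trials = 200, 20, 200

false_hits = 0
for _ in range(n_trials):
    # 20 candidate "risk factors" and an outcome, all pure random noise.
    factors = rng.standard_normal((n_patients, n_factors))
    outcome = rng.standard_normal(n_patients)
    for j in range(n_factors):
        r = np.corrcoef(factors[:, j], outcome)[0, 1]
        t = r * np.sqrt((n_patients - 2) / (1 - r**2))  # t-test of r = 0
        if abs(t) > 1.972:  # two-sided p < 0.05 cutoff for df = 198
            false_hits += 1

rate = false_hits / n_trials
print(rate)  # averages about 1 spurious "finding" per 20-factor study
```

The expected count is simply 20 × 0.05 = 1 spurious correlation per study, which is exactly what the simulation shows.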


Quote:
The truth is that both types of replication are necessary -- replicating the original procedures exactly is as important as replicating the procedure with similar data. You seem to believe that the first step in science is to perform an entirely new study which is "similar" to the original one, to see if you get "similar" answers.

In fact, that's the second step. The first step is to see if the original study contains any mistakes, by answering questions like:
• Did the authors use the right data?
• Did they have ex ante data selection criteria?
• Did they use the correct mathematical approach?
• Did they perform the selected mathematics correctly?
• Did they report back all findings, adverse as well as supportive?
• Did they calculate the error bars and confidence intervals for their results?
• Have they revealed all of their methods and code, so that a "similar" study can even be performed?
• Did they follow their own procedures as listed in their study?
• Does following their procedures lead to the results that they claim?
In practice, science rarely proceeds this way. To some degree, a referee will consider some of these questions but certainly not at the level of detail that you are talking about. And, if two studies are in stark conflict, then some people (perhaps the authors of one or the other of the studies) may be motivated to go back and try to look in somewhat more detail at these issues.

So, what you are proposing is essentially a change in scientific procedure.

Quote:
In fact, we can't even do a "similar study" to what Mann did until we know exactly what Mann did do ... which was tough, since he refused to reveal how he did it ...
No, he just refused to provide his computer code, just as most scientists do not release their code.

Quote:
Yes, the other bristlecone based studies are "similar" to the Hockeystick, just like you want, and yes, they give "similar" results, which seems to be your gold standard ... but it is only because they are similarly flawed, and are based on similarly poor statistics and/or similarly biased data.
Well, it seems like your conclusion that everyone is doing things wrong is largely because they are getting a result that you don't like.

Quote:
In the Hockeystick study, Mann made an egregious statistical error, a real n00bie blunder, because as Mann himself said, "I am not a statistician". The result of that error is that his method "mines" for hockeysticks, finding them in purely random red noise. Unfortunately, Mann used his position in the IPCC to splash his blunder all over the world, making the Hockeystick the icon of the AGW movement.
And, the National Academy of Sciences study clearly concluded that this defect in the method turned out to be irrelevant:

Quote:
As part of their statistical methods, Mann et al. used a type of principal component analysis that tends to bias the shape of the reconstructions. A description of this effect is given in Chapter 9. In practice, this method, though not recommended, does not appear to unduly influence reconstructions of hemispheric mean temperature; reconstructions performed without using principal component analysis are qualitatively similar to the original curves presented by Mann et al. (Crowley and Lowery 2000, Huybers 2005, D’Arrigo et al. 2006, Hegerl et al. 2006, Wahl and Ammann in press).
This is not too surprising since I would imagine that Mann actually did it a few different ways and then wrote up for publication the one that he felt was best (perhaps because it seemed most formally rigorous). He is not the first one to do pathbreaking work that contains some errors but nonetheless gets pretty much the same answer one gets if one does it without the questionable methods.

I admit that there are still some unresolved issues in regards to temperature reconstructions, e.g., regarding the proxy data and the like. Such reconstructions are difficult and imperfect to be sure, as the NAS report notes, e.g., regarding the strong dependence "on data from the Great Basin region in the western United States." Issues of the dependence of the result on certain aspects of the data were addressed by Mann et al. themselves in this paper, one year after their Nature paper was published.
  #66  
Old 09-17-2007, 12:54 AM
intention intention is offline
Guest
 
Join Date: Feb 2006
jshore, I said:

Quote:
Yes, the other bristlecone based studies are "similar" to the Hockeystick, just like you want, and yes, they give "similar" results, which seems to be your gold standard ... but it is only because they are similarly flawed, and are based on similarly poor statistics and/or similarly biased data.
And you replied:

Quote:
Well, it seems like your conclusion that everyone is doing things wrong is largely because they are getting a result that you don't like.
No. I go by the facts. The NAS panel was clear on some things. For one, they agreed that the Hockeystick was dependent in its entirety on one dataset, the strip-bark (bristlecone) pines. And further, the NAS panel agreed that strip-bark (bristlecone) pines should not be used in tree ring paleoclimate reconstructions.

But then they went ahead and showed other reconstructions in their report that were similar to the Hockeystick that depended on the very datasets whose use they had just condemned (strip-bark/bristlecone pines). Some of the reconstructions they showed actually used the exact Mann PC1 data that they agreed was the cause of the "hockeystick" shape.

This kind of politically inspired "compromise science" is another reason why I don't trust things like the NAS panel ... the report was almost clinically schizophrenic, condemning something with one hand and praising the same thing with the other. For example, the Panel recommended that the Durbin-Watson statistic be used to gauge the validity of the various reconstructions ... then approvingly spoke of reconstructions that failed that very test. I can give more examples of this in their report, there's plenty of them.

So no, I make my decisions based on science. Sometimes I don't like the results, but if they're valid, they're valid, regardless of my likes and dislikes.

Regarding the reconstructions that supposedly "replicate" the Hockeystick, I could go into almost endless detail regarding the exact mistakes of proxy selection, incorrect choice of statistics, lack of ex ante methods, failure to use a verification period, use of "grey" data versions, lack of archiving of data, minimization of error bars, and a host of other very technical issues if you'd like, but the short version is that the reconstructions that claim to "replicate" Mann's work are riddled with inaccuracies.

Please get your facts straight before making this kind of accusation. Your response made absolutely no attempt to see if what I said was true. It was simply a distasteful personal attack, with no citations and no data, claiming that I made up my mind based on what I "like" rather than on the facts. I can assure you that I have spent hundreds of hours researching and writing about this very issue.

Surely at this point you know me better than that, jshore. I do my homework before making any claims ... and I would encourage you to do the same.

w.
  #67  
Old 09-17-2007, 01:48 AM
Blake Blake is offline
Member
 
Join Date: Mar 2001
Posts: 10,207
Quote:
Originally Posted by jshore
As near as I understand it without investing a large amount of time, one point of that paper the Wall Street Journal wrote about is that if you actually look at 20 possible different factors, then purely by chance an average of one of them will be correlated with heart attacks at a 95% confidence level. I don't see how such concerns apply to the climate science field.
I've explained exactly why in previous threads on this topic, and I know that you read those explanations. Nonetheless I am glad to see that you accepted this criticism as valid in your own words.

The last few centuries' warming event is a self-selected sample. We are studying it in-depth purely and entirely because it was so dramatic that it drew itself to our attention. It is no different to all those reports of cancer clusters in schools or office buildings, or the similarities between the lives of Kennedy and Lincoln, or any of the billions of other self-selected samples that fill the tabloids.

The problem with self-selected samples of this sort is that we have absolutely no idea how large our sample space is. We noticed global temperatures because they are undergoing a dramatic change. But if it hadn't been global temperatures then it could have been encroachment of woody plants into grasslands, or the sex ratios of crocodiles, or the incidence of hurricanes, or the frequency of La Nina events or the incidence of red tides or the severity of frost damage on snowpeas. It could have been any of those or literally thousands of other possible factors that have been or mechanistically could be attributed to changes in carbon dioxide levels.

Which brings us back to what you just said: if you actually look at 20 possible different factors, then purely by chance an average of one of them will be correlated with increases in carbon dioxide at a 95% confidence level.

Yet our sample space is far, far larger than just 20 different factors. With just a little thought I could make a list of literally thousands of factors that plausibly could be or actually have been mechanistically blamed on increases in CO2 levels.

The only reason that we have spent so much time investigating the correlation between CO2 and temperature is because we already knew that temperatures were increasing steadily. But if temperatures hadn't been increasing steadily then we wouldn't have investigated it to the extent that we have. And if the sex ratios of crocodiles or the incidence of hurricanes had been increasing steadily we would have spent much more time investigating those factors and, surprise surprise, those factors would have been 95% correlated with changes in atmospheric CO2.

This is the problem with self-selected samples of this type. Our sample space is infinitely large, far larger than 20 samples, yet you yourself admit that if you actually look at just 20 possible different factors, then purely by chance an average of one of them will be correlated with increases in carbon dioxide at a 95% confidence level. In this reality, at this time, the factor that happens to correlate is rising temperature. In another reality it could have been the increase in woody plant density in grasslands or the sex ratios of alligators, but we don't get to see these. We only see the factor that, purely by chance, correlates in this reality.

And as you yourself admitted, this is just a correlation. It isn't cause and effect; it's a pure chance correlation, because if you look at just 20 possible different factors, then purely by chance an average of one of them will be correlated with increases in carbon dioxide at a 95% confidence level. In this case it's temperature that correlates.

And this is why the science needs to be exceptionally rigorous, and this is why anything less than a 95% correlation is meaningless.

And this is why the article that Intention quoted is highly relevant to the debate at hand.

And this is why it is disappointing that such a vocal proponent of the correlation doesn't see how such concerns apply to the climate science field.
  #68  
Old 09-17-2007, 03:08 AM
intention intention is offline
Guest
 
Join Date: Feb 2006
Quote:
Originally Posted by jshore
As near as I understand it without investing a large amount of time, one point of that paper the Wall Street Journal wrote about is that if you actually look at 20 possible different factors, then purely by chance an average of one of them will be correlated with heart attacks at a 95% confidence level. I don't see how such concerns apply to the climate science field.
Blake has replied to this far more clearly than I could, many thanks. I'd just like to add a couple of points.

One is that a central tenet of science involves the calculation of standard errors, or confidence intervals. This is intimately related to the "one chance in 20" that jshore and Blake spoke of. Far too many "scientific" climate studies either ignore these entirely, or underestimate them greatly. Mann's Hockeystick is a perfect example, as his error bars can be shown mathematically to be far too narrow.
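To illustrate the general point about error bars (a toy example with made-up parameters, not Mann's actual calculation): for autocorrelated data, the naive standard error of the mean, which assumes independent samples, can be several times too small. A standard first-order correction replaces the sample size n with an effective sample size n·(1−ρ)/(1+ρ), where ρ is the lag-1 autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, n = 0.9, 1000  # illustrative persistence and sample size

# Simulate an AR(1) series: each value keeps 90% of the previous one.
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Naive standard error of the mean treats all n points as independent.
se_naive = x.std(ddof=1) / np.sqrt(n)

# First-order correction: shrink n to an effective sample size.
rho = np.corrcoef(x[:-1], x[1:])[0, 1]  # estimated lag-1 autocorrelation
n_eff = n * (1 - rho) / (1 + rho)
se_adj = x.std(ddof=1) / np.sqrt(n_eff)

print(se_naive, se_adj, n_eff)
```

With persistence of 0.9, the effective sample size is only a few percent of n, so the honest error bars come out roughly four times wider than the naive ones.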

The other is that much of climate science involves what are called "mathematical transformations". In these, a dataset of some kind is subjected to a variety of mathematical operations, which results in a new "transformed" dataset.

Some examples will help understand the concept. We take a dataset consisting of the strength of microwaves as seen by a satellite, and transform them into a temperature record of the atmosphere.

We take a dataset of tree rings, and transform them into an estimate of historical droughts and rainy periods.

We take a dataset of ground station temperatures, and transform them into a gridded global temperature dataset.

Now, before these mathematical transformations can be accepted scientifically, we need to be able to examine:
1) The original data.

2) The exact mathematical operations performed.

3) The final result.
Otherwise, we have no way of knowing whether the study has any validity.

Unfortunately, in climate science there have been far too many examples of scientists refusing to reveal some or all of the data and the operations. Many of the paleoclimate reconstructions depend on data which has never been archived. Thompson's Guliya ice-core data, for example, is a staple of the reconstructions which show no Medieval Warm Period ... but Thompson has refused to archive the data. Similarly, Phil Jones has refused to reveal which temperature stations he used for the HadCRUT3 data set, despite my Freedom of Information Act request for the data. And this is just the tip of the iceberg.

This blind acceptance of some scientist's claim that "X allows me to reconstruct a thousand years of climate" reached its peak with the Hockeystick. Until Steve McIntyre tried to actually unravel what Mann had done, it was accepted world-wide as good science ... despite the fact that it was fatally flawed, as the NAS agreed, by the facts that:
• Early segments of the MBH reconstruction fail verification significance tests, a finding later confirmed by Wahl and Ammann and accepted by the NAS Panel.

• Far from being “robust” to the presence or absence of all dendroclimatic indicators as Mann had claimed, McIntyre showed that results vanished just by removing the controversial bristlecones, a result also confirmed by Wahl and Ammann and noted by the NAS Panel.

• McIntyre showed that the PC method yielded biased trends because it was calculated incorrectly, an effect confirmed by the NAS and Wegman panels.

• McIntyre showed that pivotal PC1 was not a valid temperature proxy due to non-climatic contamination in the dominant-weighted proxies (bristlecones, foxtails). Here again the NAS panel concurred, saying that strip-bark bristlecones should not be used in climate reconstructions.
Mann fought like crazy to avoid revealing any of this, because he knew what he had done. He knew that he had calculated the R^2 statistic, and that it showed that his results were not significant, so he claimed that he had never calculated it ... but when his code was revealed, there it was, he had calculated it. He knew that the results were not robust, he had calculated that as well, and put the unwanted results in a folder called "CENSORED" ...

And that kind of thing is why transparency is so important in science. Without transparency, anyone can make any kind of mistaken, false, or fraudulent claim and never have it come to light. Now, most climate scientists are not involved in fraudulent science like that. But most climate scientists are also not trained statisticians. They often use what they call "novel" statistical methods to obtain their results. Again, without transparency of data and procedures, these "novel" methods cannot be subjected to a proper statistical analysis.

Finally, to close the circle, many of the methods used to assign the 95% confidence intervals are simply incorrect. Climate statistics (temporal records of rainfall, humidity, temperature, etc.) are known to be non-Gaussian, non-stationary, and subject to both short- and long-range autocorrelation. Because of this, normal statistical methods do not apply to these datasets; special methods must be used. Far too many climate scientists either do not know this or ignore it. As a result, their error estimates can be off by orders of magnitude.
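The classic demonstration of this is the well-known "spurious regression" effect (my own quick sketch below, not drawn from any particular climate paper): test two completely independent random walks for correlation using ordinary statistics that assume independent samples, and the nominal 5% false-positive rate is exceeded many times over:

```python
import numpy as np

rng = np.random.default_rng(7)
n, trials = 100, 500

hits = 0
for _ in range(trials):
    # Two completely independent random walks (extreme autocorrelation).
    x = np.cumsum(rng.standard_normal(n))
    y = np.cumsum(rng.standard_normal(n))
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt((n - 2) / (1 - r**2))  # naive t-test of r = 0
    if abs(t) > 1.984:  # two-sided p < 0.05 cutoff for df = 98
        hits += 1

rate = hits / trials
print(rate)  # far above the nominal 0.05
```

Two series that have nothing whatsoever to do with each other come up "significantly correlated" in the majority of trials, purely because the naive test treats 100 strongly dependent points as 100 independent ones.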

The tragic reality is that climate science has become so politicized that we cannot trust what anyone says, on either side of the discussion. Whether through misunderstanding, mistake, mischance, or mischief, far too many climate studies are fatally flawed. That is why transparency is not an option, but a requirement, if we are ever to unravel this most complex question that we call climate.

Regards to all,

w.
  #69  
Old 09-17-2007, 12:33 PM
jshore jshore is offline
Charter Member
 
Join Date: May 2000
Posts: 6,460
Quote:
Originally Posted by intention
No. I go by the facts. The NAS panel was clear on some things. For one, they agreed that the Hockeystick was dependent in its entirety on one dataset, the strip-bark (bristlecone) pines. And further, the NAS panel agreed that strip-bark (bristlecone) pines should not be used in tree ring paleoclimate reconstructions.

But then they went ahead and showed other reconstructions in their report that were similar to the Hockeystick that depended on the very datasets whose use they had just condemned (strip-bark/bristlecone pines). Some of the reconstructions they showed actually used the exact Mann PC1 data that they agreed was the cause of the "hockeystick" shape.

This kind of politically inspired "compromise science" is another reason why I don't trust things like the NAS panel ... the report was almost clinically schizophrenic, condemning something with one hand and praising the same thing with the other. For example, the Panel recommended that the Durbin-Watson statistic be used to gauge the validity of the various reconstructions ... then approvingly spoke of reconstructions that failed that very test. I can give more examples of this in their report, there's plenty of them.
I think your discussion of the NAS report is quite revealing. It seems to me that when you see something that doesn't make sense to you, you quite quickly jump to the conclusion that you are right and they are wrong. So, you conclude that the NAS report is "almost clinically schizophrenic". In fact, I (and presumably the authors of the report themselves) see it much differently than you. For example, I don't think that they say that the Mann method caused the hockeystick shape. What they say is what I quoted above, namely that the method used by Mann is not a recommended one because one can create artificial data sets where it behaves badly but that, in actual fact, it turned out that it "does not appear to unduly influence reconstructions of hemispheric mean temperature". Also, with respect to certain kinds of proxies, yes, they see some potential problems with the strip-bark pines, but that does not mean that they think any reconstruction that contains them is garbage. At any given time in any scientific field, there are "issues", i.e., things that one can identify as most in need of improvement. However, that does not mean that every result in the field is garbage. As practicing scientists, the members of the NAS panel know that it is not black-and-white, either-or. So, while they feel that these issues limit their confidence in conclusions regarding temperatures before 1600 A.D., they do not argue that we know nothing whatsoever about this.

Quote:
So no, I make my decisions based on science. Sometimes I don't like the results, but if they're valid, they're valid, regardless of my likes and dislikes.
Sorry...but I think everyone has their biases. I do and you do. And, in fact, in my discussions with you, I think it has been quite clear to me anyway that your biases strongly influence your opinions about certain papers in the literature: some (like that paper on global temperature by McKitrick) you seem to accept quite uncritically, while others (like the Santer et al. paper about the discrepancies between temperature datasets in the tropics) you go through with a fine-toothed comb or refuse to even take the time to understand.

That is why one can't trust the conclusions of any one scientist, including yourself, and must look at the general view in the field.

Quote:
Please get your facts straight before making this kind of accusation. Your response made absolutely no attempt to see if what I said was true. It was simply a distasteful personal attack, with no citations and no data, claiming that I made up my mind based on what I "like" rather than on the facts. I can assure you that I have spent hundreds of hours researching and writing about this very issue.

Surely at this point you know me better than that, jshore. I do my homework before making any claims ... and I would encourage you to do the same.
I am sorry that you took offense to what I said. I didn't mean to imply that you haven't done your homework. I think it is clear to all of us that you have spent a great deal of time investigating stuff in this field. However, this doesn't mean you do not have your own strong biases that influence your conclusions. I think this might be further magnified by the fact that, unlike many of us scientists, you have (as I understand it) received your scientific training in this one field where you and others (on both sides) have very strong biases and prejudices. I.e., you haven't had the experience of doing science in a field where you weren't in the position of strongly advocating for one side or the other in a pretty polarized debate.

I do admire the intelligence and diligence that you bring to your studying of these issues in climate science. However, please understand that I do remain skeptical when you reach conclusions quite at odds with most of the other scientists working in the field.
Reply With Quote
  #70  
Old 09-17-2007, 01:16 PM
Hentor the Barbarian Hentor the Barbarian is offline
Guest
 
Join Date: Jun 2002
Quote:
Originally Posted by Blake
The last few centurys' warming event is a self-selected sample. We are studying it in-depth purely and entirely because it was so dramatic that it drew itself to our attention. It is no different to all those reports of cancer clusters in schools or office buildings, or the similarities between the lives of Kennedy and Lincoln, or any of the billions of other self-selected samples that fill the tabloids.
A good deal of what you write in this post makes sense. Some of it doesn't. What do you mean by "self-selected"? Usually when we say that in the social sciences, we mean that there was some mechanism by which the respondents in the data set identified themselves. A clinical sample, people responding to an ad, people showing up to vote, all of those will differ from the population as a whole because they did something to select themselves.

How is it that our global temperatures have distinguished themselves from other global temperatures? They are what they are. If they've gotten hotter, there must be some mechanism, but to say that they are self-selected suggests some sort of agency on the part of temperature, and some host of other, contemporaneous, temperatures that they've distinguished themselves from (other than by simply changing over time).
Quote:
The problem with self-selected samples of this sort is that we have absolutely no idea how large our sample space is. We noticed global temperatures because they are undergoing a dramatic change. But if it hadn't been global temperatures then it could have been encroachment of woody plants into grasslands, or the sex ratios of crocodiles, or the incidence of hurricanes, or the frequency of La Nina events or the incidence of red tides or the severity of frost damage on snowpeas. It could have been any of those or literally thousands of other possible factors that have been or mechanistically could be attributed to changes in carbon dioxide levels.
True, but I guess I don't see this as relevant to your point. It sounds like you are pointing out a large number of possible other domains to consider for concurrent changes, but this doesn't get at your underlying point about the meaning of the 95% confidence interval.
Quote:
Which brings us back to what you just said: if you actually look at 20 possible different factors, then purely by chance an average of one of them will correlated with increases in carbon dioxide at a 95% confidence level.
This is why there are corrections to be employed for multiple comparisons. Even if you look at one comparison in one data set, adopting the alpha level of .05 means that you are accepting an error rate of 5 in 100. Put another way, you're saying that you'd expect to see differences of a given size in variance between groups arise simply by chance (rather than by the mechanism of the study) fewer than 5 times out of 100. That's why replication with different data sets is important. Once you show significant differences on a given factor in two different data sets, your confidence in the robustness of the relationship goes up.
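The arithmetic behind the multiple-comparisons worry is easy to check with a quick Monte Carlo sketch (purely illustrative, with made-up null data, not any climate record): with 20 independent tests at alpha = .05, the chance that at least one comes up "significant" by pure chance is 1 - 0.95^20, about 64%.

```python
import random

random.seed(0)

TRIALS = 10_000   # simulated "studies" under the null hypothesis
FACTORS = 20      # independent factors tested in each study
ALPHA = 0.05      # per-test false-positive rate

# Under the null, each test "rejects" with probability ALPHA by chance.
# Count how often at least one of the 20 tests in a study rejects.
hits = sum(
    any(random.random() < ALPHA for _ in range(FACTORS))
    for _ in range(TRIALS)
)
familywise_rate = hits / TRIALS
print(f"at least one false positive in {familywise_rate:.0%} of studies")
print(f"analytic value: {1 - (1 - ALPHA) ** FACTORS:.3f}")
```

This is exactly why corrections like Bonferroni exist for comparisons *within* one analysis; it says nothing about replication across independent data sets, which works in the opposite direction.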
Quote:
Yet our sample space is far, far larger than just 20 different factors. With just a little thought I could make a list of literally thousands of factors that plausibly could be or actually have been mechanistically blamed on increases in CO2 levels.
I don't understand this, or its relevance here. Are you saying that the multiple comparisons concern holds for the theoretical number of comparisons that could be made? That can't be it, because that doesn't make sense.
Quote:
The only reason that we have spent so much time investigating the correlation between CO2 and temperature is because we already knew that temperatures were increasing steadily. But if temperatures hadn't been increasing steadily then we wouldn't have investigated it to the extent that we have.
Yes. It's an observed phenomenon. That's irrelevant to the issue of multiple comparisons or the acceptance of a false positive rate of 5 times in 100. That's like saying that there is something suspect about studying what influences depression, or crime rates, or cancer deaths, just because your attention was drawn to them. What else is science about but trying to explain observed phenomena?
Quote:
This is the problem with self-selected samples of this type. Our sample space is infinitely large, far higher than 20 samples, yet you yourself admit that if you actually look at just 20 possible different factors, then purely by chance an average of one of them will correlated with increases in carbon dioxide at a 95% confidence level.
Again, it means that using the statistical distributions we do, we would expect observed differences of a given number or greater to arise purely by chance 5 times in 100. Meaning that any single given difference could fall in that five percent range, and that if you run 20 comparisons within your data set, you are increasing the risk of calling something significantly different due to your explanatory variables when it was really due to chance. This does not mean that the multiple comparisons problem extends across data sets.
Quote:
And this is why the science needs to be exceptionally rigorous, and this is why anything less than a 95% correlation is meaningless.
Hopefully this is a typo, because there is no such standard as accepting only a 95% correlation (whatever a 95% correlation would even mean). I'm hard pressed, however, to figure out what you might have meant otherwise.

Hopefully you can clarify some of this, because as it stands, it is pretty confusing, and seemingly erroneous.

Last edited by Hentor the Barbarian; 09-17-2007 at 01:18 PM..
Reply With Quote
  #71  
Old 09-17-2007, 02:36 PM
jshore jshore is offline
Charter Member
 
Join Date: May 2000
Posts: 6,460
Quote:
Originally Posted by Blake
The last few centurys' warming event is a self-selected sample. We are studying it in-depth purely and entirely because it was so dramatic that it drew itself to our attention. It is no different to all those reports of cancer clusters in schools or office buildings, or the similarities between the lives of Kennedy and Lincoln, or any of the billions of other self-selected samples that fill the tabloids.

The problem with self-selected samples of this sort is that we have absolutely no idea how large our sample space is. We noticed global temperatures because they are undergoing a dramatic change. But if it hadn't been global temperatures then it could have been encroachment of woody plants into grasslands, or the sex ratios of crocodiles, or the incidence of hurricanes, or the frequency of La Nina events or the incidence of red tides or the severity of frost damage on snowpeas. It could have been any of those or literally thousands of other possible factors that have been or mechanistically could be attributed to changes in carbon dioxide levels.
These two paragraphs (and the rest of what follows in your post) are completely wrong on the history of the theory. That is not how it happened at all. Global temperatures weren't self-selected because they did something dramatic. In fact, when Arrhenius first calculated what sort of warming a doubling of CO2 might cause, it was a theoretical exercise, not one based on any belief that such a rise had yet even begun to occur...but that it might eventually if we kept burning fossil fuels. And, when James Hansen was before Congress in 1988 and argued that the signal due to greenhouse warming had emerged from the noise, many scientists were dubious. After all, the temperature...after having risen during the first part of the century had remained steady or even dropped a bit during the middle part of the century and had only started rising again in the 1970s. Many scientists at that time didn't yet think it was a significant trend at all. And, of course, on the basis of the AGW theory, Hansen made the prediction that this rise would continue, a prediction that turned out to be correct.

Your whole post entirely neglects the fact that the evidence for AGW is not primarily simple statistical correlation of the sort considered, e.g., in the medical sciences. It is not searching in the dark for correlations. Rather, it is based on mechanistic understanding. The evidence from temperature reconstructions of the last millennium or so is but one piece of evidence (and certainly the piece that is most circumstantial and statistical in nature, as well as suffering from real data quality issues, as intention has correctly noted even if overstating them); however, even for it, I think you are on pretty weak ground to be arguing that people could have found any number of other factors to focus on as anomalous. Global temperature was the obvious factor to focus on based on our mechanistic understanding of what CO2 ought to do. Now that climate models have advanced to the point where more complex questions can be considered, there is more emphasis on starting to investigate what other effects this will have on climate, e.g., extreme events like droughts and floods and heatwaves and hurricanes. However, the emphasis was initially focused on global temperature because the understanding of the greenhouse effect made it the obvious thing to consider.

Last edited by jshore; 09-17-2007 at 02:40 PM..
Reply With Quote
  #72  
Old 09-19-2007, 11:53 AM
Hentor the Barbarian Hentor the Barbarian is offline
Guest
 
Join Date: Jun 2002
Before this drops away entirely, I'd just like to say that I was hoping Blake or intention could come back to respond to my post and help to clear up some of my confusion about how the concerns about multiple comparisons really are problematic here.

Just a note about intention's issue of transformations - what I typically think of when the term "transformation" is used in regard to data is a fairly simple and straightforward transformation of the data itself. Taking the square, log or square root, or adding a constant - such transformations are common, and are not at all a threat to the integrity of the analyses. They can complicate the interpretation of regression parameters, perhaps, but there is nothing unsound about them. They aren't gaming the analyses in any way.

The examples he gives sound more like operationalizations. Like saying, "we'll call temperature the amount indicated by the height of a column of mercury." Whether any concerns arise from the examples he gives is not my domain, but I just wanted to clarify any confusion that might arise associated with what I would think of as mathematical transformations.
Reply With Quote
  #73  
Old 09-19-2007, 08:34 PM
intention intention is offline
Guest
 
Join Date: Feb 2006
Quote:
Originally Posted by Hentor the Barbarian
Before this drops away entirely, I'd just like to say that I was hoping Blake or intention could come back to respond to my post and help to clear up some of my confusion about how the concerns about multiple comparisons really are problematic here.

Just a note about intention's issue of transformations - what I typically think of when the term "transformation" is used in regard to data is a fairly simple and straightforward transformation of the data itself. Taking the square, log or square root, or adding a constant - such transformations are common, and are not at all a threat to the integrity of the analyses. They can complicate the interpretation of regression parameters, perhaps, but there is nothing unsound about them. They aren't gaming the analyses in any way.

The examples he gives sound more like operationalizations. Like saying, "we'll call temperature the amount indicated by the height of a column of mercury." Whether any concerns arise from the examples he gives is not my domain, but I just wanted to clarify any confusion that might arise associated with what I would think of as mathematical transformations.
Hentor, thanks for the post. To answer the last question first, "mathematical transformation" is a general term for transforming one dataset into another, using one or more of a huge variety of mathematical operations (linear algebra, matrix algebra, logarithms, Kalman filtering, wavelets, differencing, etc. ad infinitum). A quick Google search, for example, finds the term applied in ways such as

Quote:
Originally Posted by GOOGLE
... A mathematical transformation of multi-angular remote sensing data for the study of vegetation change. ...

... Estimation of central aortic pressure waveform by mathematical transformation of radial tonometry pressure ...

... Heart sounds--a mathematical transformation of blood pressure? ...
Note that this is exactly the sense in which I used it, that of mathematically transforming a dataset in one domain (tonometry, remote sensing data, blood pressure, tree ring width) into another domain (Heart sounds, central aortic pressure, temperature, vegetation change).

While "operationalization" has a similar meaning, it seems to be used mostly in the social sciences. "Transformation", on the other hand, is used in signal processing. This is what we are doing in climate science, as evidenced by the use of the term above regarding satellite data and vegetation growth. I am using the more common term in the context of climate science when I refer to this as "transformation".

Next, you said that you wished Blake or I could "clear up some of [your] confusion about how the concerns about multiple comparisons really are problematic here."

I fear that I am the one who is confused now. Perhaps you could restate your concerns, so that I could answer them directly.

All the best,

w.
Reply With Quote
  #74  
Old 09-20-2007, 08:13 AM
Hentor the Barbarian Hentor the Barbarian is offline
Guest
 
Join Date: Jun 2002
Okay, in simple summary:

1) Any constant transformation (regardless of how many you want to list) is not a statistical problem, so I don't see the concern.

2) How can a property, such as temperature, have any sort of agency to select itself, a ham sandwich for lunch, or Al Gore for president?

3) What relevance does the concern about Type I error in inferential statistics have for this discussion?

a) What is a "95% correlation", and since when did it become regarded as any sort of lowest threshold for scientific "meaning"?

b) Is there any evidence in support of the common acceptance of 95% correlation for meaningfulness?
Reply With Quote
  #75  
Old 09-20-2007, 06:21 PM
intention intention is offline
Guest
 
Join Date: Feb 2006
Thanks, Hentor, that makes it much clearer. Regarding your questions:

Quote:
Originally Posted by Hentor the Barbarian
Okay, in simple summary:

1) Any constant transformation (regardless of how many you want to list) is not a statistical problem, so I don't see the concern.
Let's take a real example, the transformation of tree rings to global temperature. In general, the transformation procedure is:
a) Select some tree rings to use as proxies.

b) Compare the tree ring data to historical temperature data during part (the "correlation period") of the overlap period of the two datasets, and define a mathematical transformation of tree ring width to temperature.

c) Use the transformation to hindcast the temperature during the other unused part of the overlap period (the "verification period"), and determine if the transformation gives a statistically significant result.

d) Use the verified results to estimate the reconstructed temperature for historical periods for which we have no temperature records.

e) Estimate the errors in the historical period.
As you can see, there are large statistical issues in many parts of this, including:
a) How well does the reconstruction match the temperature during the verification period?

b) How well does the reconstruction need to match the verification temperature in order to be considered valid?

c) Does the tree ring reconstruction temperature need to match the local temperature as well as the global temperature to be considered valid, and if so, how well?

d) How many tree ring datasets are needed to define a global temperature?

e) What is the estimated correlation coefficient (R^2) for the historical reconstruction?

f) What are the error estimates for the various historical periods of the reconstruction?
So yes, transformations can involve very important statistical issues, considerations and questions.
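As a minimal sketch of steps (b) and (c), here is the calibrate-then-verify pattern with synthetic numbers standing in for real ring widths and thermometer records (the series, the linear calibration, and the split are illustrative assumptions, not any particular published method):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: 150 years of "temperature" and a noisy
# "ring-width index" linearly related to it.
years = np.arange(1850, 2000)
temp = 0.005 * (years - 1850) + 0.2 * rng.standard_normal(years.size)
proxy = 2.0 * temp + 0.3 * rng.standard_normal(years.size)

# (b) Calibrate a linear transformation on the first half of the overlap.
cal = slice(0, 75)
ver = slice(75, 150)
slope, intercept = np.polyfit(proxy[cal], temp[cal], 1)

# (c) Hindcast the held-out verification period and check the fit.
pred = slope * proxy[ver] + intercept
r = np.corrcoef(pred, temp[ver])[0, 1]
print(f"verification R^2 = {r**2:.2f}")
```

The statistical questions in the list above are precisely about this last number: how high does the verification R^2 have to be, over how many proxy series, before the hindcast outside the instrumental period deserves any confidence?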

Quote:
Originally Posted by Hentor the Barbarian
2) How can a property, such as temperature, have any sort of agency to select itself, a ham sandwich for lunch, or Al Gore for president?
Blake can correct me if I am wrong, but I believe he is saying that out of the hundreds of climate variables (temperature, humidity, tropospheric lapse rates, etc.), scientists have self-selected one of them.

Quote:
Originally Posted by Hentor the Barbarian
3) What relevance does the concern about Type I error in inferential statistics have for this discussion?
Type I error is a "false positive", meaning that we falsely think that something is significant when actually it occurs by chance. This type of error is very relevant to climate science. The null hypothesis is that a given change in climate records occurs by chance, and a false positive means that we assume that the change is actually due to some external factor, oh, say, change in CO2 ...

The chance for false positives in climate studies is greatly increased because of the general autocorrelation of climate records. To quote from here:

Quote:
Serial correlation is a significant problem because nearly all statistical techniques assume that any random errors are independent. Instead, when serial correlation is present, each error (or residual) depends on the previous residual. One way to think about this problem is that because the residuals are largely redundant (i.e., not independent of one another), the effective degrees of freedom are far fewer than the number of observations. If serial correlation is present but is not accounted for, several nasty things will happen:

2.1. Spurious--but visually convincing--trends may appear in your data (see Figures 1 and 2, above). These may occur in many different forms, including linear trends, abrupt steps, periodic or aperiodic cycles, etc.

2.2. Although the regression coefficients (or other results of your analysis) will still be unbiased, you will underestimate their uncertainties, potentially by large factors. For serial correlation coefficients of rho=0.6 and above, the actual standard error of the regression slope will be more than twice s_b, the estimate provided by conventional regression techniques. This effect increases with sample size, and increases drastically with rho (see Figure 3, below). (The symbol rho denotes the true correlation between residuals; it is estimated by r, defined in section 3.2, below.)

2.3. Because uncertainties will be underestimated, confidence intervals and prediction intervals will be too narrow.

2.4. Estimates of "goodness of fit" will be exaggerated.

2.5. Estimates of statistical significance will be exaggerated, perhaps vastly so. Your actual false-positive rate can be much higher than the alpha-value used in your statistical tests (see Figure 4). For serial correlation of rho = 0.5, tests for the significance of the regression slope at alpha=5% will have an actual false positive rate of roughly 30% (six times alpha); the rate for alpha=1% will be roughly 15% (15 times alpha), and the rate for alpha=0.1% will be roughly 7% (70 times alpha). If rho=0.9, the false positive rate can be over 50 percent, and largely independent of alpha. Getting more data will not help; this effect increases with sample size. You can also make things much worse by selectively picking out the parts of a time series in which your eye finds an apparent trend, or selectively fitting to nonlinear trends that your eye finds in the data.

Serial correlation can corrupt many different kinds of analyses (including t-tests, ANOVA’s, and the like), but its effects on linear regression are most widely appreciated. Serial correlation is particularly problematic when one is trying to detect long-term trends; in fact, some noted authorities declare that serial correlation makes linear regression invalid for trend detection. These noted authorities are wrong. Serially correlated data can be analyzed with many different methods, including regression, as long as the serial correlation is properly taken into account.
Note that many of these problems involve Type I errors, and that they can be very large if autocorrelation is large. Temperature datasets typically have a lag-1 autocorrelation (rho) of about 0.8, leading to a huge chance of Type I errors.
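The inflation is easy to demonstrate by simulation: generate trendless AR(1) noise with a lag-1 correlation of 0.8 and run a naive OLS trend test that ignores the autocorrelation (a sketch with synthetic data; the 1.98 cutoff is the approximate two-sided 5% t-value for a series of 100 points):

```python
import numpy as np

rng = np.random.default_rng(1)

N, TRIALS, RHO = 100, 2000, 0.8
t = np.arange(N)
sxx = np.sum((t - t.mean()) ** 2)

false_pos = 0
for _ in range(TRIALS):
    # AR(1) noise with lag-1 autocorrelation RHO -- no true trend at all.
    e = rng.standard_normal(N)
    y = np.empty(N)
    y[0] = e[0]
    for i in range(1, N):
        y[i] = RHO * y[i - 1] + e[i]
    # Naive OLS trend test, ignoring the serial correlation.
    coeffs = np.polyfit(t, y, 1)
    resid = y - np.polyval(coeffs, t)
    se = np.sqrt(np.sum(resid ** 2) / (N - 2) / sxx)
    if abs(coeffs[0] / se) > 1.98:   # nominal 5% two-sided threshold
        false_pos += 1

print(f"false-positive rate: {false_pos / TRIALS:.0%}")  # far above 5%
```

Every series here is pure noise with no trend, yet the naive test "detects" a significant trend far more than 5% of the time, which is the quoted passage's point in executable form.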

Quote:
Originally Posted by Hentor the Barbarian
a) What is a "95% correlation", and since when did it become regarded as any sort of lowest threshold for scientific "meaning"?
Again Blake can correct me if I am wrong, but I believe he is talking about a 95% confidence interval (p<0.05).

Quote:
Originally Posted by Hentor the Barbarian
b) Is there any evidence in support of the common acceptance of 95% correlation for meaningfulness?
This is an arbitrary level. It means that the odds of the findings occurring due to random fluctuations in the data (false positive, or Type I error) are less than one in twenty (5%). This level is commonly used in scientific studies, but it depends on the consequences of false positives. If a false positive would make a large difference, a 99% confidence interval is sometimes used. The IPCC, on the other hand, uses a 90% cutoff.

w.
Reply With Quote
  #76  
Old 09-20-2007, 08:59 PM
Hentor the Barbarian Hentor the Barbarian is offline
Guest
 
Join Date: Jun 2002
Quote:
Originally Posted by intention
Blake can correct me if I am wrong, but I believe he is saying that out of the hundreds of climate variables (temperature, humidity, tropospheric lapse rates, etc.), scientists have self-selected one of them.
Can you tell me how a scientist could "other-select" something? The way you and Blake are using "self" makes absolutely no sense.
Quote:
Type I error is a "false positive", meaning that we falsely think that something is significant when actually it occurs by chance.
Thanks for the didactics. Perhaps I didn't make clear that I'm quite familiar with this.
Quote:
This type of error is very relevant to climate science. The null hypothesis is that a given change in climate records occurs by chance, and a false positive means that we assume that the change is actually due to some external factor, oh, say, change in CO2 ...

The chance for false positives in climate studies is greatly increased because of the general autocorrelation of climate records. To quote from here:

Note that many of these problems involve Type I errors, and that they can be very large if autocorrelation is large. Temperature datasets typically have an alpha (lag 1 correlation) of about 0.8, leading to a huge chance of Type I errors.
That's only true if you are using incorrect analytic procedures, and why would you? Is there some reason why researchers wouldn't use techniques to account for correlated observations?
Quote:
Again Blake can correct me if I am wrong, but I believe he is talking about a 95% confidence interval (p<0.05).
Sure, except that he didn't say that. Which is one of the things I asked for clarification of.
Quote:
This is an arbitrary level. It means that the odds of the findings occurring due to random fluctuations in the data (false positive, or Type I error) are less than one in twenty (5%).
Okay, now it isn't clear if this is being didactic or pedantic. This is irrelevant to the "95% correlation" that Blake brought up.
Reply With Quote
  #77  
Old 09-21-2007, 02:20 AM
intention intention is offline
Guest
 
Join Date: Feb 2006
Quote:
Originally Posted by Hentor the Barbarian
Can you tell me how a scientist could "other-select" something? The way you and Blake are using "self" makes absolutely no sense.
Sure, Hentor, I'd be glad to tell you.

Good scientists use something called "ex ante" criteria. This means that you select some criteria for the phenomena or items of interest, and then you look at the particular instances that fit those criteria. "Ex ante" means that you select the criteria first. An example might make things clearer.

Out of the hundreds of ways to examine the climate, scientists are mainly looking at the reasons for the recent (20 - 30 year) rise in temperatures. That is self-selection.

On the other hand, your ex ante criteria could be that you are interested in any 30 year temperature warming trends in the HadCRUT3 1850-2005 dataset that are statistically warmer than the rest of the historical trends ... except, oops, the current 30 year trend fails that test.

Or you could look at the 20 year trends using the same criteria ... except the recent trend fails that test too. Both the recent 20 year and 30 year trends are not statistically different from the corresponding trends leading up to the 1940's peak in warmth. A scientist wiser than I once said "Before we waste too much time trying to explain the nature of a phenomenon we should first confirm that the phenomenon exists." Using ex ante criteria, rather than self-selecting the phenomenon, helps us to avoid that error. Before we run off to find an explanation for the recent warming, it lets us know that the recent warming trend is not statistically remarkable in any way.
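For concreteness, here is how such an ex ante comparison might be set up, using a synthetic anomaly series rather than the actual HadCRUT3 data, so it illustrates the procedure only and says nothing about what the real record shows:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic annual "anomaly" series standing in for a real record.
years = np.arange(1850, 2006)
anom = 0.004 * (years - 1850) + 0.15 * rng.standard_normal(years.size)

WINDOW = 30
# Ex ante rule, fixed before looking: compute the OLS trend of EVERY
# 30-year window the same way, then see where the most recent one ranks.
trends = np.array([
    np.polyfit(np.arange(WINDOW), anom[i:i + WINDOW], 1)[0]
    for i in range(years.size - WINDOW + 1)
])

recent = trends[-1]
rank = (trends < recent).mean()
print(f"recent trend: {recent:.4f} deg/yr, percentile: {rank:.0%}")
```

The point of the rule is that the recent window gets no special treatment: it is judged against the full population of windows defined in advance, not singled out because it caught our eye. (Note that the serial-correlation issues discussed above apply to these window trends as well.)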

Here's a second example. Michael Mann selected a number of tree ring and other proxies for his famous "Hockeystick". However, he left out a number of other equally valid proxies. Why? Because rather than using "ex ante" criteria for proxy selection, he simply self-selected them. His chosen proxies made a "hockeystick" ... and the other proxies didn't. Coincidence? You be the judge. But whether the selection was accidental or deliberate, using "ex ante" criteria in place of self-selection avoids this kind of bias.

Quote:
Originally Posted by Hentor the Barbarian
Quote:
Originally Posted by intention
Type I error is a "false positive", meaning that we falsely think that something is significant when actually it occurs by chance.
Thanks for the didactics. Perhaps I didn't make clear that I'm quite familiar with this.
I figured you were familiar with this, Hentor, but we're fighting ignorance here, and I'm also sure that not everyone following the thread is familiar with this. I prefer to keep my explanations as general as possible. Statistics is a daunting subject, so I try to explain it as I go along so even non-mathematicians can follow the discussion.

It didn't seem that you were familiar with why this is an issue in climate science, though ... which is why I went on to give you the example that you haven't commented on. This was not a theoretical example, it brought up issues that have been mis-handled in a number of tree-ring paleoclimate studies, from the Hockeystick right up to the present.

Quote:
Originally Posted by Hentor the Barbarian
Quote:
Originally Posted by intention
This type of error is very relevant to climate science. The null hypothesis is that a given change in climate records occurs by chance, and a false positive means that we assume that the change is actually due to some external factor, oh, say, change in CO2 ...

The chance for false positives in climate studies is greatly increased because of the general autocorrelation of climate records. To quote from here:

Note that many of these problems involve Type I errors, and that they can be very large if autocorrelation is large. Temperature datasets typically have an alpha (lag 1 correlation) of about 0.8, leading to a huge chance of Type I errors.
If you are using the incorrect analytic procedures, but why would you? Is there some reason why researchers wouldn't use techniques to account for correlated observations?
The main reason seems to be that climate scientists don't know much about statistics. You'd be amazed at the number of climate science papers which don't make any allowance for autocorrelation.

A second reason is that the exact statistical procedures to use in a given part of a fairly complex transformation, like tree rings to temperature, are not always clear or well defined.

The third reason is simple ignorance. Take a look here for a particularly egregious example of statistical nonsense, this one from the IPCC itself.

Quote:
Originally Posted by Hentor the Barbarian
... Sure, except that he didn't say that. Which is one of the things I asked for clarification of.
...
... Okay, now it isn't clear if this is being didactic or pedantic. This is irrelevant to the "95% correlation" that Blake brought up.
Guess I'll have to let Blake answer these last two questions, then.

All the best,

w.
Reply With Quote
  #78  
Old 09-21-2007, 06:43 AM
Hentor the Barbarian Hentor the Barbarian is offline
Guest
 
Join Date: Jun 2002
Quote:
Originally Posted by intention
Sure, Hentor, I'd be glad to tell you.

Good scientists use something called "ex ante" criteria. This means that you select some criteria for the phenomena or items of interest, and then you look at the particular instances that fit those criteria. "Ex ante" means that you select the criteria first. An example might make things clearer.

Out of the hundreds of ways to examine the climate, scientists are mainly looking at the reasons for the recent (20 - 30 year) rise in temperatures. That is self selection.
So you use the terms ex ante and self-selection interchangeably? How truly bizarre! Let me make things clearer for you. Self-selection refers to a process by which individuals might bring themselves to the attention of the researchers, meaning that there is something going on that must be accounted for or at least acknowledged by the researcher. Here is an explanatory Wikipedia link right back atcha. I'd recommend striving for more precision in your use of terms. It certainly can get a bit confusing.

It is especially confusing when you start suggesting that taking note of and studying some phenomenon is questionable because you "self-selected" it. To be sure, the potential problem of unmeasured explanatory variables is ever-present, but it would be far more erroneous to leave out a clearly related factor from the analysis than to include it.
  #79  
Old 09-21-2007, 07:00 AM
intention
Guest
 
Join Date: Feb 2006
Quote:
Originally Posted by Hentor the Barbarian
So you use the terms ex ante and self-selection interchangeably? How truly bizarre! Let me make things clearer for you. Self-selection refers to a process by which individuals might bring themselves to the attention of the researchers, meaning that there is something going on that must be accounted for or at least acknowledged by the researcher. Here is an explanatory Wikipedia link right back atcha. I'd recommend striving for more precision in your use of terms. It certainly can get a bit confusing.

It is especially confusing when you start suggesting that taking note of and studying some phenomenon is questionable because you "self-selected" it. To be sure, the potential problem of unmeasured explanatory variables is ever-present, but it would be far more erroneous to leave out a clearly related factor from the analysis than to include it.
Hentor, mea culpa, you are right about "self-selection", I was using the term incorrectly. You are 100% on target.

My related points about "ex ante" selection, while correct, do not bear on the question of self-selection. jshore was using the term incorrectly too, saying e.g. "Global temperatures weren't self-selected because they did something dramatic." Ah well, live and learn; the fight against ignorance continues on all fronts.

Having disposed of that question, perhaps you could comment on the other issue, where you said "Any constant transformation (regardless of how many you want to list) is not a statistical problem, so I don't see the concern." I have provided a variety of citations and examples showing that transformations involve major and very important statistical problems in climate science, and you have not responded.

Many thanks,

w.
  #80  
Old 09-23-2007, 01:31 PM
Hentor the Barbarian
Guest
 
Join Date: Jun 2002
Quote:
Originally Posted by intention
Having disposed of that question, perhaps you could comment on the other issue, where you said "Any constant transformation (regardless of how many you want to list) is not a statistical problem, so I don't see the concern." I have provided a variety of citations and examples showing that transformations involve major and very important statistical problems in climate science, and you have not responded.

Many thanks,

w.
Sorry not to have caught this question before. I don't understand your concern. Transformations are legitimate techniques for dealing with certain conditions in the data. Most typically, they are used to bring the distribution into a shape for which normal-theory statistical techniques apply. For inferential statistics, as long as the transformation does not jumble the relative positions of the observations, there is no concern about its legitimate application. Your parameter values (betas) will change, and your interpretation of them may change, since you may have changed the scale of the original data, but the statistical tests of significance will be the same.

Now, I don't know the climate research well enough to comment on that aspect of things. Do you have examples that clearly demonstrate people making transformations of the data that do shift the relative positions of the observations?
  #81  
Old 09-25-2007, 02:23 PM
intention
Guest
 
Join Date: Feb 2006
Quote:
Originally Posted by Hentor the Barbarian
Sorry not to have caught this question before. I don't understand your concern. Transformations are legitimate techniques for dealing with certain conditions in the data. Most typically, they are used to bring the distribution into a shape for which normal-theory statistical techniques apply. For inferential statistics, as long as the transformation does not jumble the relative positions of the observations, there is no concern about its legitimate application. Your parameter values (betas) will change, and your interpretation of them may change, since you may have changed the scale of the original data, but the statistical tests of significance will be the same.

Now, I don't know the climate research well enough to comment on that aspect of things. Do you have examples that clearly demonstrate people making transformations of the data that do shift the relative positions of the observations?
I gave an example before, I think you might have missed it. I said:

Quote:
Originally Posted by intention
Let's take a real example, the transformation of tree rings to global temperature. In general, the transformation procedure is:
a) Select some tree rings to use as proxies.

b) Compare the tree ring data to historical temperature data during part (the "correlation period") of the overlap period of the two datasets, and define a mathematical transformation of tree ring width to temperature.

c) Use the transformation to hindcast the temperature during the other unused part of the overlap period (the "verification period"), and determine if the transformation gives a statistically significant result.

d) Use the verified results to estimate the reconstructed temperature for historical periods for which we have no temperature records.

e) Estimate the errors in the historical period.
As you can see, there are large statistical issues in many parts of this, including:
a) How well does the reconstruction match the temperature during the verification period?

b) How well does the reconstruction need to match the verification temperature in order to be considered valid?

c) Does the tree ring reconstruction temperature need to match the local temperature as well as the global temperature to be considered valid, and if so, how well?

d) How many tree ring datasets are needed to define a global temperature?

e) What is the estimated correlation coefficient (R^2) for the historical reconstruction?

f) What are the error estimates for the various historical periods of the reconstruction?
So yes, transformations can involve very important statistical issues, considerations and questions.
That was the example I was asking you to explore.
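As a sketch of steps (b) and (c), consider this toy calibration/verification exercise. The data are synthetic, and the linear transformation and the 50/50 split are illustrative assumptions of mine, not any published method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "truth": 150 years of temperature; ring widths respond
# linearly to temperature, plus noise. All numbers are invented.
years = np.arange(1850, 2000)
temp = 0.005 * (years - 1850) + 0.2 * rng.standard_normal(years.size)
rings = 1.0 + 0.8 * temp + 0.1 * rng.standard_normal(years.size)

# (b) Calibrate ring width -> temperature on the first half of the overlap.
cal, ver = slice(0, 75), slice(75, 150)
slope, intercept = np.polyfit(rings[cal], temp[cal], 1)

# (c) Hindcast the held-out verification period and measure the fit.
recon = slope * rings[ver] + intercept
r = np.corrcoef(recon, temp[ver])[0, 1]
rmse = float(np.sqrt(np.mean((recon - temp[ver]) ** 2)))
print(f"verification r^2 = {r**2:.2f}, RMSE = {rmse:.2f} degC")
```

Questions (a) through (f) above are about exactly these two numbers: how large the verification r^2 must be, and how small the error, before step (d), extending the transformation back into the pre-instrumental past, is justified.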

w.
  #82  
Old 09-28-2007, 07:26 PM
wevets
Guest
 
Join Date: Mar 2000
Begging everyone's pardon for jumping in 80 posts into the thread, but there's something here I just don't understand.

Quote:
Originally Posted by intention
Let's take a real example, the transformation of tree rings to global temperature. In general, the transformation procedure is:

a) Select some tree rings to use as proxies.

b) Compare the tree ring data to historical temperature data during part (the "correlation period") of the overlap period of the two datasets, and define a mathematical transformation of tree ring width to temperature.

c) Use the transformation to hindcast the temperature during the other unused part of the overlap period (the "verification period"), and determine if the transformation gives a statistically significant result.

d) Use the verified results to estimate the reconstructed temperature for historical periods for which we have no temperature records.

e) Estimate the errors in the historical period.

What is the transformation here that you object to? You've provided a very vague example, with no actual numbers that we can examine.

When I think of a statistical transformation, I think of this type of example: I have data that are heteroscedastic, and therefore do not meet the required assumptions to perform a simple Model I ANOVA. I take the logarithm of both sides of the equation, and examining the logarithms, I see that the transformed variables are homoscedastic. Now I can perform my ANOVA.

If someone objects to my results and says my transformation was suspect, we can now talk about whether the logarithm was an appropriate transformation.
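A minimal version of that example, with invented lognormal groups whose spread grows with their mean (my numbers, purely for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Three groups whose spread grows with their mean: heteroscedastic raw data.
groups = [rng.lognormal(mean=m, sigma=0.5, size=200) for m in (1.0, 2.0, 3.0)]

raw_stds = [g.std() for g in groups]
log_stds = [np.log(g).std() for g in groups]
print("raw std ratio (max/min): %.1f" % (max(raw_stds) / min(raw_stds)))
print("log std ratio (max/min): %.1f" % (max(log_stds) / min(log_stds)))

# After the log transform the variances are comparable, so ANOVA is legitimate.
f_stat, p_value = stats.f_oneway(*[np.log(g) for g in groups])
print(f"ANOVA on logs: F = {f_stat:.1f}, p = {p_value:.3g}")
```

Note that the logarithm is monotonic: it never changes the relative ordering of the observations, only the scale and the variance structure.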


Your reply to Hentor's question:

Quote:
Originally Posted by Hentor the Barbarian
Do you have examples that clearly demonstrate people making transformations of the data that do shift the relative positions of the observations?

...In a rather non-linear fashion, not actually providing an example that shifts the relative position of the observations. In fact, an example without any actual observations, just a category of observations that might be made (tree rings.)

In short, could your example become a little more specific so we can examine the transformation with which you disagree?

i.e. What is the equivalent of the logarithm that you suspect?



It's also possible that I misunderstand you, and that you don't have a disagreement with a statistical transformation, but instead with this:

Quote:
Originally Posted by intention
b) Compare the tree ring data to historical temperature data during part (the "correlation period") of the overlap period of the two datasets, and define a mathematical transformation of tree ring width to temperature.
The relationship between tree rings and temperature is governed by physical factors in the real world, and so is not necessarily the same as the type of purely statistical transformation I gave an example of above. To avoid confusion, I'll call it the "proxy relationship" instead. Is this where you have a disagreement? If so, it'd be nice to know what specific proxy relationship between tree rings and temperature you object to.


You ask some very good questions, intention:

Quote:
Originally Posted by intention
a) How well does the reconstruction match the temperature during the verification period?

b) How well does the reconstruction need to match the verification temperature in order to be considered valid?

c) Does the tree ring reconstruction temperature need to match the local temperature as well as the global temperature to be considered valid, and if so, how well?

d) How many tree ring datasets are needed to define a global temperature?

e) What is the estimated correlation coefficient (R^2) for the historical reconstruction?

f) What are the error estimates for the various historical periods of the reconstruction?
And shouldn't we try to answer these questions rather than just asking them? We can do that much better if you'll let us know which paper you're talking about.
  #83  
Old 09-29-2007, 05:22 AM
intention
Guest
 
Join Date: Feb 2006
Quote:
Originally Posted by wevets
Begging everyone's pardon for jumping in 80 posts into the thread, but there's something here I just don't understand.




What is the transformation here that you object to? You've provided a very vague example, with no actual numbers that we can examine.

When I think of a statistical transformation, I think of this type of example: I have data that are heteroscedastic, and therefore do not meet the required assumptions to perform a simple Model I ANOVA. I take the logarithm of both sides of the equation, and examining the logarithms, I see that the transformed variables are homoscedastic. Now I can perform my ANOVA.

If someone objects to my results and says my transformation was suspect, we can now talk about whether the logarithm was an appropriate transformation.


Your reply to Hentor's question:




...In a rather non-linear fashion, not actually providing an example that shifts the relative position of the observations. In fact, an example without any actual observations, just a category of observations that might be made (tree rings.)

In short, could your example become a little more specific so we can examine the transformation with which you disagree?

i.e. What is the equivalent of the logarithm that you suspect?



It's also possible that I misunderstand you, and that you don't have a disagreement with a statistical transformation, but instead with this:



The relationship between tree rings and temperature is governed by physical factors in the real world, and so is not necessarily the same as the type of purely statistical transformation I gave an example of above. To avoid confusion, I'll call it the "proxy relationship" instead. Is this where you have a disagreement? If so, it'd be nice to know what specific proxy relationship between tree rings and temperature you object to.


You ask some very good questions, intention:



And shouldn't we try to answer these questions rather than just asking them? We can do that much better if you'll let us know which paper you're talking about.
wevets, you need no pardon for coming in whenever you arrive, you raise good points.

The reason we started discussing the question was the claim by Hentor the Barbarian that transformations don't involve statistics ... he said:

Quote:
Any constant transformation (regardless of how many you want to list) is not a statistical problem, so I don't see the concern.
For a specific example that illuminates many of the issues I am pointing to, we could take a look at the ill-fated "Hockeystick" paper of Mann, Bradley, and Hughes. There's a good discussion of one of the many statistical aspects of the Mann transformation, the calculation of the confidence intervals, located here.

It appears that I am using "transformation" in perhaps a more general sense than you are. Mann, for example, starts with a dataset that is comprised of a time-series of tree ring widths, and ends up with a dataset that he says represents a time-series of Northern Hemisphere temperatures. This is the type of transformation to which I am referring. Call it a "proxy relationship" if you wish, but the subject matter of the datasets is immaterial. One real-world dataset is transformed, through a variety of mathematical operations, into another real-world dataset. This could be satellite microwave strength data transformed into atmospheric temperature data, or French grape harvest dates transformed into summer temperature data. The subject matter is not the point, nor is the particular type of transformation used.

In Mann's case, it is a "principal components" (eigenvectors) method which is used to transform one dataset into another. Unfortunately, he made some errors in the process ... and he has not revealed his method for calculating the uncertainties in the process. But regardless of whether he did the math correctly (he didn't) or revealed his mystery method for calculating the uncertainty (he didn't), there are a wide variety of statistical questions involved in the whole procedure.
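For readers unfamiliar with the method, here is a generic centered principal-components sketch on an invented proxy network. This is not Mann's code; in particular, his disputed step was centering over a sub-period rather than the full period, which this sketch deliberately does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented proxy network: 50 series over 120 years sharing one common signal.
n_years, n_proxies = 120, 50
signal = 0.3 * np.cumsum(rng.standard_normal(n_years))
loadings = rng.uniform(0.5, 1.5, n_proxies)
proxies = signal[:, None] * loadings + rng.standard_normal((n_years, n_proxies))

# Center each series over the FULL period, then extract the leading pattern.
# (Centering over a sub-period instead is the disputed "short-centering" step.)
centered = proxies - proxies.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
pc1 = U[:, 0] * s[0]                      # leading temporal pattern
explained = s**2 / np.sum(s**2)
corr = abs(np.corrcoef(pc1, signal)[0, 1])
print(f"PC1 explains {explained[0]:.0%} of variance, |corr with signal| = {corr:.2f}")
```

Every choice here, how to center, how many components to retain, how to weight the proxies, is a statistical decision that changes the reconstruction, which is the point I am making.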

I have listed a number of statistical questions relating to the type of transformation done in Mann's paper, including:

Quote:
Originally Posted by intention
a) How well does the reconstruction match the temperature during the verification period?

b) How well does the reconstruction need to match the verification temperature in order to be considered valid?

c) Does the tree ring reconstruction temperature need to match the local temperature as well as the global temperature to be considered valid, and if so, how well?

d) How many tree ring datasets are needed to define a global temperature?

e) What is the estimated correlation coefficient (R^2) for the historical reconstruction?

f) What are the error estimates for the various historical periods of the reconstruction?
I was not looking for answers to these questions. I was merely pointing out that, contrary to Hentor's claim, there are a number of interesting and difficult statistical questions involved in the types of transformations used in climate science. Anyone who wants to take a hard look at the majority of climate science papers and studies has to be able to understand the statistics involved. Many of the current claims of climate science are based on studies which ignore even very basic statistical principles and concepts. As my father used to say, "Son, the large print giveth, and the small print taketh away." And in climate studies ... statistics are the small print.

All the best to you,

w.
  #84  
Old 10-07-2007, 05:07 AM
Blake
Member
 
Join Date: Mar 2001
Posts: 10,207
Quote:
Originally Posted by Hentor the Barbarian
A good deal of what you write in this post makes sense. Some of it doesn't. What do you mean by "self-selected"? Usually when we say that in the social sciences, we mean that there was some mechanism by which the respondents in the data set identified themselves.
In the physical sciences we mean essentially the same thing: the data have drawn themselves to our attention rather than having been collected post hoc.

Quote:
How is it that our global temperatures have distinguished themselves from other global temperatures?
By the mechanism I outlined above: we have only spent billions of dollars investigating possible correlations with temperature because the temperature is noticeably increasing. Let me ask you this: if the global temperature had been nearly perfectly stable for the past 100 years, do you honestly believe we would have spent the same amount of money investigating the correlation with atmospheric conditions, pollution, aerosols, ENSO and so forth?

To me the answer to that seems self evident: of course we wouldn't have. Just as we have spent very little investigating the correlation of those factors with the sex ratios of crocodiles or the incidence of fire on the Kalahari. Properties that remain stable aren't intensely investigated.

And that is where the problem comes in. How many times have you read of a "disease cluster" associated with some elementary school, or small town or profession? People then go out of their way to investigate the cause, and almost always it turns out to be nothing and vanishes within a generation. That's because it's a self-selected sample. The only reason anyone investigated the causes of the disease was because a lot of people in one place got the disease.

But the problem is that disease is random, and most people don't understand what random means. Random doesn't mean evenly distributed; it means unpredictable. Often random events will be evenly distributed, but occasionally they will also form distinct clusters and trends. The problem is that we notice the clusters and trends; they draw themselves to our attention; they self-select.

The same applies to global temperature. Someone didn't set out one day to do a full analysis of global temperature with all the dozens of factors that are needed to make the current AGW models work. Instead, someone noticed temperature was rising and then set out to find out why. But as soon as they did that, they introduced a massive potential flaw into any future science: they were using a self-selected sample and thus had increased their sample space enormously.

With a >90% correlation, if we look at 10 different factors then CO2 will always correlate with one of them. That is basic statistics. In this case the correlation is with global temperature. But what about the other possible factors? They are our sample space.

For example, what if global temperatures had been stable when all this kicked off in the 1980s, but the number of male crocodiles being born had been increasing dramatically? Or if the incidence of fires on the Kalahari had been declining dramatically? And so on and so forth for another 10 possible factors that could be attributed to CO2 levels. We can be absolutely sure that one of those factors would have been correlated with CO2 levels to a 90% level.

And that is how our global temperatures are different from other possible global temperatures: they are different precisely because they were increasing dramatically in the mid 1980s. They drew themselves to our attention. If they had not been changing, then some other factor would have been just as strongly correlated with CO2 levels.
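To make that concrete, here is a toy simulation (my own invented setup, not anyone's published data): correlate a steadily rising series with ten unrelated random walks and see how often at least one of them correlates strongly.

```python
import numpy as np

rng = np.random.default_rng(42)

def max_abs_corr_with_trend(n_factors=10, n=120):
    """Correlate a steadily rising series with unrelated random walks and
    return the strongest correlation found among the candidate factors."""
    trend = np.arange(n, dtype=float)
    best = 0.0
    for _ in range(n_factors):
        walk = np.cumsum(rng.standard_normal(n))
        best = max(best, abs(np.corrcoef(trend, walk)[0, 1]))
    return best

trials = 300
hits = sum(max_abs_corr_with_trend() > 0.8 for _ in range(trials))
print(f"max |r| exceeded 0.8 in {hits / trials:.0%} of trials")
```

The strong correlations are real numbers but meaningless: random walks form trends by accident, which is exactly the sample-space problem.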


Quote:
If they've gotten hotter, there must be some mechanism, but to say that they are self-selected suggests some sort of agency on the part of temperature, and some host of other, contemporaneous, temperatures that they've distinguished themselves from (other than by simply changing over time).
It is the very fact that they were increasing rapidly that selected them. They have formed a trend, and the human mind is designed to notice trends and clusters. Once we noticed that trend we went out and looked for something to explain it, but at only 90% correlation we know that many trends had to correlate with rising CO2 levels. That doesn't mean that they are causative, because our sample space is far larger than 10 possible factors.


Quote:
This is why there are corrections to be employed for multiple comparisons. Even if you look at one comparison in one data set, adopting the alpha level of .05 means that you are accepting an error rate of 5 in 100. Put another way, you're saying that you'd expect to see differences of a given size in variance between groups arise simply by chance (rather than by the mechanism of the study) fewer than 5 times out of 100. That's why replication with different data sets is important.
Yes, but more importantly it is why it is absolutely vital to know your sample space.

Imagine I conduct a trial on plant growth under CO2 enrichment; then my sample space is 1: a single trial. I find that my plants produce more biomass than the last crop grown in that plot. In that case a 95% confidence level is acceptable, because the chance of having got that result by chance is tiny.

Now imagine that while driving I notice that a single garden plot in an industrial area is producing large plants. I collect data from that site and report a 90% confidence that the growth is caused by CO2 levels. Now is that result worthwhile? No, of course it isn't, because you have no idea what my sample space was. I could have driven past a million sites, each growing right next door, and ignored them because they were 'normal'.

Now do you understand why self-selected samples are statistically dodgy? They are dodgy because we can never know what our sample space was. What we saw could very easily coincide with the factors measured simply because we unintentionally sampled just one of millions of data points.

Quote:
Once you show signficant differences on a given factor in two different data sets, your confidence in the robustness of the relationship goes up.
Only if you incorporate all the datasets you examined, ie your sample space.

For example, if I notice plants growing larger in two CO2-enriched environments, but every day I drive past two thousand instances of no effect and three instances of shrinkage, then my confidence in the robustness of the relationship doesn't increase at all. And when I only start to collect data because I have seen a size increase, that is exactly what I have been doing.

Quote:
Yes. It's an observed phenomenon. That's irrelevant to the issue of multiple comparisons or the acceptance of a false positive rate of 5 times in 100.
No, it isn't irrelevant because it entirely defines our sample space. We only started looking for correlation after we knew of the phenomenon and, more importantly, because it existed.

Quote:
That's like saying that there is something suspect about studying what influences depression, or crime rates, or cancer deaths, just because your attention was drawn to them.
No, no no.

What it is like is noticing that the people in a certain school are suffering more cancer deaths and then going out looking for a cause for the increase based only on that school. If the cancer deaths in that school happen to correlate well with the CO2 levels, you then attribute the cancer to CO2.

Can you not see how statistically invalid that methodology is? What about the school right next door, where the CO2 levels are identical and people suffer no more cancer than the general population? What about the millions of other schools where there is no change in cancer rates?

With a self-selected sample you have to be very careful that you aren't trying to find a cause for something that is a statistical artefact. There is no cause for the increase in cancer in the school, so there is no point trying to explain it. If you restrict your study to that one school, I will guarantee that you will find multiple factors that correlate with the cancer increase with >90% confidence. That doesn't prove causation. It just proves that if you select an event because it forms a trend or cluster, then you will find numerous correlative factors at a 90% confidence level.


Quote:
What else is science about but trying to explain observed phenomena?
Religion and philosophy are also about explaining observed phenomena. To be called science, we need a few additions:


1) It is about replicability. In the case of AGW we have none.
2) It is about predictability. In the case of AGW we have none.
3) It is about statistical rigor. In the case of AGW we have massive problems here.
4) It is about logically valid argument. As soon as we start ignoring sample space, we have no logical argument.

I've said it before and I'll say it again, when someone uses AGW theory to make a prediction about the real world that couldn't be made to the same confidence by assuming a constant trend from 1860 then I will call it science. Until then it's just not science.
  #85  
Old 10-07-2007, 08:17 AM
jshore
Charter Member
 
Join Date: May 2000
Posts: 6,460
Blake, I guess you must have missed my post #71 or else you would not have repeated your ahistorical nonsense or at least would have responded to what I said.

Quote:
Originally Posted by Blake
I've said it before and I'll say it again, when someone uses AGW theory to make a prediction about the real world that couldn't be made to the same confidence by assuming a constant trend from 1860 then I will call it science. Until then it's just not science.
Again, as I noted, try James Hansen in the late 1980s. He made the prediction that the temperatures would continue rising at a time when many if not most scientists thought it was too early to claim that any temperature trend due to rising CO2 levels had actually emerged from the noise.
  #86  
Old 10-08-2007, 03:45 AM
intention
Guest
 
Join Date: Feb 2006
Quote:
Originally Posted by jshore
Again, as I noted, try James Hansen in the late 1980s. He made the prediction that the temperatures would continue rising at a time when many if not most scientists thought it was too early to claim that any temperature trend due to rising CO2 levels had actually emerged from the noise.
Oh, please. The temperature graph included with Hansen's 1988 prediction shows that the temperature bottomed out in 1964, and rose thereafter. This means that the global temperature had been rising for almost a quarter century when James Hansen made his oh-so-daring prediction that temperatures would continue to rise. What rate did he predict they would rise at? Well ... at the same rate that they had been rising for a quarter century ...

And if the "rising CO2 levels had actually emerged from the noise" since 1988, we'd have seen it in the record. This is not the case, the trend of the post 1988 rise is statistically indistinguishable from the trend of the 1915-1945 rise, despite a much greater increase in CO2 during the recent rise.

w.
  #87  
Old 10-08-2007, 07:07 AM
Hentor the Barbarian
Guest
 
Join Date: Jun 2002
Quote:
Originally Posted by Blake
In the physical sciences we mean essentially the same thing: the data have drawn themselves to our attention rather than having been collected post hoc.
I'm trying to puzzle my way through this claptrap. How do data draw themselves to our attention? What agency do they have? If you are collecting them post hoc, doesn't that mean that you have already attended to them? What is the hoc you are working post, here?
Quote:
Let me ask you this: if the global temperature had been nearly perfectly stable for the past 100 years, do you honestly believe we would have spent the same amount of money investigating the correlation with atmospheric conditions, pollution, aerosols, ENSO and so forth?
No, obviously not, because we prioritize based on perceived importance of the problem. But again, cancer receives a lot of attention, as well, and is a legitimate issue for study, right?
Quote:
Properties that remain stable aren't intensely investigated.
Only presuming that their stability does not threaten us. If levels of violence are high and stable, they are going to get a lot more attention than low but stable violence.
Quote:
But the problem is that disease is random, and most people don't understand what random means. Random doesn't mean evenly distributed; it means unpredictable. Often random events will be evenly distributed, but occasionally they will also form distinct clusters and trends. The problem is that we notice the clusters and trends; they draw themselves to our attention; they self-select.

The same applies to global temperature. Someone didn't set out one day to do a full analysis of global temperature with all the dozens of factors that are needed to make the current AGW models work. Instead, someone noticed temperature was rising and then set out to find out why. But as soon as they did that, they introduced a massive potential flaw into any future science: they were using a self-selected sample and thus had increased their sample space enormously.
You seem to be confusing generalizability from a restricted sample and "self-selection." Sure, if your sample is not representative of the entire population, then you cannot generalize your findings. Which aspect of global temperatures do you think is not representative of the entire population? Your mangled application of the term "self-selection" still is different from the issue you see as problematic, namely that of restricted samples.
Quote:
Now do you understand why self-selected samples are statistically dodgy? They are dodgy because we can never know what our sample space was. What we saw could very easily coincide with the factors measured simply because we unintentionally sampled just one of millions of data points.
I've always understood why both self-selected (in the typical meaning of the term) and restricted samples are dodgy. I still have no idea what you are on about vis a vis global temperatures.
Quote:
Only if you incorporate all the datasets you examined, ie your sample space.
No. Multiple comparisons will be problematic in each separate data set. As long as you are using inferential statistics and establishing an alpha value, you are saying "This is the level of false positives that I will accept." It doesn't change until you have the entire population measured. At that point, you don't need to use inferential statistics, because the stats would be the stats, with nothing to infer.
Quote:
No, it isn't irrelevant because it entirely defines our sample space. We only started looking for correlation after we knew of the phenomenon and, more importantly, because it existed.
This is just silly. We cannot know about phenomena that don't exist. Of course we study things that we know about, and of course we study those things we know about that might impact our lives dramatically.

In summary: Self-selection does not mean "selected by the researcher." It means that the individuals selected themselves into the study sample.

Restricted samples are those which do not represent the population at large. This may be because of a self-selection process, or for other reasons.

It is perfectly legitimate to study something that has come to your attention, and in fact it makes zero sense to study things that you don't know exist. You study things you know about in samples that as best as possible reflect the population. You look for replication across samples to feel more confident about the robustness of the findings.

You use inferential statistics to make estimations about the relationships you see within the sample extended out to whatever the larger population is. You accept a certain error rate, and adjust if you are making multiple comparisons that would exceed that rate.
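The restricted-sample point above can be shown with a toy simulation of my own (the distributions and cutoff are invented for illustration, not drawn from the thread): within-sample statistics look fine either way, but a sample admitted by a rule correlated with the measured value misestimates the population.

```python
# Compare an unrestricted random sample against a restricted one
# (only values above a cutoff get in, standing in for any
# self-selection / restriction mechanism).
import random

random.seed(0)
population = [random.gauss(0.0, 1.0) for _ in range(100_000)]
pop_mean = sum(population) / len(population)

random_sample = random.sample(population, 1_000)
restricted_sample = [x for x in population if x > -0.5][:1_000]

print(round(sum(random_sample) / 1_000 - pop_mean, 2))      # close to 0
print(round(sum(restricted_sample) / 1_000 - pop_mean, 2))  # biased well above 0
```

Nothing inside the restricted sample flags the problem; only knowledge of how the sample was formed (or replication in better samples) reveals the bias, which is the point about representativeness made above.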
  #88  
Old 10-08-2007, 07:32 PM
jshore jshore is offline
Charter Member
 
Join Date: May 2000
Posts: 6,460
Quote:
Originally Posted by intention
Oh, please. The temperature graph included with Hansen's 1988 prediction shows that the temperature bottomed out in 1964, and rose thereafter. This means that the global temperature had been rising for almost a quarter century when James Hansen made his oh-so-daring prediction that temperatures would continue to rise. What rate did he predict they would rise at? Well ... at the same rate that they had been rising for a quarter century ...
Well, that is a rate significant enough that, if maintained over a period of time, it causes a reasonably significant rise in temperature...and it was certainly by no means obvious to everybody that the temperature would continue to rise. With many of the people who believe it is all the sun arguing that we are in for a cooling due to the solar cycles, it will be interesting to see what they say 20 years from now assuming that the rise has continued.

Quote:
And if the "rising CO2 levels had actually emerged from the noise" since 1988, we'd have seen it in the record. This is not the case, the trend of the post 1988 rise is statistically indistinguishable from the trend of the 1915-1945 rise, despite a much greater increase in CO2 during the recent rise.
Well, as I discussed in the other current thread on global warming, it is not just the trend but the pattern of the warming and what natural forcings are or are not occurring that are important. The early century warming can be explained by known natural forcings (along with some small contribution from greenhouse gases).
  #89  
Old 10-18-2007, 01:10 AM
intention intention is offline
Guest
 
Join Date: Feb 2006
jshore, thanks for your post. You say

Quote:
Originally Posted by jshore
Well, as I discussed in the other current thread on global warming, it is not just the trend but the pattern of the warming and what natural forcings are or are not occurring that are important. The early century warming can be explained by known natural forcings (along with some small contribution from greenhouse gases).
For a discussion of "what natural forcings are or are not occurring", and how well they are modeled by the GCMs, see The Fine Art of Fitting Elephants.

And for a discussion of a natural forcing not included in the climate models, see here.

My best to everyone,

w.
  #90  
Old 10-18-2007, 07:35 PM
intention intention is offline
Guest
 
Join Date: Feb 2006
And for a discussion of a natural feedback which is neglected by the climate models, is much larger than CO2, and has a correlation of 0.63 with the 1983-2003 temperature, see here. The idea that CO2 is required to explain the earth's post-1980 temperature history is an inconvenient lie.

w.
  #91  
Old 10-18-2007, 09:36 PM
jshore jshore is offline
Charter Member
 
Join Date: May 2000
Posts: 6,460
Quote:
Originally Posted by intention
For a discussion of "what natural forcings are or are not occurring", and how well they are modeled by the GCMs, see The Fine Art of Fitting Elephants.

And for a discussion of a natural forcing not included in the climate models, see here.
intention, thanks for your post. There are various problems with the cosmic ray hypothesis but, regardless of whether or not cosmic rays have some influence in general, the biggest problem seems to be that they do not show any significant overall trend in recent decades! Shaviv claims otherwise in the comments section on that post about cosmic rays, but if you look at his Fig. 3, I don't see how you come up with any significant trend...just some oscillations. (I think there have been other issues identified with the supposed correlation between low clouds and cosmic rays shown there, but I don't remember what they are.) Strangely, Shaviv's claim that there is a trend is based on a vague statement about there having been a trend in solar activity over the whole 20th century, made in a paper that completely disagrees with him on this being a plausible explanation of the late 20th century warming.

If it is so easy to tune a climate model to give the temperature record (as you have claimed in the past), then some of these solar / cosmic ray hypothesis folks ought to be able to do this to get good agreement with the instrumental temperature record over its entire ~150 year history using only natural forcings. Surely they must be able to gain access to a model...I think there are even some publicly available.

At some point, the folks proposing alternative hypotheses actually have to start showing in detail how those hypotheses compare to the available data. It is strange that the standard seems to be that if I can cobble together any sort of plausible hypothesis, then we are all supposed to "stop the presses" and can no longer accept the dominant theory, which has lots and lots of supporting evidence.
  #92  
Old 10-18-2007, 10:05 PM
jshore jshore is offline
Charter Member
 
Join Date: May 2000
Posts: 6,460
Quote:
Originally Posted by intention
And for a discussion of a natural feedback which is neglected by the climate models, is much larger than CO2, and has a correlation of 0.63 with the 1983-2003 temperature, see here. The idea that CO2 is required to explain the earth's post-1980 temperature history is an inconvenient lie.
See here for some discussion of this albedo stuff. And, is a 0.63 correlation over a 20-year period really that impressive? I hardly think that the graph shown in your link looks very much like an (upside-down) plot of the global temperatures over the same time period.

By the way, you have chided me before for looking at what you (in that case incorrectly) called a "press release" on a scientific article. In the link you provided, you have to click a link within it just to get as far as a press release on the paper! Here is the full paper; it is interesting that, while noting that the origin and effects of these changes are something we need to understand better, the authors don't make any claims that their observations are in conflict with the AGW hypothesis. Here, for example, is the first paragraph of their paper:

Quote:
The increase in global mean surface temperature over the past 50 years is believed to result in large part from the anthropogenic intensification of the atmospheric greenhouse effect. Although greenhouse gases modulate Earth's long-wave [infrared (IR)] emission, Earth's climate also depends on the net absorbed shortwave (SW) solar energy as quantified by the solar constant and Earth's albedo (the fraction of sunlight reflected back into space). The 0.1% variation of the solar constant over the 11-year solar cycle is widely regarded as insufficient to account for the present warming (1). However, it is not unreasonable to expect that changes in Earth's climate would induce changes in Earth's albedo and vice versa. Earth's surface, atmospheric aerosols, and clouds all reflect some of the incoming solar radiation, preventing that energy from warming the planet. Any changes in these components would likely change the albedo but would also induce climate feedbacks that complicate efforts to precisely assess causes and effects (2). The past two decades have seen some efforts to measure and monitor Earth's albedo from space with successful missions such as the Earth Radiation Budget Experiment (ERBE) or the Clouds and the Earth's Radiant Energy System (CERES) mission (3). However, a continuous long-term record of Earth's albedo is difficult to obtain because of the complex intercalibration of the various satellite data and the long temporal gaps in the series.

Last edited by jshore; 10-18-2007 at 10:08 PM..
  #93  
Old 10-21-2007, 11:44 PM
intention intention is offline
Guest
 
Join Date: Feb 2006
Quote:
Originally Posted by jshore
See here for some discussion of this abedo stuff. And, is a 0.63 correlation over a 20-year period really that impressive? I hardly think that the graph shown in your link looks very much like an (upside down) plot of the global temperatures over the same time period.
Is a 0.63 correlation over a 20-year period "impressive"? Correlations in the world of climate don't generally run all that high. Michael Mann's hockey stick is based on tree rings with correlations far worse than 0.6-0.7. The comparable correlation of CO2 with temperature over the same period is about the same, 0.60, but somehow that seems to impress you a lot. Why is the correlation of temperature with CO2 convincing to you, but not the correlation of temperature with albedo?
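For what it's worth, the bare arithmetic on those two correlations is easy to check. This is my own back-of-envelope sketch (not from either poster), assuming roughly one independent data point per year (n ≈ 20) and ignoring autocorrelation, which would shrink the effective sample size and weaken both results:

```python
# t statistic for testing whether a Pearson correlation r differs
# from zero, given n paired observations (df = n - 2).
import math

def t_stat(r: float, n: int) -> float:
    """t = r * sqrt(n - 2) / sqrt(1 - r^2)."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

print(round(t_stat(0.63, 20), 2))  # ~3.44, vs a ~2.10 critical value at df = 18
print(round(t_stat(0.60, 20), 2))  # CO2's r = 0.60 gives a similar ~3.18
```

On these assumptions both correlations clear the nominal significance threshold by a similar margin; the real argument is over how many effectively independent observations 20 years of serially correlated annual data actually contain.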

Your thesis has been "CO2 explains the recent warming, and the sceptics have no other hypothesis to explain it". I provide a hypothesis (albedo changes) and you say it doesn't explain it well enough ... you're grasping at straws. The reality is that nothing explains climate very well, that's why the discussion continues.

Quote:
Originally Posted by jshore
By the way, you have chided me before for looking at what you (in that case incorrectly) called a "press release" on a scientific article. In the link you provided, you have to click on a link within just to get as close as a press release on the paper! Here is the full paper; it is interesting that, while noting that these changes are something we need to better understand the origin and the effects of, they don't make any claims that their observations are in conflict with the AGW hypothesis. Here, for example is the first paragraph of their paper:

Quote:
The increase in global mean surface temperature over the past 50 years is believed to result in large part from the anthropogenic intensification of the atmospheric greenhouse effect. Although greenhouse gases modulate Earth's long-wave [infrared (IR)] emission, Earth's climate also depends on the net absorbed shortwave (SW) solar energy as quantified by the solar constant and Earth's albedo (the fraction of sunlight reflected back into space). The 0.1% variation of the solar constant over the 11-year solar cycle is widely regarded as insufficient to account for the present warming (1). However, it is not unreasonable to expect that changes in Earth's climate would induce changes in Earth's albedo and vice versa. Earth's surface, atmospheric aerosols, and clouds all reflect some of the incoming solar radiation, preventing that energy from warming the planet. Any changes in these components would likely change the albedo but would also induce climate feedbacks that complicate efforts to precisely assess causes and effects (2). The past two decades have seen some efforts to measure and monitor Earth's albedo from space with successful missions such as the Earth Radiation Budget Experiment (ERBE) or the Clouds and the Earth's Radiant Energy System (CERES) mission (3). However, a continuous long-term record of Earth's albedo is difficult to obtain because of the complex intercalibration of the various satellite data and the long temporal gaps in the series.
I didn't link to the full paper because it requires a subscription ... my bad. I should have linked to both the article and the subscription-requiring original paper.

Are their observations "in conflict" with the AGW hypothesis? I haven't a clue, because I don't know what the "AGW hypothesis" is. If the AGW hypothesis is that humans cause all of the global warming through GHG increase, yes, it is in conflict. If the hypothesis is that humans cause all of the global warming through human-made changes in the albedo, it's not in conflict at all.

This is part of the reason why so much uncertainty surrounds the subject ... people keep talking about the "AGW hypothesis" as though it were a known claim, as if it were a standard, falsifiable hypothesis of the kind we are used to seeing in science.

Instead, it changes with each iteration, and seems to be something on the order of "people caused some unknown amount of the warming of the last fifty years" ... hard to be "in conflict" with that, it's mush.

Perhaps you could fight our ignorance here and spell it out for us.

1) Exactly what did you mean by the term "AGW hypothesis" above?

2) Exactly what did the authors of the paper you cited mean by that term?, and

3) Exactly how are their findings about albedo not "in conflict with the AGW hypothesis"?

Many thanks,

w.