By following the link in the AP story to the University of Hawaii’s Mora Labs site, I found a link to the article on Nature magazine’s website, “The projected timing of climate departure from recent variability.” It also appears in the 10 October 2013 issue of Nature.
It sounds pretty scary, to be honest. Right now, here in Las Vegas, we get upwards of 46°C (115°F) in the summers for days at a time (I think we had 9 days in a row plus a few scattered days this past summer). The thought of it being just a degree or two hotter is enough to make me wince, and I like it hot.
From my own research on the authors and their paper, they certainly seem to have the right credentials to be making this report, and it looks to me as if it was peer-reviewed. Is there anyone reputable who is debunking it?
And if it is accurate, what are the implications?
Will it have any repercussions on industry, government, trade, etc.? Should it?
Will there be a loud cry of “bullshit!” from those already denying GCC, or will this cause them to reassess their positions? Should it?
As always, I look forward to reading y’all’s thoughts on this.
The article itself is here (dunno if there are firewall issues).
Speaking as a nonspecialist, AFAICT there is nothing particularly new in these results. The data and the climate models appear to be from standard existing sources.
It’s an interesting approach to attempting to quantify the near- to mid-term impacts of climate change, but of course a press release or news article is going to make everything sound more dramatic or groundbreaking than it is. For instance, singling out one particular year for each city to enter a historically unprecedented climate state is not really meaningful, given that each prediction has an uncertainty of several years. I got twenty bucks says that Singaporeans in 2028, for example, will not suddenly start experiencing their climate significantly differently from the way they did in 2026 and 2027.
This study seems to be looking at an arbitrary marker (“in what year will all future years fall outside the bounds of historical climate variability”) to characterize a trend that is actually chaotic and fluctuating in the short term, although with a distinct upward tendency in the medium and long term. It may make it easier to intuitively understand the effects of rising temperatures, but AFAICT it doesn’t change our basic scientific understanding of any part of the climate change process.
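To make that marker concrete, here is a toy sketch in Python (with entirely made-up numbers; this is not the paper’s data or method): generate a noisy baseline period, add a warming trend on top of the same noise, and find the first year after which every later year exceeds the historical maximum.

```python
import random

random.seed(0)

# Toy annual mean temperatures. All numbers are invented for illustration.
baseline = [15.0 + random.gauss(0, 0.3) for _ in range(100)]           # "years 1900-1999"
future = [15.0 + 0.03 * i + random.gauss(0, 0.3) for i in range(100)]  # "years 2000-2099"

historical_max = max(baseline)

def departure_year(series, start_year, threshold):
    """First year such that every later year in the series exceeds the
    historical maximum -- the 'climate departure' marker described above."""
    for i in range(len(series)):
        if all(t > threshold for t in series[i:]):
            return start_year + i
    return None  # never departs within the series

print(departure_year(future, 2000, historical_max))
```

Re-running with a different seed will generally shift the reported year by several years, which is exactly the uncertainty point made above about singling out one particular year.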
One has to remember that Nature also recently published a very scary paper suggesting that rising temperatures were getting close to triggering huge methane gas releases, which would make the situation worse.
Many of the recommended scientists and sites I turn to for information pointed at flaws in that methane study that led them, and many others, to conclude that the danger was not as bad as the report made it out to be. (Too bad for the narrative of the contrarian sites that still insist the scientists are alarmists; in reality, many reports that alarmist scientists ought to embrace are instead cut down to size by them.)
We’ll see in this case how reliable the study is once others more capable take a look at it. The difference here seems to be that this study relies on and expands previous research; taken together with many earlier papers, it points at the levels of confidence that scientists are reaching on the less controversial projections for the future.
This is the part that is reassuring, and helps me accept the standard scientific consensus: there really are debates – serious and meaningful disagreements among scientists. They’re simply debates over refined details of the primary concept. They don’t argue over climate change: they argue over minor apportionment of various physical causes, and over small differences in the measured rate of increase.
As you note, if it were all a fraud from the ground up, they’d make up a single story and everyone would stick to it.
I read the article and saw no indication that the computer model has actually been tested in the manner I described.
Anyway, my reaction is that you shouldn’t put much stock in a prediction about the behavior of a complex system if that prediction is based on a computer model that you do not know has actually been tested in the manner I described.
I take it you are declining to provide a definition for “GCC”?
So can I take it that “GCC” includes natural events such as the Little Ice Age or the Medieval Warm Period or the most recent Ice Age?
And that GCC includes changes which have nothing to do with emissions, such as possible warming due to changes in land use?
And that GCC includes changes which are measurable but unlikely to cause significant harm, such as the modest CO2-caused warming mentioned by Richard Lindzen?
Tested in the manner I described? Or tested some other way, such as by comparing the model to history?
Are there climate scientists who consider it “testing” to check whether the model accurately models the known past?
I would have thought that accurate representation of what we already know happened is a necessary but not sufficient condition for acceptance as a climate model.
ISTM, there’s a difference between checking that the model accurately reflects the things on which the model is based, and checking that the model accurately predicts past events on which the model was not based.
Obviously, any model should accurately reflect the past inputs, and presumably there are a near infinite number of models that would do so. So if I say that in three out of four presidential elections the candidate born at a further-south latitude wins, the one thing we should know for sure is that the model accords with all known birth places.
But “hindcasting” could be confirmatory evidence to the extent it was unknown or not used as an input in crafting the model in the first place. Taking the example above, if we then learned about the birthplace of 20 additional former Presidents previously obscured in the mists of time, and they corresponded to the model, then I think this would count as confirmatory evidence despite it not being a future prediction. No?
So I guess my question is which category “hindcasting” testing of these models falls into. Anyone know?
In theory, you are right. But there are two problems. First is the problem of fudging/cheating. Everyone has access to past climate data . . . how do you stop people from subconsciously or consciously tweaking their models to fit all of the data? How do you stop them from quietly discarding the models which failed to fit all of the data?
Perhaps even worse, there is the problem of selection. Any model which does not pass the hindcasting test will, at some stage in the process, be discarded. According to chronos, such a model will apparently not be published. So regardless of whether they are correct or not, any model which gets attention will of course pass the hindcasting test. So the fact that it passed the test does not mean much.
To use your example of presidents, imagine if 10,000 different hypotheses a year were created about which president wins. Even if all of the different hypothesizers honestly base their hypotheses on only part of the data, you can bet that a few of the hypotheses – no matter how spurious – will do a pretty good job at hindcasting the rest of the data.
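To put a number on that intuition, here is a quick simulation (Python; every parameter is invented for illustration): score 10,000 “models” that are literally coin-flips against a 20-event historical record and count how many hindcast it at 75% accuracy or better.

```python
import random

random.seed(42)

N_MODELS = 10_000   # spurious hypotheses, per the example above
N_EVENTS = 20       # historical events to "hindcast"
PASS_RATE = 0.75    # accuracy needed to look impressive

# The "true" historical record: 20 coin-flip outcomes.
history = [random.random() < 0.5 for _ in range(N_EVENTS)]

def hindcast_accuracy(history):
    """A 'model' that is literally random guessing, scored against history."""
    guesses = [random.random() < 0.5 for _ in range(len(history))]
    return sum(g == h for g, h in zip(guesses, history)) / len(history)

passing = sum(hindcast_accuracy(history) >= PASS_RATE for _ in range(N_MODELS))
print(f"{passing} of {N_MODELS} pure-chance models hindcast at >= {PASS_RATE:.0%}")
```

With a 20-event record and fair coins, roughly 2% of pure-chance models clear the 75% bar, so out of 10,000 you get a couple hundred that look impressive on hindcasting alone.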
I had thought that we were getting previously unknown data about historical climate pretty much every year from new ice samples, new geological methods, etc. It seems to me that if this stuff fits the model, then it is genuinely confirmatory (inferentially, of course, and therefore marginally rather than deductively and conclusively).
Well, there’s selection and there’s selection. Obviously, on some level, science is precisely what you describe–the selection of models based on which models best represent the empirical data we have. With sufficient data and sufficient complexity, it actually does take a lot of random models before you get one that models the entire data set (not by design, see above) but isn’t actually reflective of any true principle or underlying trend.
The additional principle you’re looking for, beyond correlation, is a plausible account of causation. So it seems to me that if you end up with both a strong correlation to data not used to shape the model and a plausible account of why the math you’re using reflects some real physical process in the world, that that’s pretty good science.
So if the models we have are the ones that have survived this kind of hindcasting of previously unknown (or genuinely unincorporated) data, and do so robustly enough that the odds of passing by chance are quite low, and have plausible causation principles, then I’m not sure I see the systemic problem with relying on those models.
As far as I know, the mainstream view of the last 100 years – in terms of climate – does not change much from year to year.
It depends on how much tolerance for error you have. If you have 100 random climate models making plausible predictions, I would guess that at least a few will seem like they did a reasonably good job.
As they say, “if” is a mighty big word to be so small. But if someone showed me a model which (1) matched historical data which is known to be accurate but could not have been known to the modeler; and (2) matched the data well enough that it is very unlikely that the modeler was simply lucky, then I would take it seriously despite my earlier statements about prediction.