One limitation of science that I try to keep in mind is that the standard is, I believe, one standard deviation, which translates to saying that a study is valid with 95% confidence.
Or, put another way, of every twenty correctly constructed studies you read about in the news, one of them is mere coincidence. Not because of malice or incompetence on anyone’s part, but because that is the way it works.
Also, that the messenger often has an agenda of his own. It is remarkable how many startling studies come out during sweeps week in my local news market. IYSWIM.
Of course, the flip side of that is that not every true thing can be cited.
It depends; what’s their track record? How relevant are their credentials? How much evidence is there, and how good is it? Do they have a known or plausible agenda to lie?
And if you are wrong? Your “standard” sounds suspiciously like short-sightedness pretending to be skepticism; an attempt to shovel any sacrifice onto the next generation (and in the process make it much larger). It’s not like there’s any particular connection between self-indulgence and truth, or scale and truth.
This seriously looks like a bad thing until you compare it with the rates of demonstrable success or practical application for alternative methods of drawing conclusions or constructing theories.
First, I think the source of the problem has to be considered before accepting the experts’ conclusions. Flight, for example, is a relatively easy problem to solve: we’ve been flying for almost 100 years, we understand the principles and mechanics of lift, we can build a variety of different vehicles that overcome the challenges of flight, our government rigorously tests their quality, etc. Think of the leaps and bounds achieved in flight, where we went from first powered flight to the moon in less than 75 years.
Climate change and AGW are based on the far more complex problem of the planetary weather system. By contrast, the best predictive abilities we have with regard to weather haven’t improved a whole lot over the statistical analysis of previous history (a la the Farmer’s Almanac). With an increase in technology equal to that which landed us on the moon, we still only have a pretty good idea of weather patterns in the coming 24-48 hours (and even those can still sometimes change in ways we don’t/can’t anticipate). That’s okay; I understand a little bit of the complexity involved in the problem, I know it’s not understood nearly as completely as simple Newtonian physics, so I can accept a little bit of uncertainty with my information. But since we don’t have full knowledge of the problem, we’re mostly limited to hoping to understand the majority of the inputs, measuring the outcome, and scratching our heads trying to connect the dots. People who claim to ‘understand it all’, no matter the side, tend to sound like they can tell you the weather in Seattle on June 8th, 2016.
It’s obvious the planet is warming rapidly, and a wide variety of empirical evidence supports this, but how much is actually caused by human activity: 100%? 70%? 40%? And what factors are contributing the most heavily?
Well, this isn’t entirely true. You’re quite right that a common standard minimum for declaring an effect statistically significant is indeed the 95% level, but that isn’t to say that all studies’ outcomes have only this significance. A good study should quote its p-value, i.e. the probability of obtaining a result at least that extreme under the assumption that it is the product of chance alone. You frequently get much, much more significant results than p = 0.05.
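To make that concrete, here’s a minimal Python sketch (the data and effect size are entirely made up, not from any real study) of how a p-value gets computed, and how easily it lands far below 0.05 when a genuine effect is present:

```python
# Simulated data only: a made-up "study" where a real effect exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.8, scale=1.0, size=100)  # true mean shifted by 0.8

# One-sample t-test against the null hypothesis of zero mean.
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
# With an effect this size, p typically comes out many orders of
# magnitude below the bare 0.05 threshold.
```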
There are other things to take into account, though. For example, if you measure a whole bunch of outcomes and then look through them to see whether one shows a statistically significant effect, your significance is decreased. The more things you test, the more likely you are to get outliers. If you measure twenty things and then go looking for an effect with a p-value of 0.05 or less, odds are you’re going to find one just by chance.

A common example of this is in political polling, where large numbers of people are polled and pollsters then try to pick out subgroups and identify trends. Say I poll 1,000 Welsh people on a question of attitudes to immigration. I find that, on average, Welsh people are no more likely to dislike immigrants than anyone else. But I look closer and see that, say, Welsh schoolteachers are 20% more anti-immigration than the average. Is this a significant result? Probably not, because there are so many possible subgroups I could construct. I wouldn’t know until I took another poll.

The lesson is to make very sure you know what your hypothesis is before testing it, and not to let it arise from the same data you use to confirm it. This sort of error is surprisingly common in all sorts of fields.
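A quick simulation makes the subgroup problem vivid. This is my own sketch with invented numbers, not real polling data: every “subgroup” below is pure noise, and yet a “significant” result still turns up most of the time:

```python
# Twenty subgroup tests where the null hypothesis is true for all of them.
# How often does at least one come out "significant" at p < 0.05?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_polls, n_subgroups, n_people = 10_000, 20, 50

# Pure noise: no subgroup has any real attitude difference.
data = rng.normal(size=(n_polls, n_subgroups, n_people))
_, p = stats.ttest_1samp(data, popmean=0.0, axis=-1)  # p has shape (10000, 20)

frac = np.mean(p.min(axis=1) < 0.05)
print(f"Polls with at least one 'significant' subgroup: {frac:.1%}")
# Close to the theoretical 1 - 0.95**20, about 64%: fish through twenty
# subgroups and you'll usually land a spurious "finding".
```

The standard fix, if you do want to scan many subgroups, is a multiple-comparisons correction such as Bonferroni, where you divide your significance threshold by the number of tests you ran.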
Actually, 95% confidence lies +/- 2 standard deviations from the mean (1.96 to be exact).
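(For anyone who wants to verify the 1.96 themselves, a quick one-liner, assuming you have scipy handy:)

```python
from scipy.stats import norm
# A two-sided 95% interval leaves 2.5% in each tail of the normal curve.
print(norm.ppf(0.975))  # 1.959963984540054
```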
What I don’t understand is how this applies to a broad study of scientific articles in general. If I read 100 articles about climate change, are 5 of those articles proven ‘right’ (or ‘wrong’) only by coincidence? Which ones? How would you determine that? Won’t they all be making slightly different assumptions and thus reaching different conclusions? What mean (I’m guessing conclusion?) and standard deviation are you judging the research by?
I think this is a misapplication of statistics. The “95 (or 90 or 80)% confidence” refers to the data of the study itself, not as a comparison between studies. It can either relate to the predictive ability of the study within a range or the likelihood of seemingly outlying data still being supportive of the theory as just a rare (but occasionally expected) occurrence on either tail of the data.
No, I mean that studies with a 95% confidence interval have a one-in-twenty chance that their results are due to coincidence, even if they are properly designed. You can’t use coincidence to prove something right or wrong; I am not sure what you mean by that.
Well, you don’t know - that’s the point. That’s why scientists respond to some study showing some anomalous result by asking that the results be replicated.
Not if they’re properly designed.
All that should be included when the research is published.
Part of my point is that that kind of information is often not included in articles in the popular press.
It’s hardly a reason to discard science altogether, but it is a limitation that needs to be kept in mind. Science cannot really arrive at truth, only successively closer approximations of it. In practical terms, that usually works out well enough (eventually) in areas susceptible to the scientific method, more so in the long run than in any single study.
I guess what I am saying is that I have seen enough stuff like “breast implants cause lupus” or “Alar is killing our children” or what have you to be less than enthusiastic about a lot of topics, even if they happen to be politically correct.
Again, this is mis-stating things somewhat. If you’re talking about confidence intervals, there’s a ~5% chance the true value lies outside the 2-SD range. But that’s not the chance that the null hypothesis is true. For example, let’s say we were trying to show people called Steve are taller than average. The null hypothesis is that Steves are generally of average height.
So we set up a study. We measure a bunch of Steves, and we get an average Steve-height of 10cm above the norm, with a 95% confidence interval of ± 5cm. So there’s a 5% chance that the average height of Steves is outside the range +5cm to +15cm. That doesn’t mean there’s a 5% chance that Steves, on average, are of average height. It just means there’s a 5% chance this data didn’t really prove our hypothesis. We’re trying to find the true value for average Steve-height; it could be literally anywhere, and all that 5% tells us is the probability that it’s outside our 2-SD range. It is not the probability that the average Steve-height is the same as for everyone else.
So when you look at a paper using 95% confidence intervals, there isn’t a 5% chance that it is wrong. There is a 5% chance that it did not really prove what it says. This is a very important distinction.
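If it helps, here’s a simulation of the Steve example (all numbers invented, including the 16 cm spread in heights) showing what that 5% actually measures: how often the interval misses the true value, not how often the null hypothesis is true:

```python
# Many simulated "Steve studies" where the true average Steve-height is,
# by construction, 10 cm above the norm. Count how often the 95% CI
# fails to contain that true value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_effect, spread = 10.0, 16.0   # cm above average; both numbers invented
n_studies, n_steves = 10_000, 40
t_crit = stats.t.ppf(0.975, df=n_steves - 1)
misses = 0

for _ in range(n_studies):
    heights = rng.normal(loc=true_effect, scale=spread, size=n_steves)
    sem = heights.std(ddof=1) / np.sqrt(n_steves)
    half_width = t_crit * sem
    mean = heights.mean()
    if not (mean - half_width <= true_effect <= mean + half_width):
        misses += 1

print(f"The 95% CI missed the true effect in {misses / n_studies:.1%} of studies")
# About 5%, by design. That figure describes the procedure's error rate;
# it says nothing about the probability that Steves are of average height.
```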
You’re absolutely correct in that there is no connection between self-indulgence or scale and truth. But there is a connection between certainty of claims and potential profits. Certain persons stand to gain much power and money through the “green revolution”. If anyone stands to gain a considerable amount of anything because of a particular claim, some skepticism (and yes, some selfish short-sightedness on my part) is in order.
I think in the case of AGW, there is at least as much self-interest involved on the part of the deniers (a term I’m not entirely comfortable with, but nor am I comfortable with “skeptics”; does anyone have a truly neutral term?) as there is on the part of the believers (another uncomfortable term). How do you evaluate self-interest in such cases?
Note that this sort of thinking is one of the key elements of the opposition to vaccines: Jenny McCarthy and her fellow scientists believe that vaccines are pushed on us by a medical industry that profits from the vaccinations.
This is incorrect - science can in fact arrive at the truth, and often does. (For an example, look up what science says will happen if you lift up a granite rock and then drop it with zero initial velocity in earth-normal gravity in one atmosphere. Will it drop, or will it hover? Then try it and see if it’s only right 19 times in 20.)
What really happens is that occasionally science is wrong about things it predicts or supposes. This is fine with science; it’s built to accept the possibility that one day even gravity will be shown to work in an unanticipated way in some esoteric situation. This may sound bad, but it’s actually pretty realistic, since we’re generally talking about situations where there are a lot of unknowns. (Notably, confidence levels come from statistics, where by definition you are dealing with a small sample of a population whose properties are mostly unknown.) The time to start getting dubious is when somebody claims more certainty than the information they have would allow, like, say, speaking with complete confidence about an afterlife they’ve never experienced, or whatever.
I think what this should really show you is that just because the media claims that science has shown something doesn’t mean that it has in fact been scientifically demonstrated with nearly the certainty the media is imparting to it.
It is entirely possible - highly unlikely, but possible - that, by coincidence, the random motion of the molecules could coincidentally cause it to hover.
It depends on what you are testing. It is also possible that the theory of gravitation is wrong, and that some data will be discovered tomorrow that another theory will explain better.
Occam’s Razor says that scientific theories are not true; they are convenient. Once you start asserting absolute truth, you have left science and entered philosophy.
I think (with 95% confidence) that the most likely correct and most complimentary way to read this is as an irony-saturated concession, because the first sentence is just silly and the rest agrees with me completely.