Loveless version: examining things yourself vs. taking other people's word for it

The other thread is, in my opinion, a complete freakin’ trainwreck, so I figured I’d start a new thread. It’s a question I’m pretty interested in.

The great virtue of the scientific method lies in peer review and repeatable, testable results. Astrology doesn’t really lend itself to peer review, it’s difficult to repeat results with prayer, and it’s hard to test the workings of aura detection. But climate change data are subject to peer review, vaccination links to autism are testable, and the effects of a new cancer treatment are repeatable.

But I don’t do any of that myself. I’ve never tested a vaccine before I’ve taken it. I believe that anthropogenic climate change is occurring even though I’ve never read the raw numbers. If I got cancer, I’d undergo the treatment my doctor recommended–probably after reading some articles about it online, but certainly not after conducting my own double-blind randomized multi-institutional studies of the treatment.

Often global warming deniers will get up on a high horse about how the supposed experts aren’t experts; I had one guy say, apparently in all seriousness, that it only took about 30-40 hours to learn everything a body needed to know to evaluate claims of AGW. Vaccine skeptics will trumpet, “Read the data!” as though any fool reading the data will conclude that vaccines cause autism (which, to be fair, seems to be true).

These positions strike me as absurd. Although the virtues of the scientific method are in peer review and repeatable, testable results, neither I nor anyone else has the time, energy, or motive to review, repeat, and test all results before accepting them as probably true. A scientist within the field needs to be skeptical, needs to review the data, needs to attempt to repeat and test results.

But I, as a layperson, do not. What I need to do is to understand the process, and to evaluate the process. I need to watch for places where there may be corruption or bias in the process, and take results from corruption or bias with great skepticism. But I know the process is the greatest predictive tool in human history; and when I can’t see bias or corruption in the process, I am pretty freakin’ accepting of the results of the process.

Thoughts?

I agree with you that the other thread is a mess. I don’t think there’s going to be all that much dispute over this unless someone wants to make a similar faith-based argument here, but I could be wrong.

I’ve seen this position attacked from the side of skepticism, e.g., AGW skeptics or vaccine skeptics, before. They think that I’m a sheeple or some such, I suspect, because I don’t dig down into the raw data of scientific studies, suggesting that I’m placing blind faith in scientists.

This is true to a point. One has to have faith in the process and the people participating in it, because there is no realistic alternative. However, even among peer-reviewed studies done in good faith, at least one third of published studies reach a false conclusion. That is partly because studies that find a counterintuitive conclusion are more likely to be published, and partly a statistical consequence of the sheer number of studies done. Add to this that not all studies are done in good faith, and that peer review just means nothing false jumped out at the reviewers; they do not replicate the research.
This should lead us to humility about our opinions, since the best tool we have for finding empirical truth fails us so often. However, the nature of argumentation is such that arguments usually break down into how the debater is on the side of science and rationality and anyone who disagrees is a flat-earther ignoring all of the evidence.

You know I gotta: cite?

It’s certain that the peer review process does not lead to the publication only of “true” conclusions (and whether one could even dichotomize research publications in such a way is an interesting question) but I’ve not seen any data before to quantify this. Do you have some sort of cite to the idea that 1/3 of published studies reach “false” conclusions?

ETA: In general, by the way, we follow similar processes in making decisions even outside of the realm of science. For instance, we work with contractors and rely on their opinions and beliefs when making changes to our house, even though many such decisions are the result of their own experience or beliefs in their field at the time rather than the result of empirical analyses. We purchase shoes that promise to tone our butts even though these claims are not (to my knowledge) empirically validated. It is not possible for us to spend our time evaluating each claim.

Same here: where did you read this?

What some people, largely fundamentalists and social conservatives in my experience, don’t understand is that the world is complicated. They expect that evolution, global warming, or whatever, is a house of cards with no real evidence behind it, just a webwork of flawed assumptions based on misinterpreted evidence. They think that poking a single hole in any bit of evidence supporting it will bring the whole thing crashing down.

So they look for anything that can be painted as incorrect, hold it up, and then say, “See, it’s all a fraud!” The Mann Hockey Stick Graph is an example of this. The whole problem with this technique is that the evidence for Global Warming is so pervasive and wide-ranging that nitpicking one little thing isn’t going to overturn it. It would take real science to explain why it isn’t happening.

I have a fundy-friend and we had a facebook debate about evolution and he literally believes that evolution is an atheist counter-religion to Christianity. He would try to poke individual holes with a certain fossil or whatever, because he didn’t understand how utterly well documented evolution is. Saying that fossil X is a fake doesn’t mean that the other twenty million fossils are fakes too.

This misunderstanding of how complex the world is makes them think that anyone can look at the data and suss out where it’s flawed. I can’t look at a thousand lines of PHP and tell you where an error is. Why should someone think they can look at a complex journal article and overturn millions of man hours of scientific understanding?

Here is the cite which claims most published findings are not true. It is a little strong for me, so I watered it down to one third. Here is some math, which I got from Alex Tabarrok: take 1,000 hypotheses, of which 800 are false and 200 are true. Standard statistical practice (a 5% false-positive rate) will result in 40 of the false hypotheses being confirmed by the research. Designing valid studies is hard, so probably the best you can do with the 200 true hypotheses is to find 60% of them (120) statistically significant. So out of the statistically significant studies, a quarter (40 of 160) are false.
This does not take into account the bias of journals toward publishing surprising results, and false results are more likely to be surprising.
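The arithmetic above can be sketched in a few lines; the base rate of true hypotheses, the false-positive rate, and the power figure are the post’s assumptions, not measured values:

```python
# Sketch of the false-discovery arithmetic described above.
# Assumed inputs (from the post, not from any particular study):
#   1000 hypotheses tested, 800 false and 200 true,
#   a 5% false-positive rate, and 60% power to detect true effects.

n_false, n_true = 800, 200
alpha = 0.05   # chance a false hypothesis is "confirmed" anyway
power = 0.60   # chance a true hypothesis reaches significance

false_positives = n_false * alpha    # 40 false hypotheses confirmed
true_positives = n_true * power      # 120 true hypotheses confirmed
significant = false_positives + true_positives

false_share = false_positives / significant
print(f"{false_positives:.0f} of {significant:.0f} significant results "
      f"are false ({false_share:.0%})")
# -> 40 of 160 significant results are false (25%)
```

Publication bias then skews the picture further, since the 40 false positives are disproportionately the "surprising" results journals prefer.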

It is amazing the way people with only a tiny knowledge of science point to some factoid and feel that they’ve cleverly broken a scientific theory. Simple examples would be the many who ridicule global warming because it snowed a lot last season. I think the Dunning-Kruger effect has a lot to do with it: The less one knows the more one thinks one knows.

There is a different but related effect. People who actually do have strong skills in a specialty may conclude that they are therefore adept in broader areas. On other message boards I’ve seen electronic technicians, probably quite competent in that field, denouncing climate change, or espousing bizarre economic theories.

Perhaps this is a better way to put it. From what I know of the process, it has some merit, but also some failures. Since you request leaving Love out of the equation, I could challenge the process on the philosophical theory that reality may be at least in part subjective, and I would have some pretty strong supporters of subjective reality.

Oh boy. Just dangle the bait in front of me Left Hand.

I agree that finding a problem with one bit of data concerning evolution does not weaken the theory as a whole, or strengthen the thoroughly disproved theory of creationism.

I don’t agree that this applies to every discrete conclusion. The scientific method itself requires constant questioning and retesting of previously accepted science. We constantly learn new things and refine our ability to measure, and history is replete with completely accepted science that turns out to be wrong when new facts arise.

And there are massive areas of soft science that do not resolve issues based on mathematics or physical principles, but simply draw conclusions based on consistency of observations. And those observations will tend to be subjective in nature.

If you take a look at the widespread business of medical research grants, you will see an abundance of bullshit presented as science. The latest scam is to statistically analyze previous research to draw new conclusions, as if compounding errors tends to make them go away somehow.

Finally, people are often willing to accept and disseminate information without any understanding of the underlying basis. It’s not hard to find numerous people who will claim something is established science because they read it in a book, but who are then unable to justify their claim against simple tests of logic. It’s amazing how much anger can be provoked by challenging beliefs that the believer is convinced have a scientific basis. Sometimes the scientific basis is faulty; sometimes the believer’s understanding is faulty. But it doesn’t look much different from the reaction of those whose religious beliefs are challenged.

The issue is complex, but while I’m largely in agreement with your position on global warming and the like, I have caveats. We are all taught in school that scientific results are testable and repeatable, but few classes delve in depth into the process. I’ve known a lot of people who got into a research or Ph.D. program and were quite surprised at the extent to which what they saw firsthand diverged from what their high school textbooks led them to expect. I was one of those people. While I’m skeptical of the statistics in puddleglum’s link (the author appears to use estimates without much grounding), I would lean towards the idea that there’s probably a problem with a fair share of published results. This recent article in the New Yorker looks at a variety of prominent cases in which initial results could not be replicated.

One major instance is the issue of the psychiatric drugs known as SSRIs used to treat depression. There’s a growing body of evidence that they perform no better than placebos in most cases. However, getting the facts out there and convincing professionals to act on this knowledge is proving amazingly difficult. Americans still spend billions per year on these drugs despite the more recent evidence. Many folks seem to assume that since these drugs passed the scientific barriers once, there must be something to them, no matter what the more recent studies say. But if we were actually willing to look at the original studies we might see, for instance, that the drugs only had the desired effect on the most severe cases of depression, and perhaps were not even tested adequately on milder cases. That’s the advantage in looking at the studies rather than just at the pop media summaries, or blindly trusting the experts who are supposed to follow the studies.

There’s also the issue that not all fields of study are created equal as far as our ability to be purely objective. Generally it’s much easier to exactly replicate results in the physical sciences than the social sciences. In fields such as psychology or behavioral economics the researchers may deal with concepts such as happiness, anger, shyness, etc… But who decides how these things are defined? Obviously each individual researcher makes such a decision when doing his or her study. But one definition may vary from another, and it may differ quite a bit from our everyday understanding of the term. So it’s often worth reading the original material to see exactly how they are defining their terms.

I hate that shit. Someone I know was at the Telluride Bluegrass Festival last week, and it snowed (which, yes, is unusual, but we’ve had a pretty cool spring here in Colorado, relatively speaking), and she was all like “what global warming? hyuck hyuck hyuck.”

The need to break our understanding of reality down into black and white, simplistic explanations makes it difficult for people to comprehend or tolerate the complexities of much of the science around us.

Emphasis added by me.
Say what? Cite?

Wait (on edit), are you claiming that any fool reading the data comes to that conclusion? Or are you claiming that the data seems to show that vaccines cause autism?

The former–it was a joke about fools reading data :).

Which brings me back to puddleglum’s cite. I’m not qualified to analyze this statistical breakdown of studies. So what should I, as a layperson, do with it? It strikes me as unbelievable, but whereas puddleglum confronts its incredibleness by weakening the stat and calling it a day, I’m more inclined to dismiss it out of hand as the product of poor statistics–even though I don’t understand the statistics. Instead of following my inclination, I’d prefer to read the peer review of this claim. Do other folks consider this author’s statistical analysis valid? Has it been repeated?

edit: kanicbird, you’ve got a whole other thread in which you can stare at your fingers and marvel at, like, the grooviness of God or whatever. I’d prefer you not bring that act over here; I started this thread specifically to avoid it.

puddleglum’s cite doesn’t surprise me a bit. I’m not sure of the exact numbers, but lots of incorrect data gets through. Remember, peer review is not evaluating the experimental results, but more the process which led to the results. If a reviewer for a paper in an area he doesn’t have direct experience with sees results off by 50%, he’d probably never notice. First, reviewing is one more job, and reviews are of widely varying quality. Second, some reviewers can be influenced by the name on the paper. I believe Feynman noted that the first measurement of the charge of the electron was off, and the values in subsequent papers by others migrated to the actual value over time.
If you’ve ever seen five reviews of one paper, with one calling it great, one calling it crap, and the other 3 just saying “interesting” you wouldn’t put so much faith in peer review.

However, that doesn’t mean that science does not work. Most papers will sink into the bog, totally unreferenced. Work on important subjects will be repeated, and that is where the self correction comes in. Mistakes in papers no one cares about will stay forever, mistakes in important papers (remember cold fusion) will be rapidly caught and corrected.

Forgot my example. When I was in grad school, someone who had just gotten a PhD published a paper in the most prestigious journal in the area saying that his algorithm was optimal. Neither he nor any of the reviewers noticed that this would have revolutionized complexity theory and proved that P = NP. Two issues later there was a flurry of letters, including one from the author, saying “oops.”