I don’t believe for a second that climate change is a ‘hoax’ or a conspiracy.
However, institutional bias is rampant in science, and climate change is the most politicized scientific subject I’ve ever seen, so it would be easy to believe that there is bias embedded in the literature.
If you don’t think biased results happen, I suggest you read the history of the Millikan oil drop experiment. Millikan’s published value for the charge of the electron was slightly off (he used an incorrect figure for the viscosity of air). That happens once in a while, and his conclusion was still correct, so that’s not the scandal. The scandal is that when people replicated the experiment, their published results would closely align with his, each skewed just slightly toward the correct number. Then the next attempt at replication would land a bit closer still. It took years for the published values to converge on the true number.
How come? Because Millikan was a famous scientist with a reputation. When people tried to replicate his results, they started from the assumption that his numbers were correct. So when their own experiments returned a different result, they’d assume something was wrong with their setup and tinker around until they got a number closer to Millikan’s. That result would then get submitted, and the reviewers wouldn’t look too closely at it, because it was close to Millikan’s and therefore ‘obviously’ correct.
Once a new result was published that confirmed Millikan’s numbers, that made any divergent result even more suspect, which biased the research further.
Bias in academia isn’t just about grant money. It’s also about reputation, pleasing tenure review boards, not having a target painted on your back, etc.
You can imagine that in an environment where the ‘denier’ label is a career-killer, results that go against the ‘consensus’ probably receive ten times the scrutiny of results that agree with it. You can also imagine that someone angling for tenure or a promotion is going to want to find results that confirm the ‘accepted’ models.
One of the problems today, in all sciences and not just climate science, is that data mining is much easier than it used to be, and computer models and simulations allow rapid testing of many different hypotheses.
Consider two published results of a statistical analysis - both show a very strong correlation between a cause and an effect. Imagine that the first one started out with a certain hypothesis but couldn’t find confirmation in the data. So the scientists tried another hypothesis, then another, then another, until they finally hit on something that showed a strong correlation. They published that, and didn’t mention all the misses or how much tweaking of the data it took before they found that correlation.
The other paper started with a single hypothesis; the data was then inspected and the correlation found.
Which of those papers do you think has a stronger chance of being right?
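To make that concrete, here’s a rough Python sketch of what the first paper’s approach amounts to, using invented numbers and no real data: test enough unrelated ‘causes’ against the same noisy ‘effect’, keep only the best-looking correlation, and you’ll usually end up with something that looks publishable.

```python
# Illustration of "try hypotheses until one sticks" (all numbers made up).
# Every variable below is pure noise, so any "strong" correlation found
# here is spurious -- yet one usually turns up anyway.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_samples = 30       # observations per variable
n_hypotheses = 200   # candidate "causes" tried against the same "effect"

effect = rng.normal(size=n_samples)
candidates = rng.normal(size=(n_hypotheses, n_samples))

best_r, best_p = 0.0, 1.0
for cause in candidates:
    r, p = stats.pearsonr(cause, effect)   # correlation and its nominal p-value
    if abs(r) > abs(best_r):
        best_r, best_p = r, p

print(f"Best of {n_hypotheses} tries: r = {best_r:.2f}, nominal p = {best_p:.4f}")
# This will usually report a correlation with a nominal p-value below 0.01,
# even though every variable is random. Report only this hit, hide the other
# misses, and noise looks like a discovery.
```

The point isn’t that every multi-hypothesis study is wrong - it’s that the reported correlation and p-value mean a lot less once you know how many quiet misses came before them.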
Now imagine an environment where a ‘wrong’ answer will end your career if it errs in one direction, but go unnoticed, or even help you, if it errs in the other. What are you likely to do if you get a ‘wrong’ answer? Well, if it’s an answer that goes against the consensus, you may assume that your data was bad, or your experiment was set up wrong, or that your hypothesis just needs a bit of tweaking. But if the wrong answer lands in the ‘consensus’ region, you just say “Jackpot!” and publish.
Even if the individual scientist is completely scrupulous, that kind of bias can twist results. And if the paper has to go through internal faculty review, there’s a good chance the same bias will get it accepted or rejected depending on which way the error goes. Thus, bad results that amplify the consensus get published, while bad results that would work against the consensus are subjected to enough scrutiny that they are caught. Or maybe GOOD results that go against the consensus fail to be published because one of the gatekeepers decides they ‘must’ be wrong, or doesn’t need the headache of publishing a ‘denier’ paper. Or maybe he thinks that global warming is real and critical to mankind, so publishing papers that will be used as weapons by ‘deniers’ isn’t helpful, even if they are correct.
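Here’s a toy Monte Carlo sketch of that filtering effect - all the numbers are invented - where every lab measures honestly but results on the ‘wrong’ side of the expected value rarely survive review:

```python
# Toy Monte Carlo of asymmetric gatekeeping (all numbers invented).
# Every lab measures honestly -- unbiased noise around the true value -- but
# results below the "expected" value face extra scrutiny and mostly never
# get published.
import numpy as np

rng = np.random.default_rng(1)

true_value = 1.0       # the actual quantity being measured
expected_value = 1.3   # what the field "knows" the answer should be
noise = 0.3            # honest measurement error (standard deviation)
n_labs = 10_000

measurements = rng.normal(true_value, noise, size=n_labs)

# Results at or above the expected value sail through review.
# Results below it get heavy scrutiny; say only 1 in 10 survives.
survives = np.where(measurements >= expected_value,
                    True,
                    rng.random(n_labs) < 0.10)
published = measurements[survives]

print(f"true value:               {true_value:.2f}")
print(f"mean of all measurements: {measurements.mean():.2f}")   # ~1.00, unbiased
print(f"mean of published papers: {published.mean():.2f}")      # pulled well above 1.0, toward 1.3
```

Nobody fudged anything, yet the published average lands closer to the expectation than to the truth, and every new ‘confirmation’ makes the next honest outlier look more like a mistake.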
That’s exactly what happened with Millikan, and the only bias there was the reputation of one man. The bias pressure on climate science is massive compared to that. It can’t help but affect the quality of the research. The question is how big the effect is, and whether it’s big enough to change the basic conclusions.
With Millikan, the scientific community eventually converged on the right answer because it was an objective fact that could not be avoided forever. But climate science deals in time frames of decades to centuries, and in predictions about complex adaptive systems, for which there can be no ‘proof’ one way or the other. That means there is nothing forcing results to converge on anything close to the same numbers - it’s all interpretation and probabilities. So biased or otherwise incorrect results could persist, and even be amplified, if they came from an authoritative source.