Studies do not say anything

Nothing in life is perfect. Usually, the imperfection is that people don’t check what the science is saying. But there is also the lesser-known problem of people who do check what the science is saying, so let me give an example. Suppose a study reports that paper bags, placed opening-down at a spot the locals call God’s Spot, reliably float up into the air, and the write-up concludes that God must be lifting them.

Or, you know, there’s a naturally occurring source of lighter-than-air gas venting up through the sand in that spot. (Which is plausibly God’s doing, but so is everything else if he made everything, so that’s no more evidence than chronicling any other part of our existence.)

Ultimately, the only part of a paper that’s worth a damn is the middle part that gives the actual details of what was done and what the results were. The other parts are, in some sense, just an opinion piece on those results and their meaning. (Not to say that a knowledgeable person’s opinion is without value - they may be laying down the solid truth, and they may be able to provide context that helps you understand the experiment and its results better. But there is no law of the universe that says this is so. Some research scientist out there passed with straight D’s, and anyone on the internet can write up a paper in the correct format, submit it somewhere, and have it look official.)

But even once we give true weight only to the middle part, we still have to accept that it tells us only what it tells us. Paper bags float off when placed opening-down at God’s Spot. It demonstrates nothing more and nothing less than that.

https://www.reddit.com/r/xkcd/comments/1apvjzl/any_other_xkcds_that_make_showing_or_applying/

How much value to take from the research depends, in part, on your ability to grade the quality of that research. You may well not be qualified to do that, and neither might the average science journalist reporting on it, nor the average social media influencer - and they’re liable to take the author’s more user-friendly summation as gospel without even reading the methodology.

So you might say that the best strategy is to find an expert in the field to listen to, since they (presumably) have the chops to evaluate this stuff? No, we know that doesn’t work: there are any number of scam influencers with the qualifications to know better, and how are you to know which expert to back, or whether you should pay attention to either one?

I’ve generally suggested to people that they should follow the guidance of large organizations of experts, like the American Heart Association or the American Physical Society, since anything they promote has to have been signed off on by a committee. The evidence has to have satisfied a majority of the experts.

But, even there, we know from history that sometimes the off-the-wall, long-shot theory that “all the experts” pooh-poohed ended up becoming the accepted answer - continental drift, say, or the bacterial cause of ulcers.

Some branches of science - for example, psychology and macroeconomics - may be relatively far behind other areas of academia (but catching up).

Still, probably the best answer is to trust associations of experts. Past that: actually read the research (mostly the middle part), try to learn statistics and how to lie with statistics, try to be conversant in the basics of all things, but most importantly accept:

All the things you’ve ever heard or said are just some idea some human had. That idea might eventually prove to be wrong.

Here’s something I saw over at Respectful Insolence today that seems relevant.

It’s freakishly rare for a bolt of scientific lightning from out of nowhere to overturn a large body of painstakingly accumulated and diligently replicated research. The chances of Some Guy On Social Media Doing His Own Research and overthrowing rigorous scientific consensus are zero.

There are many areas in medicine and science where I don’t have the qualifications to understand exactly how research was carried out and don’t have the skills/chops/time to evaluate that “middle part” of publications, including statistical analyses. That’s why to a large extent I depend on the qualifications and skills of experts in a particular field, to put papers in proper perspective.

The arrogance of so many people these days who resent and belittle experts is hard to believe, but that’s where we are.

Well, so, that’s where the debate begins.

First, I would submit that there is research that’s poor, that doesn’t mean what the researcher believes it to mean, and that people shouldn’t treat scientists and researchers as infallible gods.

As you say, that’s me in that thread - not a doctor - critiquing the research output of a doctor. Likewise, here I am complaining about the IHME prediction model of COVID fatalities (which I believed failed to incorporate lag times between case counts and fatality counts, and assumed a bell curve rather than the fast-rise, slow-fall curve that epidemiology models usually predict).
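To make that curve-shape complaint concrete, here’s a minimal sketch - a textbook SIR model stepped one day at a time, nothing to do with IHME’s actual code, with made-up parameters - showing that the standard epidemic curve climbs faster than it falls:

```python
# Minimal sketch: a textbook SIR model, Euler-stepped one day at a time.
# This is NOT the IHME model; beta, gamma, and the population size are
# made up, purely to show the fast-rise, slow-fall shape.
import numpy as np

def sir_daily_infections(beta=0.3, gamma=0.1, days=200, n=1_000_000, i0=10.0):
    """Return an array of new infections per day for a simple SIR epidemic."""
    s, i = n - i0, i0
    new_cases = []
    for _ in range(days):
        infections = beta * s * i / n   # new infections today
        recoveries = gamma * i          # recoveries today
        s -= infections
        i += infections - recoveries
        new_cases.append(infections)
    return np.array(new_cases)

curve = sir_daily_infections()
peak = int(curve.argmax())
# Days to climb from 10% of the peak rate up to the peak...
rise = peak - int(np.argmax(curve > 0.1 * curve.max()))
# ...versus days to fall from the peak back down to 10% of it.
fall = int(np.argmax(curve[peak:] < 0.1 * curve.max()))
print(f"rise: {rise} days, fall: {fall} days")  # fall > rise: skewed, not a bell
```

Fit a symmetric bell to the rising half of a curve like that, and you’ll predict the epidemic - and the deaths - ending much sooner than it actually does.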

And in other threads on other topics, I have debated the merits of research, doubting the quality of the methodology and speculating about confounders, spurious correlations, etc.

How do I know that I’m qualified to do so? Am I always wrong to do this, minus an applicable degree? Can I say that public debate, hopefully populated with experts who can respond to questions and concerns, is a positive activity? Or does that spread doubt and the willingness to doubt science among those who probably shouldn’t do so?

Is there any safe way to choose individual experts to follow? Can I trust Gil Carvalho?

Where are the boundaries of consuming, and communicating about, raw research?

Covid completely revolutionized public interest in, and public access to, scientific research in its raw form. In some ways, that was a good thing, as it put much-needed information into the hands of people who needed it. In other ways, it gave some people the impression that scientists are just guessing, whenever a particular study turned out to be wrong upon peer review. It used to be that only absolute experts in a field would read unreviewed studies off a pre-print server, so the general public only got the filtered version you would read in a newspaper a year later. This situation has apparently encouraged many people to consider themselves scientists when they are not equipped to judge the merits of any particular methodology.

I believe it’s probably a good thing to “democratize” access to scientific research, so everyone can be informed, but people should absolutely know their limitations.

Sadly, some politicians and media groups simply lie without any backing evidence, which is far worse than the problem you describe.

But your average person often fails to catch it, because they’re paying attention to spreaders of nonsense “facts”.

If the “experts” are saying that vaccines kill and that Mexico is subsidizing Islamic terrorism into the nation, then you need to hire the politician who will act on those things. That’s, on its face, pretty rational.

Right now, we have some politicians saying that we need more “shoot first, ask questions later” options on the border, while the Border Patrol’s elected union - whose leader endorsed Trump in 2016 - is saying that the new border legislation would be sufficient for our needs if the Freedom Caucus would pass it.

If people were paying attention to the expert group rather than to individuals, we’d be in a different position.

I think the primary role of experts is to explain, not dictate. But this is just my opinion.

Not everyone thinks the ideal society is an enlightened democracy. Traditional systems such as Confucianism, to the best of my understanding, promote a social hierarchy where people stay in their lane. Leave the science to the scientists and the farming to the farmers. Catholicism is somewhat similar, depending on who you ask. Compare also Plato’s Republic.

~Max

I don’t think that Confucianism holds up to scrutiny, but I feel like that would go too far afield of the central topic, so I’ll just leave it at that.

Let me be more explicit.

This is a straw man of the scientific method. Between methodology and the conclusion come the most important sections: results and analysis. You’ve botched it by putting results in methodology and leaving out analysis entirely. Even the hypothesis is flawed. I’m giving you an F on this project, Mr. Sage_Rat.

Scientists are experts not only in coming up with experiments (methodology section) and reporting conclusions, but also in explaining facts and conclusions. The primary purpose of a study to the general public is not, “here’s a cool idea for an experiment”. And it is not to tell the public what to think. It is to report facts and explain how they lead to conclusions.

~Max

IMHO it’s a good example of two ways of trying to understand the world. There’s the authoritarian “believe what I say because of who I am” vs. the authoritative “believe what I say because the evidence says this is the best explanation we currently have, and here’s the evidence”. Confucianism is just one example of the authoritarian model of seeking knowledge.

Part of what’s going on is that authoritarians present the greatest advantage of the scientific method, the willingness to throw out an old explanation when new evidence shows it to be incorrect, as a flaw.

I wouldn’t describe Confucianism as authoritarian in that sense. The gist is not that “experts are infallible” so much as “don’t be a backseat driver”. Like, if you aren’t in the field of medicine, maybe you shouldn’t be reading medical journals with an eye to challenge the research in the first place. But there’s a reciprocal obligation of the medical establishment to do right by the rest of society.

~Max

Adult-onset lactose intolerance (LI) took until about 1970 to be widely recognized as a real thing. Once that happened, teams of researchers went off around the world measuring the percentage of LI in countries and tribal peoples (the term used then), as well as doing experiments on undergraduates. A flood of papers was published in every medical journal, along with a variety of anthropology-journal articles tracing its origin and spread.

Almost none of this made it to the general public, except for one Scientific American article. Lactaid pills didn’t appear until 1984, the milk later. Lucky me, I got diagnosed in 1978. I had to do my own research, before the internet.

In 1983, I decided to write the first book on the subject. The basement of my local School of Medicine was filled with old bound copies of every major medical journal in the world. I started reading. A year later, I had read every article on LI ever written, as well as all the anthropological literature in another school library. I may have been the only person in the world to do so.

And at least 50% of it was sheer junk. Those wandering researchers used different tests, prepared in different ways, on non-randomly selected populations, creating results that could not legitimately be compared. The statistics used were so basic that I could judge them and find them wanting.

An incredibly tiny proportion of the studies contained information that would be of any use to the ordinary consumer. Almost all of them were written for no better reason than to add a credit to someone’s CV.

Humorously, when genetic studies finally rolled around several decades later, the geneticists roved around the world in the footsteps of the anthropologists and came up with the same conclusions about the spread of LI - without any credit to the studies I had read thirty years earlier.

You might say that I became jaundiced as to the value of the average peer-reviewed article printed in a prestige journal. Unquestionably, other articles on other studies in other fields are life-saving. And unquestionably, only an expert extremely well read in that field will be able to spot them.

Interesting. Did you find a correlation between the quality of a study and how frequently it was cited with approval? Because that’s tracked pretty well in the modern day, and in my experience it’s a good indicator of consensus. Or in other words, did you find a consensus in 1984, and was it quality science?
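(Citation counts are easy to pull these days, too. Here’s a minimal sketch, assuming the free Semantic Scholar Graph API; the search term and field choices are purely illustrative:)

```python
# Minimal sketch: fetch citation counts from the public Semantic Scholar
# Graph API. The query string below is just an illustrative example.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "query": "adult-onset lactose intolerance",  # illustrative search term
    "fields": "title,year,citationCount",
    "limit": 10,
})
url = f"https://api.semanticscholar.org/graph/v1/paper/search?{params}"

with urllib.request.urlopen(url) as resp:
    papers = json.load(resp)["data"]

# Citation count is only a rough proxy for consensus weight, but it's a start.
for p in sorted(papers, key=lambda p: p["citationCount"], reverse=True):
    print(f'{p["citationCount"]:>6}  {p["year"]}  {p["title"]}')
```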

~Max

In theory, that’s true. But there’s that thing about theory and practice.

(You forgot Disclosures - probably the most important section other than the middle.)

Well, it can be. But there are a heck of a lot of studies where there’s simply nothing to disclose.

The ones most likely to end up out in the wild are probably the ones most likely to have a disclosure hidden down at the bottom.

Maybe it’s not particularly relevant in sociology, but for research into supplements, hair tonics, beauty procedures, etc. - the sort of thing that a normie might look up - conflicts of interest are pretty rampant and good to pay attention to, I’d suspect.

Or for something like border patrol, knowing who you’re reading and what biases they might have - generating your own mental “disclosures” - is also pretty good to do.

Or the effectiveness of hydroxychloroquine for covid. If a researcher is affiliated with a company that sells hydroxychloroquine, then that’s definitely a relevant disclosure. If they’re affiliated with a company that sells other anti-covid medications, that might be. But if they’re not affiliated with any drug company at all, what are they supposed to disclose?

And of course, the real problem isn’t when people don’t read the disclosures. The real problem is when a study should have disclosures, but doesn’t. How does a layman tell the difference between a study that doesn’t have any disclosures because there’s nothing to say, and a study that doesn’t have any disclosures because the researchers unethically left it out?

Sure, but 100% of people who declare a conflict of interest have a conflict of interest. The rest are a lower percentage.

Databases of citation frequency may have existed in 1984, but I never saw one.