How do you determine "The Truth"?

Everything in the past was true.
The present goes by too fast to determine what is true.
Anything could be true in the future.

It’s extremely unlikely that lies will survive peer review.

However, sadly, simply repeating a lie is now seen by some as ‘making it true’, e.g. ‘the election was fixed’.

It may be reasonable to state that lies generally don’t survive peer review. But the degree to which falsified or manipulated data in published research has been exposed in recent years makes “extremely unlikely” sound wildly overoptimistic. Just look at The Lancet, which notoriously suffered major embarrassment over the fraudulent Wakefield MMR paper, then had to retract a paper last year claiming that hydroxychloroquine treatment for Covid-19 was dangerous*, based on data from a patient database that may not even have existed.

There have now been at least 75 Covid-19 papers retracted from scientific journals for a variety of reasons (including but not limited to overt fraud or lying), two of them involving Surgisphere “data”.

*note that effectiveness of HCQ for Covid-19 was grossly overstated or invented by numerous folks in and out of the scientific community, including he-who-shall-not-be-named.

Facts I embrace, for they can be adjusted/corrected as information is added. The Truth on the other hand I stay away from as much as possible for it is religious in nature, bound in feeling instead of thought.

I avoid “The Truth” whenever it is capitalized.

Then give me back my pamphlet.

Speaking as both a reviewer and someone who has edited journals: you have too high an opinion of peer review. Lies won’t survive the nasty letters to the journal after junk sneaks through.

Example: In the field I did some of my graduate research in, someone published a paper in the most prestigious journal proving that his linear-time algorithm could optimally solve a problem known to be NP-hard. The very next edition had a retraction letter from the author and from others. It wasn’t a lie, it was a mistake, but reviewers are often busy, don’t get paid (rats), and sometimes let things slip through, especially if they are assigned papers they’re not really experts on.

A huge part of it for me is networked consistency and explanatory power. If a fact is alleged and it jars with everything else I know I am more sceptical than usual. But if it slots neatly into and is consistent with everything else I know, and explains things, then I’m far more likely to believe it. It’s a consensus reality, in my own mind.

The risk of that is if your network lacks diversity, you could be at risk of “groupthink” or being “in a bubble”. That’s part of the problem now with each “side” only taking information that is already consistent with their world view.

Hey, welcome back! I know you were reluctant to give examples, so as not to poison the well, but maybe you could do so now? Because the answer is different if you’re talking about philosophical truth, or scientific truth, or just looking for accurate daily news information.

Definitely a risk.

But my experience is that both the world and people are bad liars. If something is false, it will usually start to clash with the rest of reality and its falsity will usually become apparent.

But unfortunately for the reason you identify amongst many others - only “usually”.

“Oceania had always been at war with Eastasia.”

~Max

Seconded.

Peer review does a good job of removing the chaff, but what is left is not necessarily all wheat. Peer reviewers don’t generally double-check every last bit of data; they mostly take the authors at their word but point out glaring problems. Even then, just because something showed up as significant in an experiment doesn’t mean that it’s real.

A very large percentage of peer-reviewed papers report results that end up not being replicable. Sometimes the observed result was just a fortuitous bounce of the data, or some other covariate that they failed to account for. This isn’t a problem with science or a problem with the peer review process; it’s a problem with public expectation. The headline will read “Eating blueberries will cure cancer”, when the actual paper found that in a set of 6 out of 9 mice an injection of protein X into the eye showed a 20% reduction in retinoblastoma tumor growth compared with 1 out of 9 mice treated with saline, and in the discussion it’s mentioned that protein X is found in blueberries.
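To put rough numbers on that (reading the 6-of-9 vs 1-of-9 as responder counts, which is my own simplification of the hypothetical example), here's a quick Python sketch:

```python
# Hypothetical counts from the protein X example above:
# 6 of 9 treated mice responded vs. 1 of 9 saline controls.
from scipy.stats import fisher_exact

table = [[6, 3],   # treated: responders, non-responders
         [1, 8]]   # saline:  responders, non-responders

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.3f}")
# With only 9 mice per arm the p-value lands right around the usual 0.05
# cutoff, so a single mouse flipping the other way could change the verdict.
```

Significant by the usual standard, but fragile.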

What will then happen is that other people will try to investigate the mechanism of protein X, or how protein X interacts with protein Y, and publish their own findings. It may turn out that none of the rest of the papers find that protein X does anything. The finding was probably wrong, not faked, not malpractice, just incorrect. No one will get fired or have their reputation trashed, but the finding will be largely forgotten and ignored by the community at large.

However, if these other studies also show that protein X does reduce tumor growth, then it will be generally accepted by the scientific community. At that point it IS very unlikely to be incorrect.

In fact this effect is inevitable. If you publish whenever your experimental results have less than a 5% probability of being due to chance, then across all publications a lot of them won’t be replicable. You could only avoid this by enforcing tighter p-values, which means fewer papers would get published, or by making it standard to reproduce the experiment, which might be impracticable if it were a long-term or expensive one.
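A toy simulation of that arithmetic, with made-up numbers (assume only 10% of tested hypotheses are real and journals print anything with p < 0.05):

```python
# Toy model: many labs each run a two-group experiment and "publish"
# whenever p < 0.05.  Only a minority of the tested effects are real.
# All numbers here are assumptions for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_experiments, n_per_group = 10_000, 20
true_effect = 0.5          # effect size (in SD units) when an effect is real
frac_real = 0.10           # assume 10% of tested hypotheses are real

published_real, published_null = 0, 0
for _ in range(n_experiments):
    is_real = rng.random() < frac_real
    shift = true_effect if is_real else 0.0
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(shift, 1.0, n_per_group)
    _, p = ttest_ind(a, b)
    if p < 0.05:                       # "publishable"
        if is_real:
            published_real += 1
        else:
            published_null += 1

total = published_real + published_null
print(f"published findings: {total}, of which "
      f"{published_null / total:.0%} are false positives")
```

With modest power and mostly-null hypotheses, a large share of the “publishable” results come from experiments where nothing real was going on, which is exactly the replication problem described above.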

Yes, and in the process you would miss out on many interesting results for which the data was trending in the right direction but just didn’t have the power. My personal feeling regarding p-values is:

p > 0.1: ignore
0.1 > p > 0.01: interesting and worth exploring, but may be false
p < 0.01: probably real

Which reminds me of another caveat: just because a finding doesn’t replicate in a second experiment doesn’t mean it’s false. It may be that that experiment didn’t do it right, or that it didn’t have enough power to detect it.
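On the power point, a quick back-of-the-envelope in Python (the effect size and sample sizes are assumed, purely for illustration):

```python
# How likely is a smaller follow-up study to detect an effect that is real?
# Effect size (Cohen's d = 0.5) and sample sizes are assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_group in (10, 15, 30, 60):
    power = analysis.power(effect_size=0.5, nobs1=n_per_group,
                           alpha=0.05, ratio=1.0)
    print(f"n = {n_per_group:3d} per group -> power = {power:.0%}")
# A real, medium-sized effect gets missed most of the time by a
# replication with only 10-15 subjects per group.
```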

This comes up a lot in my line of work with genetic data. We publish a list of 63 genes that are different between the two diseases. Since we looked at tens of thousands of genes, we have to be very strict in terms of how we make our list; otherwise we will have a lot of things making the list just by chance. Meanwhile some other group working on the same problem also publishes their list of 42 genes that are different. Then when people look at the two lists they find that there are only 2 genes in common and conclude that the results aren’t replicable. In reality their genes all look good in our data and our genes all look good in their data, but they just weren’t good enough to make it past the strict thresholds both groups had to set in order to avoid random garbage.
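A rough sketch of that effect with invented numbers (not our actual data): two labs measure the same ~20,000 genes on the same real biology, each with independent noise, and each applies a strict genome-wide threshold:

```python
# Two labs test ~20,000 genes on the same underlying differences, each with
# its own noisy data, and each applies a strict (Bonferroni) threshold.
# All numbers are invented for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n_genes, n_real = 20_000, 200
signal = 4.0                                   # z-score of a real difference

truth = np.zeros(n_genes)
truth[:n_real] = signal

def one_study():
    z = truth + rng.normal(0.0, 1.0, n_genes)  # independent noise per study
    return 2 * norm.sf(np.abs(z))              # two-sided p-values

p1, p2 = one_study(), one_study()
strict = 0.05 / n_genes                        # Bonferroni threshold
list1 = set(np.where(p1 < strict)[0])
list2 = set(np.where(p2 < strict)[0])

# How do lab 1's strict hits look in lab 2's data at a *nominal* 0.05 level?
nominal_ok = sum(p2[g] < 0.05 for g in list1)

print(f"list 1: {len(list1)} genes, list 2: {len(list2)} genes, "
      f"overlap: {len(list1 & list2)}")
print(f"{nominal_ok} of {len(list1)} genes from list 1 are nominally "
      f"significant in study 2")
```

The two strict lists overlap only partially, yet nearly every gene on one list is nominally significant in the other lab’s data, which is the situation I’m describing.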

ETA: OK, too much inside baseball, but I needed to get it off my chest.

I was thinking more in terms of interpreting media information to make informed political decisions.

In some cases, it doesn’t matter what “The Truth” is, because some people believe what they want to believe for purely partisan reasons. For example, I have some right-wing Facebook friends who are enraged with Biden because the price of lumber has gone up. I couldn’t tell you if he enacted any specific policies that caused that, though.

There’s an article in the NY Times right now about how demand collapsed during COVID and now the problem is supply:

That probably explains the rise in lumber prices. Basically, you can ignore anything right-wingers are complaining about and just read the NY Times or the Washington Post (or NPR, or the BBC for more international news). You’ll get 95% of The Truth, very little bullshit. You’ll be close enough to The Truth that you won’t have to worry about the rest.