The old-fashioned kind of database, i.e. the library. Science libraries generally subscribed to the “Science Citation Index”. It is much better, of course, when all that information is online. Note that a high citation count is not necessarily an indication of quality, as some widely discredited papers are heavily cited.
I remember that, just in time for my second book on LI, the Physicians’ Desk Reference, a compendium of every drug and all its ingredients, used by every pharmacist, came out on CD-ROM. That made it possible to search for every use of lactose in medications, and there are a lot, because it’s an inactive ingredient that’s slightly sweet instead of bitter. In small quantities it’s unnoticeable. Nevertheless, some people with LI and some people with milk allergies* needed this information, and many more liked having it.
The PDR is thousands of pages long. No way would I ever do the job on paper. Even today I bet it would be easier to do that search on a CD than online.
*Back in those days there had been reports that at least consumer-grade lactose was contaminated with other milk byproducts. Pharmaceutical-grade lactose should theoretically have been pure, but people didn’t want to take chances.
It has become far too easy to tell major, damaging lies with few or no consequences. People are allowed to go on air and say things that, if believed in their entirety, can severely hurt or even kill those who hear those things.
This is likely the biggest strawman OP I have seen for a while. There is no such scientific peer-reviewed study.
Exactly
No, it comes from knowing what sites and journals have rigorous peer reviewing.
All those vids you show also have nothing whatsoever to do with scientific studies.
Correct. Some time ago, I found and posted here a study that showed that HFCS had a lower satiety rating than cane sugar. I mentioned that again in a post here a little while ago. Someone contradicted me with two studies that said the opposite, but both were funded by heavily biased companies. Now, I admit that maybe HFCS isn’t so bad on the satiety front, but those studies were not conclusive proof. I also posted studies saying that MSG does not give headaches, only to be assured that in their special case, it did.
There is also an argument that the Pope is Catholic and that Russell’s teapot doesn’t exist.
If that’s your takeaway from the OP then I’m not sure that you’re on the right page.
That’s certainly good. Anyone could format their paper to look super formal, super scientific, look like it was published in Nature, etc., and just post it to their WordPress blog. That’s how con artists work. They do the work to create a fake that looks too complicated to be a fake. You have to know that it came through review, for real, not just that it looks appropriate.
But I wouldn’t say that’s complete.
We’ve had, on this site, plenty of cites over the years to proper scientific papers that demonstrate that polygraphs are inaccurate and that it’s relatively easy to learn how to defeat them. But, likewise, there’s equal-quality research that says that they’re 98% accurate. We can also assume that the average crook on the street hasn’t learned how to defeat a polygraph.
Many a poster has seen some paper showing poor results for polygraphs and, reading that summation, come here to say that the things don’t work. But if you look at those papers, they’ll explain the methodology and results. “30 test subjects were given a colored marble. We asked them to secretly pull a card from a shuffled deck and, if black, to tell the truth about the color of their marble and, if red, to lie. The polygraph failed to detect liars at x% (a reasonably high number) and mistakenly flagged truth tellers as liars at a much smaller y%.” And now, all of that is true. There’s no reason for a reviewer to not put the stamp of approval on that and ship it out, but let’s say that the title, abstract, and discussion read like:
Are Polygraphs Effective?
Abstract - Polygraph was used to detect lies among 30 subjects. We found x% false negatives were reported and y% false positives. Discussion - Given the high value of x, we find that polygraphs may not be reliable detectors of truth. We recommend further investigation.
And now, this gives a very different picture. If we assume that the method was highly robust then, sure, we can take away from the study that we may as well break out the Ouija board as use a polygraph. But was this a robust methodology? The only hint that it isn’t is the 30 subjects, “may not be”, and “we recommend further investigation”. People and journalists tend to overlook all of those things. “We recommend further investigation” should be ignored and is meaningless in a field devoted to always digging deeper.
If we don’t read the middle, then we don’t know what was actually done and how much weight to take from the paper.
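As an aside on what a headline accuracy figure can hide: even a test with impressive-sounding error rates produces a surprising share of false accusations when actual liars are rare among the people screened. Here’s a quick illustrative calculation using Bayes’ rule; every number in it (the 98%, the 2%, the 5% base rate) is invented for the example, not taken from any of the studies mentioned above:

```python
# Illustrative only: made-up numbers showing how the base rate of liars
# interacts with false-positive/false-negative rates (Bayes' rule).

def prob_lying_given_flagged(base_rate, sensitivity, false_positive_rate):
    """P(subject is lying | polygraph flags deception)."""
    flagged_liars = base_rate * sensitivity
    flagged_truthful = (1 - base_rate) * false_positive_rate
    return flagged_liars / (flagged_liars + flagged_truthful)

# Suppose the machine catches 98% of liars and wrongly flags only 2% of
# truth tellers, but just 5% of the people screened are actually lying.
p = prob_lying_given_flagged(base_rate=0.05, sensitivity=0.98,
                             false_positive_rate=0.02)
print(round(p, 3))  # prints 0.721
```

With those assumed numbers, only about 72% of flagged subjects are actually lying; the other 28% are honest people caught by the small false-positive rate applied to a large honest majority. None of that is visible from a bare “98% accurate” claim, which is exactly the kind of detail that lives in the middle of the paper.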
Let’s also note that peer reviews are still just bringing the research up to the standards of the reviewers. Maybe none of them will have seen anything in the methodology that seems to lower the value of the results. That doesn’t mean that I can’t look at it and say, “Wait, doesn’t polygraph detect stress markers? It’s not a brain reading device. The idea that it can be used as a lie detector comes from the assumption that people will feel stressed out about being asked about something they’re guilty of and afraid of having be revealed. Minus fear and guilt, would you actually detect anything?”
The paper demonstrates that, minus stress, it doesn’t work as a lie detector. That might be relevant for its use against sociopaths, but it doesn’t necessarily apply to the average criminal suspect. It’s good information to know, it’s good that there’s a study of this type, but it’s not the be-all and end-all of research on the topic.
If you don’t read the middle, you don’t really know what was being measured.
If you want your argument taken seriously, don’t start with a giant, glaring strawman.
As said, I’m not sure that you’re on the right page. As I read it, you would just as well complain that cartographers have colored Indiana pink when, clearly, that’s not the real or even an appropriate example color.
Yes well, if you leave out the most important parts of a study by only looking at the abstract and conclusion, then the study is not going to be worth much to you. In fact most papers include the conclusion and hypothesis in the abstract.

I admit the conclusion follows from the premises: the value of a scientific paper depends on thorough methodology and analysis; if you only read the abstract and conclusion, you do not read the methodology or analysis; therefore, if you only read the abstract and conclusion, you do not reap the value of a scientific paper.

I further admit that most citations of studies in news media only mention the conclusion of a study (and, if they’re up to snuff, who performed it). That is because news media and guest experts serve a different audience and a different purpose than the actual science cited. A layperson investigating a study they heard about in the news would probably benefit most from the background section, I think, as it tends to be most informative and often describes/cites the scientific consensus to date.
~Max
This specific skepticism is not directed at your hypothetical study’s methodology. You are actually criticizing the hypothesis.
Here’s an abstract from a real-life polygraph study [1]:
This laboratory study dealt with real-life intense emotional events. Subjects generated embarrassing stories from their experience, then submitted to polygraph testing and, by lying, denied their stories and, by telling the truth, denied a randomly assigned story. Money was given as an incentive to be judged innocent on each story. An interrogator, blind to the stories, used Control Question Tests and found subjects more deceptive when lying than when truthful. Stories interacted with order such that lying on the second story was more easily detected than lying on the first. Embarrassing stories provide an alternative to the use of mock crimes to study lie detection in the laboratory.
The hypothesis is implied by the test described in the abstract, but it is given explicitly in the introduction:
It was expected that subjects lying about their own stories or telling the truth on another person’s story would be classed accurately on the basis of their physiological responses.
~Max
[1] Bradley, M. T., &amp; Cullen, M. C. (1993). Polygraph lie detection on real events in a laboratory setting. Perceptual and Motor Skills, 76(3), 1051-1058.
I don’t believe that I stated a particular target?
And I should note that I’m criticizing readers, not writers. It may well be that, in a real paper, there are allusions and callouts to things that the reader should be concerned about.
In any endeavor where the idea is for the writer to compress and summarize their information (which is, generally, what you do at the start and end of the paper), you’re going to lose something. What that ends up being is going to be a bit hit-and-miss.
As a reader, though, there’s reading as a skeptic and there’s reading as an acolyte. The acolyte doesn’t question things, doesn’t pay attention to nuance, doesn’t try to come up with counter-theories, etc. We should all be reading the middle part and trying to take more away from it than that they did X and Y was the result. Maybe that applies to similar situations, maybe it doesn’t.
To be sure, there’s also the problem of contrarianism for the sake of contrarianism but I feel like that’s more a niche case when it comes to the average consumer of science news.
To be a contrarian, I’ll declare without shame that I don’t always read the middle part, not in full critical detail, yet I do read critically.
I am not an expert on statistics and I am indeed trusting the reliable respected journal to have vetted that the statistical tests chosen were valid. Most of the time I am reading the middle in a focused way, looking to have a specific question about the study answered. Most of my questions and criticisms are formulated well enough by reading the abstract and the discussion sections. The middle is then to follow on those questions.
I’ll also cop to trusting expert consensus on things that I do not have the depth to be expert on myself. A single study is not conclusive of much. The broader body of work that it exists in matters more and experts in the field appreciate that context better than I usually do.
I can delve deep enough to doubt those consensuses only so frequently.
Knowledge is a social endeavor, a team sport. I have teammates I trust for various reasons.
I wouldn’t necessarily say that. I share a number of patents with the US government on a method to distinguish two types of cancer. I also study the differences between those two types, and publish papers about it. Whenever I publish a paper I put in the disclosures that I have a patent related to it. Because I’m supposed to, and because I guess that showing that the distinction is important might make it more likely that people will use my method. But honestly, while there were a couple of years where someone did license it and I got a whopping 600 big ones, it’s unlikely that anything I write is going to make any difference as to whether that will ever happen again, and it certainly hasn’t influenced the way I analyzed the study’s data.
Disclosures are just that. They indicate anything that could relate the finances of the author to anything in the paper. Whether that actually represents a potential conflict of interest is another question, and whether it influenced the research is a further question still.
Clinical trials are almost always done by the company that is trying to get the drug on the market. But that doesn’t mean that the study is biased. It will have tons of oversight by outside observers to prevent bias. And there are any number of studies that are put out in this way that say the drug didn’t do anything. Agreeing to report any negative results is a requirement of getting approved to conduct the trial.
Of course not.
Are you complaining that the OP used a comedically simple example with glaringly obvious flaws in logic to demonstrate how flaws in logic are missed? Because that’s not a strawman.
Or are you complaining that the OP chose God as the example? Because the point was not meant as an actual proof for or against God. That’s not a strawman, either.
I’m not DrDeth, but my complaint was that the OP’s hypothetical did not accurately represent the scientific method and if it had, his complaints (as I then understood them) would not apply.
OP has since clarified, in my eyes, that he is moreso criticizing low information consumers of scientific information than the scientists/experts and studies themselves.
~Max
Correct.
Right, I wasn’t addressing your posts. They are a separate issue.
Ok. I didn’t get that from your posts. You commented there was no such scientific study, which reads to me as taking issue with the topic, not the layout and missing structure. Sorry I misunderstood.
A lot of countries SUDDENLY find product X reeeeeaaal harmful if they’re in a trade war with country Y, which is known to export great quantities of product X, true story.
Just saying.
Does this true story come with a real example?