The article below is a lengthy explanation of some key types of scientific papers, particularly in the health field, and an attempt to rank them by their scientific value.
You’ve covered the most common types of studies and their associated papers or reports. There are also conference papers, which are often precursors to formal journal publication and typically lack the completeness and thoroughness of peer review that journal publications receive.
One additional type of study is an adjunct study, where a particular assumption or component of a study is explored in more depth in order to validate or challenge the parent study. Adjunct studies can often grow into original work in their own right, even though their original purpose was just to support the parent study. Such studies are highly valuable in fleshing out a purported discovery or concept into a practicable application, or conversely, in showing that even if the concept is valid, it is practically unworkable with current capabilities because of material, economic, or other considerations.
As an example, I worked on adjunct studies for the NASA Design Reference Architecture (DRA) 3.0 and 4.0 to look at the space transportation infrastructure assumptions in those DRAs, such as consumable, habitat, and propulsion mass and volume. The conclusions of those adjunct studies fed into the successive DRAs in terms of the required Earth-to-orbit architecture and mass/bulk estimates, yielding a more practicable reference architecture and better-grounded cost estimates. Those studies also indirectly led to another, wholly independent study on a solar-orbiting communications infrastructure to support crewed missions beyond Earth orbit, which is a problem that neither NASA nor any other space program or proposed exploration effort has adequately considered or addressed.
It should be appreciated that a lot of papers are not written to inform the community about some great new innovation but instead are published to satisfy PhD program requirements or the “publish or perish” mentality of modern academic research. As a result, there has been a glut of papers over the last twenty or more years that are essentially a rehash of old ideas dressed up in new verbiage, or survey papers that collect together information that can already be found in reference texts. Even when papers cover original research they often provide interim or inconclusive results that are not actually valuable, or have such a narrow focus as to be essentially useless to anyone not trying to replicate the same study.
Reading papers from three or more decades ago alongside more recent ones is illuminating: older papers are shorter, contain far less flowery verbiage or obscure jargon, and usually get directly to the point. Compare that with papers in today’s expanded publishing market, particularly the “pay to publish” journals, which often do very little diligence or review to assure that the content of a paper is plausible and valuable to the intended audience. And now that generative AI is being used to ‘supplement’ the writing of authors, there are even more garbage papers and even fabricated data (or entire studies); fabrication has always been a problem, but it is becoming both more frequent and more apparent as the tools to uncover fraud have improved.
When you move from scientific research and academia to public health policy [or whatever], then the standard is that it be evidence-based. That is where someone needs to look at the sum of knowledge to decide what is the best vaccination strategy, the most balanced diet, the ways that we reduce skin cancers in kids, and so on. The evidence is typically a huge amount of disparate information, and public policy relies a lot on reviews of different kinds to machete its way through the forest of info and land on findings that are reliable and sound at their core.
One particular category in the medical field is the Cochrane review (often loosely called a Cochrane study). It is a particular type of systematic review of previous studies, aiming to identify where the evidence is actually strong enough to base solid statements of fact upon. Why is this useful? Because many published studies are under-resourced, don’t have enough participants to really say that Treatment X is doing Y, or have other fundamental flaws, meaning that their findings should be treated with caution or that the treatment does not have the claimed level of benefit.
Predatory journals publishing essentially garbage or even completely fake studies are a risk here, because they get vacuumed up in the general assessments of the state of knowledge.
Other disciplines do Cochrane-type studies - there’s nothing magic about the methodology - but some kinds of data probably suit the number-crunching better than others.
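The pooling that such a review does can be sketched as a fixed-effect, inverse-variance meta-analysis. This is a minimal illustration with made-up effect sizes and standard errors (real Cochrane reviews use far more elaborate methods, heterogeneity checks, and bias assessments), but it shows the core point: several underpowered studies, pooled, give a tighter estimate than any one of them alone.

```python
import math

# Hypothetical per-study results: (effect estimate, standard error).
# Each study alone is too small to say much.
studies = [(0.30, 0.25), (0.10, 0.30), (0.25, 0.20), (0.15, 0.35)]

# Weight each study by the inverse of its variance.
weights = [1 / se**2 for _, se in studies]

# Pooled estimate is the weighted average; its SE shrinks as studies accumulate.
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled effect = {pooled:.3f}, 95% CI = ({low:.3f}, {high:.3f})")
```

Note that the pooled standard error comes out smaller than any single study’s, which is exactly why a review can support statements that no individual study could.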
There are also abstracts that are published to publicize a forthcoming longer paper and establish priority. I have one result that first appeared in a one-page announcement in the Comptes Rendus of the French Academy of Sciences, then came out in the Proceedings of an International Congress of Mathematicians as an invited address, and finally with all the details as part of a book, but it could have been a long journal paper.
Here’s an interesting article about how a letter to the editor published in the New England Journal of Medicine helped fuel the opioid crisis (through no fault of its author). Pretty egregious.
Because it was published in the most prestigious U.S. medical journal, its influence snowballed in a dangerous way.
In the 1980s, Hershel Jick, a doctor at Boston University Medical Center, had a database of hospital records that he used to monitor side effects from drugs. Journalist Sam Quinones tells the story in his book, Dreamland: The True Tale of America’s Opiate Epidemic. Something, perhaps a newspaper article, got Jick interested in looking at addiction. So he asked a graduate student, Jane Porter, to help calculate how many patients in the database got addicted after being treated with pain medicines, and dashed off a letter to the New England Journal of Medicine. Its brevity was commensurate with the effort involved. Here it is in full:
Recently, we examined our current files to determine the incidence of narcotic addiction in 39,946 hospitalized medical patients who were monitored consecutively. Although there were 11,882 patients who received at least one narcotic preparation, there were only four cases of reasonably well documented addiction in patients who had no history of addiction. The addiction was considered major in only one instance. The drugs implicated were meperidine in two patients, Percodan in one, and hydromorphone in one. We conclude that despite widespread use of narcotic drugs in hospitals, the development of addiction is rare in medical patients with no history of addiction.
Purdue Pharma, which makes OxyContin, started using the letter’s data to say that less than one percent of patients treated with opioids became addicted. Pain specialists routinely cited it in their lectures. Porter and Jick’s letter is not the only study whose findings on opioid addiction were taken out of context, but it was one of the most prominent. Jick recently told the AP, “I’m essentially mortified that that letter to the editor was used as an excuse to do what these drug companies did.”
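For scale, the arithmetic behind the “less than one percent” claim is trivial; the numbers here come straight from the letter quoted above, and the only point of the snippet is that the rate applies to hospitalized patients under supervision, not to take-home prescriptions:

```python
# Figures from the Porter and Jick 1980 NEJM letter.
patients_given_narcotics = 11_882  # hospitalized patients given at least one narcotic
addiction_cases = 4                # "reasonably well documented" addiction cases

rate = addiction_cases / patients_given_narcotics
print(f"addiction rate in the letter's sample: {rate:.4%}")
```

That is about 0.03 percent, so “less than one percent” is technically true of this sample, while saying nothing about outpatient opioid prescribing.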
If you don’t track down the original letter, it’s not obvious how brief and narrow its findings truly are. (Until NEJM put its full archives online in 2010, the only way to track it down was to find a physical copy in an academic library.) And it seems like many people didn’t. As Quinones writes in his book, the brief letter became a lot grander in retellings:
One researcher, writing in 1990 in Scientific American, called Porter and Jick an “extensive study.” A paper for the Institute for Clinical Systems Improvement called Porter and Jick “a landmark report.” Then the final anointing: a 2001 Time magazine story titled “Less Pain, More Gain” called Porter and Jick a “landmark study” showing that the “exaggerated fear that patients would become addicted” to opiates was “basically unwarranted.”
There is a perhaps understandable source of confusion. Scientific journals pride themselves on peer review—where outside experts evaluate research—but the Correspondence section that published Porter and Jick’s letter doesn’t usually follow the same standard. It makes some sense. The Correspondence section usually consists of short letters to the editor or somewhat informal observations from doctors and scientists. Porter and Jick’s letter appeared alongside others like, “Bacteriuria in Schoolgirls” and “Problems with Peppermint-Flavored Lidocaine.”
There’s a scene in the show Dopesick of the federal prosecutors trying to find the “research study” in the hardcopy NEJM issues spanning the time frame and failing before realizing it’s a letter to the editor.
There are a couple of other types. One is a survey paper, which looks at the state of the art in a field, not only listing and summarizing papers there but putting them in some kind of context.
Major journals have commentary on what they consider to be significant papers being published. I’m not sure if they count as a paper, but my daughter did one for Science, and it got reviewed very thoroughly.
But more important are a paper’s citation count (how many later papers have referenced it) and where it was published; the journal’s impact factor, roughly the average citation rate of its recent papers, stands in for the latter. There is a hierarchy of journals in all fields, with lower-ranked journals not getting referenced as often and usually not doing as thorough a job in peer review. There is a suspicion that a paper in one of these might have been rejected from a better journal.
As mentioned, conference papers rank below journal papers. Conference papers get one review cycle, and in my experience no one checks that the requested changes to accepted papers were made. (No time.) Papers submitted to journals might go through several review cycles until all the reviewers are satisfied. My field also has workshops, for first thoughts, where the reviewing is fairly superficial and done by the program committee. Those don’t even usually count as publications, and can be submitted to a conference or journal with minimal changes.
I appreciate the reply, but is that different than the omnibus example I used in the OP?
Like for me, I sometimes like to research medical conditions and I will find papers that look at 10-20 different intervention ideas into a disease.
Is what you describe here something different? I get the impression the omnibus papers I’m talking about use all evidence, old and new, while yours may only involve the most cutting-edge research. Since your area of expertise is in technology, I seriously doubt anyone cares what worked in 1980, while in fields like medicine, what worked in 1980 will still work in 2024.
In machine learning, by far the most important papers are preprints, which are theoretically supposed to be preliminary results that will get fleshed out later in a conference or journal paper. That still happens for some, but if the conference is eight months away, there will already have been something like five follow-ups to the first paper by then. The field just moves too fast.
It might be useful to think of a continuum rather than discrete kinds or types of paper, although what you call something will have some effect on where and how it may be published and considered in future.
Regardless of discipline, there is a more-or-less inverse relationship between time to prepare and review and the inferred quality. A paper that gets peer review and multiple rewrites and rejections should have fewer flaws than, say, a letter on opioid addiction. The time required for depth and quality assurance means there is a trade-off, with timeliness and newness being needed more in some fields and situations. The explosion of covid studies in 2020-23 is a clear example. Much of that research was issued as preprints and arXiv-type articles, and never went further, or may contain one useful idea buried in a sea of words. The timeliness is offset by the often brain-fart, unthought-out, methodological sketchiness of the product.
Disciplines also differ in how single-focus, in-depth case studies relate to broader, multi-strand surveys or data trawls. I’d assume in most disciplines the two types of study work in tandem, and researchers mentally shuttle between the particular and the general all the time. In my field, in-depth single-site studies provide data against which theories and broad-brush explanatory frameworks need to be tested; conversely, the single site is understood in the context of the bigger picture and contributes to our desire to better explain reality.
I took the omnibus paper to be considering the different ways of addressing a particular problem, while a survey paper is covering the full extent of work in a field, not necessarily tied to a particular problem, and often about several related problems.
Here’s an example. When I was in grad school I worked on microprogramming, which is implementing the instruction set of a computer by writing code with instructions at a lower level. I was involved in two survey type articles. One was looking at the particular problem of making microprograms smaller. The other was a survey of papers on the general field of tools for microcode generation.
It’s not surprising that most papers in medicine are of the type you describe, since the field is too big for a survey paper of my second type. Back 50 years ago people did surveys of software engineering, but that’s impossible today.