Is Peer Review Valid?

Or you think you do. As I mentioned, some advocate that author names be removed from papers. A common response is that this is useless, since most reviewers will figure out who the author is anyway. (Much more so than in the case of reviews.) But the claim was that when this was actually measured, reviewers were often wrong about authorship.
I’ve read lots of reviews with names attached. They are mostly too short for stylistic quirks to show up, and of course they do not mention the specialty of the reviewer. Supposedly, if a reviewer demands that a certain paper be added to the references, it means that he is the author of that paper, but I often recommend obscure papers by other people.
A reviewer can give away enough information to identify herself, but that is not common.

Anecdotally, the joke is that most reviewers give themselves away by insisting that several of their own papers be cited by the paper under review.

True. To go further into detail - when there is a pissing match among academics around certain theories (or pet theories), you can usually figure out who did the review based on how they bring certain things up in the comments. This is especially true the narrower the field is.

That is a real problem with peer review. An editor can give the paper to reviewers on the side of the author, and get high scores. Or the editor can give it to reviewers on the other side, and get low scores. Or the editor can give it to both sides, and get a bimodal distribution of scores.
While an accept/reject decision should involve two levels of review, the editor has a lot of power. I told one author to ignore a certain review, when it was clear that the reviewer had no clue as to what the paper was about (to my surprise).
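A toy illustration of the reviewer-selection point above (the camps and score ranges here are invented purely for the example, not drawn from any real conference data):

[code]
import random

def simulate_scores(pool, n_reviews=6):
    # Toy model: each review score (1-10) depends on which camp the reviewer is from.
    # The ranges below are made-up assumptions, chosen only to show the effect.
    ranges = {"ally": (7, 10), "rival": (1, 4)}
    return [random.randint(*ranges[random.choice(pool)]) for _ in range(n_reviews)]

print("allies only:", simulate_scores(["ally"]))            # scores cluster high
print("rivals only:", simulate_scores(["rival"]))           # scores cluster low
print("both camps:", simulate_scores(["ally", "rival"]))    # a bimodal mix of highs and lows
[/code]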

There is often a category called Interest, about how interesting the topic is. If you give the paper to people in that little specialty, you get high marks on this even if they hate the paper in other ways.

I will leave the actual certification and audit standards to the fields. But this is hardly inventing a new wheel; it is modifying an existing one to fit a particular vehicle. (Side note - I wish something similar existed for ‘journalism’ sites as well as academic journals.)

They are not currently subjected to the process, but there is no reason they could not be. I think a related issue is: who is the audience? Other researchers, laymen, or new students? My perspective is as a layman. I can choose to read one good book on a topic that presents the issues clearly, or twenty journal articles that each deal with one issue in depth but not in connection with other issues. I prefer a more holistic view when approaching a subject. On some topics I will read the twenty papers, but not for most.

It should not be a duality, but the process right now only endorses one side - and I don’t think the process is generating the quality its participants think it does - hence this thread in the first place. I think the whole publish-or-perish phenomenon is denying accessibility: there are too many papers for new grads to get up to speed, and it discourages too many from seeking academic positions. I’m tempted to put myself in that camp. I may pursue a Master’s, but I have no desire for a PhD.

Again, I bring an outsider’s perspective. I have done enough vendor selection and program development to understand review processes, though not in this particular area. The certification process is not validation for insiders, but for the rest of us. It would be helpful so that when a person broadcasts something on social media, others can determine its worth. Anything that would make it easier to debunk garbage science should be encouraged.

[QUOTE=Smeghead]
It’s sort of self-correcting. If it’s not happening properly, the quality of the papers will drop, and people will notice. If something fishy gets published in a supposedly high-quality journal, word gets around, and it can be an embarrassment to the journal. The problem is not with the actual good journals - they’re all doing it right. It’s the proliferation of these “journals” that wrap themselves up in the veneer of respectability to hide the fact that they’re garbage. People who work in the field generally know the difference, but the general public and especially the media can be fooled.
[/QUOTE]
Precisely. It might be self-correcting for insiders, but not for the general public. The signal-to-noise ratio is horrible and only getting worse, especially as research expands beyond the European/US core and gets increasingly specialized.

It might be time to move past using volunteers and establish a more professional review process. Until very recently, the scientific community was small because the global population was also small. As that has exploded, so has science, and instead of a few thousand practitioners, disciplines can now have millions. The current system worked fine with the smaller population, but it is breaking down under the strain of the larger one.

It’s been widely pointed out that the study / prank in the OP had a fatal flaw - no control experiment. The guy should have submitted his manuscript to Nature - popcorn opportunity of a lifetime if they had even sent it out for review.

I do feel he’s done good work, though, in shining a torch on these parasites at the bottom level of open-access publishing.

Assuming the journal even cares about embarrassment…

Case in point, about a year ago I helped review a paper that was absolute shit. Their methods were wrong in ways that still make my head spin. The single positive “result” came from some sloppy images, squinting, and wishful thinking. If those were my results I would be ashamed to put them on a poster or department presentation, let alone a publication.

I have PubMed alerts on the general topic of my research, which picked up the exact same paper, without any revisions, published in an obscure medical journal that I’ve never heard of. My most charitable possible assumption is that the journal editors know absolutely nothing about the basic science they publish. It’s more likely they’re just another scam operation.

Perhaps more importantly, these bogus publications can fool plenty of government bureaucrats, deans of minor academic institutions, or hiring managers in industry.

Actually that is pretty much the opposite of a fact. This very rarely happens. It does nothing for anybody’s career (or their chance of holding onto one) to replicate a result that someone else has already published, and if you fail to replicate it, well, negative results hardly ever get published.

Yep. Just try to submit a grant proposal to get money for experiment replication and see how far you get.

But it’s not as hopeless as that suggests. “Replication” sort of does happen in the sense that people build off of one another’s work. You might publish that protein X is involved in process Y, and that might prompt me to look at X in the context of Z, in such a way that if you were mistaken, my experiments won’t work at all, and I’ll be really pissed at you. So important results are verified because they get built on, and those results that are ignored are, well, ignored, so it doesn’t really matter if they’re right or not.

We do have certification. It is called getting a PhD and in many fields doing a postdoc at a good school.

I review technical books for an IEEE magazine. Books are great at bringing people up to speed on an area, but terrible at advancing the field. Their publication cycle is too long, the audience required to make them financially feasible is too big for specialized research results, and the effort required to write them is too great. The stuff I review requires a good background, but not the kind needed for a transactions paper. People like you are served by popular books, which are important but which again don’t advance the field.
In the latest IASFM, Robert Silverberg quotes Isaac Asimov from 1955 complaining that there were already too many papers in biochemistry to keep up with. Keeping up is hard, which is why part of a reviewer’s job is to recommend papers that the authors might have missed. In my field there is a 10-15 year time window past which papers disappear. One of the things I do, being old, is tell authors that they should look at a 20-year-old paper that did what they are doing. That has happened more than once.

I’ve done vendor selection also, and vendor selection and grant reviewing are not even remotely comparable. Vendor selection involves evaluating how well a vendor meets a set of requirements, today. Grant reviewing involves evaluating whether the subject is worthwhile, and the probability that the applicant can get there. Think of what you’d say if a vendor said “pay us now, and we probably will have the capability for you in two years.”
Social media is not the place where results should get broadcast. Neither are press conferences. Maybe after paper acceptance, but not before. (That was a sign something was fishy with cold fusion.) Journalists are the conduit between science and the general public. And it is a very tough job. They have to both understand a very wide range of technical areas and make it understandable. When I’ve been interviewed by our trade press I’ve prepared by writing sound bites in advance at the level the reporter works at.

My conference gets 250 papers, each of which gets 5 reviews, and the average reviewer does no more than 3 reviews. That is about 400 reviewers. Sorry, we can’t afford to pay them any amount of money that would make a difference. I get a shirt for reviewing at one big, formerly rich conference, but that’s an exception. While there are some for-profit journals, most in my field are associated with non-profit organizations, which don’t make any money on the journals.
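For what it’s worth, that headcount checks out; a trivial back-of-the-envelope calculation using only the numbers in the paragraph above:

[code]
# Rough reviewer headcount, using only the figures quoted above.
papers = 250
reviews_per_paper = 5
max_reviews_per_reviewer = 3

total_reviews = papers * reviews_per_paper             # 1250 reviews to hand out
reviewers_needed = total_reviews / max_reviews_per_reviewer
print(reviewers_needed)                                # ~417, i.e. "about 400" reviewers
[/code]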
A layman who wants to evaluate findings is going to have to put some effort into it.

A friend of mine, when he was doing his dissertation in Physics, discovered that a well-accepted result was wrong. I don’t know if he got a paper out of it; he certainly didn’t get a PhD out of it, since it held him up for a year.

Anything important will be replicated. 90% of the stuff isn’t important. (If not more.) Kind of like a scientific Sturgeon’s Law.

I cannot imagine how a paid, dedicated review system would work. The people doing the reviews right now are the professionals. If they stopped doing that work and instead became permanent reviewers, they would, over time, lose the ability to review the work.

Also, the expansion of journals and other means of disseminating one’s work would argue against the ability to fund enough professional reviewers to cover current and future needs.

Peer review works pretty well. I haven’t really heard any strong arguments for undoing it in this thread.

On the issue of blind reviewing, there are times when both the reviewer and the authors are blinded. Of course, just as reviewers can undo the blinding in their comments and suggestions, there are details about particular authors’ work that can give them away no matter how much effort is put into removing identifying information.

I prefer blind review. I wouldn’t want to have to think at all about the politics of my review.

Also, I know that there are lots of people who insist they can discern who a reviewer was. I have a colleague who does that all the time. Most of the time, I suspect it’s bullshit, and if things are truly blinded, you can’t ever confirm your suspicions.

My book reviews are not blind, and I usually know the authors. You can be sure I think about this during the review process. The books have been reasonably good, but if I ever got a stinker I might well just not review it. While paper reviewers can likewise decline a review, a book is already out, so your review is not going to change much, whereas a paper whose negative reviews were eliminated might get published when it shouldn’t be.

Well, yes, provided that one understands that “replication” means what Smeghead said -

  • and almost never means doing the original experiment (or even some more rigorous, but otherwise similar, version of it) over again, as DrCube implied (and which is probably what most non-scientists think “replication” means).

Even so - I stand by the contention that 90% of published work will never be followed up on, because it is too specialized or too uninteresting.
As for replication in the sense of actually repeating an experiment just to repeat it, that would be way less than 1%.

Quite possibly. If/when my PhD work finally gets published, I expect it will prompt mild interest in maybe a half-dozen people, after which it will promptly be ignored.

The whole subfield I did mine in crashed and burned. (I have an alibi!)

What the article fails to mention is that exactly zero “real scientists” will take something published in the Journal of Natural Pharmaceuticals seriously.

The analogy to newspapers is a good one: no one thinks investigative journalism is completely bogus just because the Weekly World News will publish stories that are obviously absurd. Any field has journals that are the equivalent of The Times or the Wall Street Journal, a few that are more like The Daily Mail or National Enquirer, and some (like the journal in question) that are like the Weekly World News. A scientist in that field will be able to distinguish a legit journal from a BS one as easily as an average person can determine the relative journalistic integrity of a newspaper.

Thank you, and everyone else who’s offered their opinion. I guess my confusion here has largely to do with the reliability of peer review. I understand that my friend, who seems to think that peer review is worse than professional opinion, is totally wrong, but I don’t quite understand what to think about it. Like, I hold that if a scientific study is published in a reasonably reputable journal and I can’t find anything denouncing it as crap, then its conclusions can be used as a reasonable source. Am I stretching too far?

You’ll probably be fine, as long as you know what you’re looking at and how it fits in with the field at large. It’s actually pretty rare for a single paper to conclusively prove some large point. As an example, nutrient runoff from agricultural fields has caused significant eutrophication in estuaries on the east coast of the US. This is about as sound a fact as you can have in ecology; it’s a given for any coastal or estuarine ecologist or oceanographer. Yet finding a single cite for that sentence can be difficult, because it isn’t based on one single study but rather on the synthesis of thousands of different studies. Real science isn’t so much “Eureka!” as it is a slow, methodical grind.