Is Peer-Review Valid?

Well, the submitter receives the comments of the referees. If people were submitting papers to major journals and not receiving any review, word would get around. (Unless the journal is actually writing fake comments and pretending they’re from real referees – but why would they? That just seems like more work for the journal.)

Perhaps with these less-legitimate journals, the submitters know they aren’t getting reviewed, and don’t care.

It depends on the field and journal.

Not much of a hassle to hire a couple of minimum wage “reviewers” to “review” the various papers.

Must… write that down… done.

Thanks! Are the reviewers’ names released if the piece is published?

What kind of economics have you looked at? Economic policy decisions are indeed based on political issues, but the impact of making a policy decision should not be. In fact, a major weakness of micro is the assumption of homo economicus, who makes rational decisions at all times. Behavioral economics has demonstrated that no such thing exists, and to properly predict the impact of decisions you need to properly model how people will react to them. Yes, people get involved, but that introduces more and messier inputs, and does not reduce the mathematical rigor.
Like weather forecasting, economic forecasting is chaotic, only more so.

Only problem with that is that experts in the field will read the journal, note that many or most of the articles in it are crap, and refuse to send papers there or review for it. They’ll stop subscribing, get their library to not subscribe, and as word gets around everyone will learn that having a paper published there is approximately as useful as publishing it in stall 3 of the local restroom.

Journal quality is seen as proportional to the rejection rate. That discourages such nonsense.

Not for that piece, in my experience. Reviewers for a conference are listed in one block without saying who reviewed what. I’m not sure if all journals do that.

It’s sort of self-correcting. If it’s not happening properly, the quality of the papers will drop, and people will notice. If something fishy gets published in a supposedly high-quality journal, word gets around, and it can be an embarrassment to the journal. The problem is not with the actual good journals - they’re all doing it right. It’s the proliferation of these “journals” that wrap themselves up in the veneer of respectability to hide the fact that they’re garbage. People who work in the field generally know the difference, but the general public and especially the media can be fooled.

Consider the morons who recently claimed to have “published” their Bigfoot genome in a peer-reviewed journal. Which, it turned out, they had founded. And which had published exactly one paper.

Nope. It’s kept anonymous to encourage reviewers to comment honestly.

When I was in grad school, no, they were not (it may be different in other fields).

That said, the idea that the reviewers are anonymous is really an ideal, not reality. In many fields, the pool of qualified reviewers is so small that you get to learn their writing styles and the kinds of suggestions they make. When I got the reviews back for my first paper, the postdoc helping me out was able to identify all three reviewers in under a minute from the writing style, the suggested works to cite, etc.

Seems like it might need to be made transparent to encourage reviewers to actually read things.

Without pushing this hijack too much further, that’s precisely the problem: you can have mathematical rigor like a skyscraper’s frame and still have zero objective value. There is, IMHO/IME, in too many fields, a sense that elaborate mathematical exercises transform subjectivity and vagueness into rigorous results. It’s GIGO, or more accurately FIFO (fuzziness in…) and no page of equations can change that.

“Jane is a woman. Women are brainless prats. Therefore Jane is a brainless prat.” - absolutely rigorous logical sequence, unchallengeable in terms of logic. But complete nonsense in every other way (starting with the fact that no one names their kids Jane any more). Economics sets its own precepts and rules, orders and analyzes selective data according to those rules, and - guess what! - has an internally rigorous framework! Yay!

I understand that in some fields the author is asked to recommend reviewers. Other reviewers are added by the editor. Then you’d have a clue. Such a system is not used in the journals and conferences I’m involved with.

Let’s forget economics and look at this. This is exactly the flaw in lots of work - I’ve seen it in engineering too. In fact I devoted some slides in my engineering economics tutorial to this fallacy. I’ve seen people graph two very noisy curves, plot the intersection, and declare that point to be exactly where the method becomes worthwhile or stops being worthwhile. Drives me nuts.
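A quick way to see why that’s dangerous is to simulate it. Here’s a minimal Python sketch (all the numbers are invented for illustration): two straight lines that truly cross at x = 5, each observed with noise. Refit the lines and re-solve for the intersection many times, and the “exact” crossover point wanders noticeably from one noise realization to the next.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0, 10, 50)
# True underlying lines: 3x and 10 + x, which cross at x = 5.
crossings = []
for _ in range(1000):
    # Noisy observations of each curve (noise scale is made up).
    ya = 3.0 * x + rng.normal(0.0, 4.0, x.size)
    yb = 10.0 + 1.0 * x + rng.normal(0.0, 4.0, x.size)
    # Fit a straight line to each noisy curve.
    sa, ia = np.polyfit(x, ya, 1)
    sb, ib = np.polyfit(x, yb, 1)
    # Solve sa*x + ia = sb*x + ib for the estimated crossover.
    crossings.append((ib - ia) / (sa - sb))

crossings = np.array(crossings)
print("mean crossover:", crossings.mean())
print("spread (std):  ", crossings.std())
```

The average estimate lands near the true crossover, but the spread across runs is substantial - exactly the uncertainty that gets thrown away when someone reads a single intersection off a plot of noisy data and treats it as a precise decision threshold.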

Years ago I saw an argument that successive generations of engineers were relying more and more on computer-based calculation and modeling, and thus losing touch with the “meat” of their specialty. That is, an old-school civil engineer could look at a blueprint and know in his gut that something was wrong, whereas a newer graduate would trust the automation and have no sense of the problem.

You can push that argument around a lot of ways, but I have enough old-school experience in a few things, and have worked with enough thick-knuckled old engineers and builders, AND with enough geek engineers, and have seen both miraculous cases of “Wait, something’s wrong” and smug “It’s all been analyzed” to believe the gist of it.

I can see that such thinking - and training that emphasizes only advanced tools - could lead to a misunderstanding of how precise calculations map onto a real-world solution.

That’s supposed to be part of the job of the editorial board - keep track of “good” reviewers and “bad”. But really, this whole thing is a massive volunteer effort. People chip in because it’s expected as part of being a useful member of the scientific community. It’s part of the culture. I don’t think that trying to impose massive oversight, sets of rules, rankings, and box-checking is ever going to work. It’s not a perfect system by any means, but despite stories like the ones that started this thread, it’s pretty darned good.

You don’t get paid for peer review?

We wish!

Heh. Good one.

Reviews in my wife’s field are anonymous. However, after a while you can tell who is reviewing you by the comments.