What is a peer review journal?

I have to read and review several peer review journals. I’ve located them on the web and read each of them. What I am wondering is: what exactly is a peer review journal? I thought that they were scholarly articles that were sent to other peers and reviewed from there. Upon review they were submitted and published. If the peers didn’t agree with the content, the articles were sent back to the author with notes on needed revisions. Am I right, or am I way off base?

That’s pretty close. Prepare to be bored when you read this stuff!

The procedure is usually something like this. An author submits a paper to the journal, where it is assigned to one of a number of editors. The editor then sends the paper to several reviewers (three is minimal, five is good, more wastes reviewer time) along with a form to fill out evaluating the paper. Reviewers are chosen from a pool based on knowledge of the subject matter while avoiding conflicts of interest. The forms typically have a numeric scale rating originality, technical correctness, readability, and several other things. There is also room for comments, both to the author and private comments to the editor. There is typically a pass/fail recommendation, and a scale indicating how confident you are in your own review. (Sometimes you get papers in areas you are not as comfortable in as others.) The editor gets the reviews back (usually electronically these days) and makes a decision.

Now, for time-critical things such as conferences or special issues, that’s it. For less time-critical journals the editor can reject outright, accept as is (with minor changes), or send the paper back to the author for a rewrite, based on the reviewers’ comments. The reviewers then get the revised paper back, along with all reviewer comments, and get to re-review it to see if they are satisfied.
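Just to make that flow concrete, here’s a rough sketch of the sort of bookkeeping involved. This is purely illustrative - the field names, scales, and thresholds are all invented, not any real journal’s system:

[code]
from dataclasses import dataclass
from statistics import mean

# Hypothetical review form, loosely following the description above:
# numeric scores, comments for the author and the editor, a pass/fail
# recommendation, and the reviewer's confidence in their own review.
@dataclass
class Review:
    originality: int          # e.g. 1 (poor) to 5 (excellent)
    correctness: int
    readability: int
    recommendation: str       # "accept" or "reject"
    confidence: int           # how strongly the reviewer stands behind this
    comments_to_author: str = ""
    comments_to_editor: str = ""

def editor_decision(reviews, time_critical=False):
    """Toy decision rule: weigh each recommendation by the reviewer's
    confidence. Time-critical venues (conferences, special issues) only
    accept or reject; journals can also ask for a revision."""
    score = mean(r.confidence * (1 if r.recommendation == "accept" else -1)
                 for r in reviews)
    if score > 1:
        return "accept"
    if time_critical or score < -1:
        return "reject"
    return "revise and resubmit"

reviews = [Review(4, 5, 3, "accept", 4), Review(2, 3, 4, "reject", 2)]
print(editor_decision(reviews))        # journal: "revise and resubmit"
print(editor_decision(reviews, True))  # conference: "reject"
[/code]

The real decision is a judgment call by a human editor, of course; the point is only that it rests on a handful of scored, confidence-weighted recommendations rather than a straight vote.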

I won’t go over the quirks in the system, unless someone is interested, but it works pretty well. I’ve done way too many reviews in my life, and I have sent out and read even more.

See also http://en.wikipedia.org/wiki/Peer_review. Voyager, what field are you in where three reviewers is minimal and five is considered good? In biology, two is the most common number, with three next in line and five unheard of.

In Comp. Sci. three reviewers is the norm, with the editor hoping to get two back.

While Voyager explains how it is supposed to work, in Real Life the editor can basically rubber stamp or kill the paper by sending it to certain reviewers. So in some areas (cough AI cough), buddies send their papers to buddy editors who send the papers to buddy reviewers and everyone gets a huge number of worthless entries on their vitas. If you make the mistake of sending to an editor that hates your area/advisor/buddies, then you get the Referee Reports From Hell.

There’s a lot of Bad Stuff that can happen in the process. I have personally written referee reports where I cite an earlier, better result and the paper is still accepted without change. No cite added, nothing. The best I can then do is just not referee for that editor again. It also appears to be impossible to stop people who have made a career out of stealing other people’s work.

I don’t think there’s much more to it than, well, a journal whose articles are reviewed by the authors’ professional peers before acceptance and publication. How exactly that process works depends on the journal (PNAS has a somewhat unusual system of sponsorship, for instance), but typically it’s just as you describe: one submits, the paper gets sent out to reviewers (usually three or so), and they either accept without revision, request revision before acceptance, or reject.

I have been in computer architecture and now I’m in design and engineering. For the conference I’m running this year we require five returned reviews for each paper - and I bugged the people on the program committee until I got them. Oddly enough the archival journals get less. I’ve never seen a paper with only two.

But we don’t churn out papers with quite the frequency of biologists (my wife got her graduate degree in biology), which might be the reason. A lot of the papers we get are from industry, where publishing is not quite so important.

Maybe your real life, not mine. I’m sure that this happens, but I’ve never experienced it. I have seen conflict of interest situations, but they’re rare. However, I’ve seen dumb reviewers far more often than evil editors, both liking and hating papers. I’ve seen reviewers so off-base that I wrote that the author could ignore that review. I’ve also seen senior reviewers miss flaws. That’s why I like more reviewers - not everyone reviews with the same level of intensity, especially under time constraints.

In your case, I trust the author got your comments, because I’ve never heard of editors not sending reviews to the authors. But if the field is AI, I believe anything. It’s astonishing how little real progress has been made in the 32 years since I took AI that can’t be explained by faster computers and better algorithms. What is the latest fad, anyway? I’ve been through planner, frames, neural nets, expert systems, AI hardware, until I lost track.

There’ve also been a couple of rather dubious cases where reviewers have been known to denounce a work strongly to the editor so that it doesn’t get published, and then turn around six months later and propose essentially the same idea. There have been quite a few rifts formed that way.

A lot of the work in AI has been broadening rather than deepening. AI now covers fields such as learning, vision, robotics, control, text classification, cognitive psych etc. The basic toolbox is still there but it’s being applied to a wider range of areas.

Just to clear something up, in my previous post only two sentences were referring to AI. (But the same thing happens in a lot of other areas, e.g., “experimental” CS fields.) After the paragraph break, well, it’s a paragraph break, ergo new thought.

My own area is generally part of Theory but with heavy overlap in Systems.

  1. Conference reviewing is generally much different than journal reviewing. Far less time is spent reading the paper, and the reviews are much shorter and less detailed. And most importantly, it’s just a go/no-go decision. No opportunity to revise and resubmit. Authors are completely free to ignore suggestions from reviewers.

  2. Conference program committees in CS are the strangest thing imaginable. Absolutely without scientific standards of any sort. Terrible papers are accepted and great ones rejected, almost always based on the fame of the authors.

I saw a lot of bad things happen during the last program comm. meeting I was at. To give a not-so-evil example: the paper that cited my work far more than any other was not assigned to me to review. I had only a small opportunity to interject a few basic comments about it. Fortunately it was rejected but later got accepted at another conference (without any of its problems fixed). And there are about a dozen worse stories than that. (The Best Student Paper award went to a paper where the main formula had units that made no sense. I.e., similar to expressing speed in oranges per kilogram. It should have been killed instead.)

  1. I regularly find serious errors in papers that the other referees miss. (I usually get a copy of the other reviews and the editor’s letter to the authors.) So I have single-handedly saved several people from embarrassment. Never a thanks, of course. One editor was handling his own paper. (So does anyone still believe that conflicts of interest don’t happen?) He sent it to me and insisted on a rush job. It had an obvious fatal flaw. Took some work to convince him how wrong it was.

That is a problem for conference papers. I’ve tried to get my program committee to get a draft of the revised paper and check it against reviewer comments, but I know it won’t be universally done. Of course authors can never respond to all reviewer comments - isn’t it great when you get two absolutely contradictory comments? Then you can do whatever you want.

We’ve had debates about blind reviewing, where the author’s name is removed. Of course most good reviewers can figure it out. We have some papers from more working-level industry people, and take summaries as well as full papers (with the warning that a summary has a much greater chance of being rejected). Reviewing a summary is a matter of faith, and it is thought that knowing who you are believing in is helpful. My area was more academic, and I always got full papers.

I know that the issue of famous authors getting accepted unfairly always comes up, but I know from the statistics that a lot of famous authors get rejected too - and I’ve had fights with some about it.

Does your program committee do the reviews? I’ve been on some that work that way, but usually only for workshops or conferences that take abstracts, not full papers. Five reviewers help with this, since even if I wanted to be biased I don’t think I know five reviewers for a paper with an agenda. We also, at the PC meeting where papers get accepted or rejected, go over each one using a scoring system and look for outliers - papers with high marks that have been rejected, or papers with low marks that have been accepted. Those outcomes are allowed, but the PC member owning the paper has to explain the decision to 40 of his or her peers, many of whom are experts.
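For what it’s worth, that outlier check is simple enough to script. A rough sketch - the paper names, score scale, and thresholds here are invented for illustration, not anything our PC actually uses:

[code]
from statistics import mean

# Hypothetical records: per-reviewer scores (say 1-10) plus the
# committee's accept/reject decision for each paper.
papers = {
    "paper-17": {"scores": [8, 9, 7, 8, 9], "decision": "reject"},
    "paper-42": {"scores": [3, 4, 2, 5, 3], "decision": "accept"},
    "paper-88": {"scores": [7, 8, 8, 6, 7], "decision": "accept"},
}

def outliers(papers, high=7.0, low=4.0):
    """Flag decisions that contradict the scores: highly rated papers
    that were rejected, and poorly rated papers that were accepted."""
    flagged = []
    for name, p in papers.items():
        avg = mean(p["scores"])
        if p["decision"] == "reject" and avg >= high:
            flagged.append((name, avg, "rejected despite high scores"))
        elif p["decision"] == "accept" and avg <= low:
            flagged.append((name, avg, "accepted despite low scores"))
    return flagged

for name, avg, reason in outliers(papers):
    print(f"{name}: mean score {avg:.1f} - {reason}")
[/code]

Anything this flags isn’t automatically overturned; it just means the PC member who owns the paper has to defend the call at the meeting.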


Well, unless they pay us to do reviews, that’s going to happen. When I was in grad school a paper was published in IEEE Transactions on Computers “proving” that a linear-time algorithm gave optimal results for an NP-hard problem. The letters flew after that, of course. The odd thing was that, while writing a survey paper on this area, I had a hard time not finding a counterexample to optimality, even for small cases.

And laziness. Put me in that crowd. I pretty much summarily avoid reviewing now, because I simply hate doing it. Anyway, it’s not like it’s some special distinction to be asked to review articles anymore, because the volume of submissions is so enormous in cellular and molecular biology. Pretty much all it takes is getting a couple papers accepted to a particular journal.

I made the mistake of reviewing a couple papers very carefully and doing (what I thought was, anyway) a good job. I accepted one with revision, and rejected the other. My comments were in line with the other reviewers. Boy, were the floodgates opened then. I almost wonder, in retrospect, if writing a couple crap reviews would have given me more immediate relief, but in the end I just had to resort to saying “NO” all the time and begging off due to “massive workload” (which was a load of something else, but like I said, I’m lazy). Reviewing gets real old, real fast, and I have to guess that some “dumb” reviewers simply didn’t read the article carefully. That’s no excuse, of course; if you don’t want to do the job well, don’t do it, I say. And that has been exactly my approach.

Isn’t it “peer reviewed journal”? Or maybe “peer-reviewed”?

Yes. I’d go with the hyphenated form.

For our field*, three is more of a max than a norm. Two is sufficient, and in some cases one is used. We make every effort to find three or more reviewers who can look at the manuscript within a specific time frame; often, they cannot (or the paper is not within their area of expertise).

Our topical editors reject something like a third of the submissions. Sometimes, that seems like a low number.

The author is blinded to the reviewers’ names so that, in theory, an author’s pals can’t just give him a big vote of confidence. However, scientists gossip as much as the rest of us. Even so, accusations of plagiarism and other corruptions are pretty rare.

*I won’t say what the field is, precisely, but once upon a time we nearly published an article CalMeacham had authored. :smiley:

I’ve done reviews, and also published papers. In my area (law), the journals I’m familiar with do double-blind reviews: reviewers don’t know who the author is, author gets anonymous comments back from the reviewers. Only the editors know who the author is and who the reviewers are. Seems to work pretty well.