How do you know if something's peer reviewed?

I’ve been doing research on environmental problems in Taiwan, and I’ve been looking at a lot of journal articles, but it just dawned on me that I don’t know if something is necessarily peer reviewed. If something is in a journal (like, say, the Journal of Asian Earth Sciences) does that automatically mean it is peer reviewed before being published?

If it’s in a peer-reviewed journal, then it’s peer-reviewed. Not all journals are. And even if it is peer-reviewed, that still doesn’t necessarily mean that it’s reputable, since for some “journals” the peers reviewing it are just a bunch more crackpots or shills.

The best way to tell if it’s reputable is to have a few reference journals that you already know are reputable, take a look at a few papers in it, and see what journals they reference. As a general rule, papers in reputable journals will only reference other reputable journals. If you don’t even know enough to do that, then your best bet would be to find someone in an appropriate field at a university, and ask them.

On the first page of the article there should appear a date of submission and a date of acceptance. If they are the same day or within a couple of days of each other (cf. Medical Hypotheses), there was no peer review. If the dates are a couple of months apart, then the submission went through some sort of review.

Look up the journal on-line and look for a list of reviewers. Legitimate journals that publish high-quality work will not only list the reviewers, but also the editors and board members (cf Clin Chem (AACC), Virology (ASM), Amer J Path (ASCP), PNAS). Beware of journals where the editors/board members are also the only ones who submit articles: they might go through a review process, but it isn’t one that would reject bad science or results. Also pay attention to the Impact Factor. The higher the number, generally the better and more authoritative the journal is. I once ran across a journal with “an estimated IF of 2.2.” It was a bad science/pseudoscience journal that published articles that no one else would.


Sorry to disagree with Vlad, but no journal I am aware of publishes a list of reviewers. Maybe it is different in his field. But the list of editors will tell you a lot, and they should also state that articles will be reviewed, although the reviewing is not always serious. I am aware of one journal published in France where all the articles are reviewed by the editor and his students, and most, if not all, of the authors are the same people.

No list of reviewers for any chemistry journals that I’m aware of. The editors email the submitted papers to reviewers, typically faculty at other institutions who have some knowledge of the chemistry the paper is about. Note that I always recommend a list of potential reviewers when I submit (the journals ask for this). I also request that a couple of folks NOT see any of my unpublished work. The editors are free to ignore my requests.

I just looked over the ACS web site and I don’t see any information on how the review process works. There aren’t any explicit “This journal is peer reviewed” statements either.

Gitfiddle, do you have access to the Science Citation Index or other academic database service? These are good starting points for research, simply because everything in them has already been vetted as a serious academic journal.

I think it is fair to say that, although there are exceptions, the vast majority of research articles in academic journals have been peer reviewed. The exceptions are mostly a few bottom-feeding journals (that rely only on editorial review), and some instances where authors have been invited by the editor to contribute, rather than having submitted on their own initiative. It is often made explicit, though, when something is an invited paper, and even if it is not, in many cases it will have been peer reviewed anyway, and you can be pretty confident that the author would not have been invited unless they were a respected expert.

Impact factors can be useful for getting a rough sense of the hierarchy of journals within a particular field, but they should be treated with great caution. They are very much affected by the idiosyncratic cultures of different disciplines (in some fields or sub-fields, the tradition encourages cramming in as many citations as possible; in others, not so much) and by the sheer size of the field. General biomedical science journals tend to have very high impact factors, mainly because there are so many biomedical scientists. For instance, the Annual Review of Immunology has an IF of over 50, whereas the highest-IF physics journal, Reviews of Modern Physics, is only at about 28. Physical Review Letters, another top physics journal, has an IF of just over 7. There just are not as many physicists as there are biomedical researchers (and, probably, physicists do not cite as heavily).

For the same reason, more specialized journals tend to have lower impact factors than wide-ranging ones, but may still be publishing excellent and important work. In psychology, although the top journal, Behavioral and Brain Sciences, has an IF of a bit over 17 (partly because it also covers aspects of neuroscience), some other very well respected journals have IFs of only around 2 or 3. Then again, the editors of many of the best journals in humanities disciplines would probably kill for an IF of 2. (Or maybe not: people in the humanities still know which the best journals are in their particular field, regardless of minuscule IFs.)
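For what it's worth, the number itself is simple to compute: the standard two-year impact factor is just citations received in a given year to a journal's articles from the previous two years, divided by the number of citable articles published in those two years. A minimal sketch, with made-up figures for a hypothetical journal:

```python
def impact_factor(citations_this_year, articles_prev_two_years):
    """Standard two-year impact factor: citations received this year
    to items published in the previous two years, divided by the
    number of citable articles published in those two years."""
    return citations_this_year / articles_prev_two_years

# Hypothetical journal: 1,200 citations in 2010 to its 2008-2009
# output, which totaled 400 citable articles.
print(round(impact_factor(1200, 400), 1))  # 3.0
```

This also makes clear why citation-heavy fields and review journals score so high: the numerator scales with how much a discipline cites, while the denominator only counts what the journal itself published.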

Ultimately, the only way to know the good journals from the not so good is to have an intimate knowledge of the specific field (unfortunately, I don’t have any specific knowledge of how things go in Earth Sciences), and of the scuttlebutt as well as the formally published stuff. Even then, low rated journals will sometimes publish excellent papers that give you just the information you need, and the top journals will quite often publish clunkers. You have to use your judgement.

That is absolutely not the case for medical journals.

Here is a list of reviewers in 2009 for the prestigious JAMA.

And here’s the list from the New England Journal of Medicine.

In fact, to the best of my recollection, all medical journals have a similar policy (for explicitly identifying and thanking peer reviewers).

And, with respect to determining quickly if a (medical) journal is peer-reviewed, the ‘information page’ of most medical journals (whether on-line or paper version) often has a sentence or two describing the journal. Usually, you’ll find there something along the lines of, “XXX is a peer-reviewed journal published monthly by the . . .”

Impact factors are highly misleading. An IF of 2 in theoretical computer science is very good. For reference, Information and Computation has a 5-year IF of 1.6, whilst Theoretical Computer Science has a 5-year IF of only 0.995. Both of these are very highly respected journals.

I want to state unequivocally that I don’t believe this to be the case, but how does one counter the criticism that a network of respected journals amounts to nothing more than a mutual admiration society?


Who’s making that claim? It pretty much has to come from those who know nothing about science or the peer review process. Your “peers” who are doing the reviewing are often your rivals, and even leading researchers can get articles rejected, or have to make significant revisions.

I think it is hard to convince these people that it isn’t the case. They seem to be a product of “The Republican War on Science.”

In biomedical sciences we try and prove that A is related to B, but not C. We do experiments that show if you alter A, B is altered as well and C is not altered. We need to come at A, B and C from multiple angles, not just one way. The science should be concise and logical. Someone who knows about A, B and/or C will be consulted to make sure that the original researcher’s conclusions and science are sound. If the reviewers have any questions, they are forwarded to the original researcher and the original researcher responds. Sometimes that means including more experiments or just clarifying data.

Not everything that gets submitted will get published. The more respected the journal, the more ground-breaking or novel the paper has to be. More mundane (yet still very important) research is published in lower tier journals.

Quoth njitt:

I was about to object to this, until I realized the cause of my objection, which just muddies the waters even further. One could argue that the top physics journal is Physical Review, but one could also argue that it’s six journals, not one: PRL is split off from the rest mostly by virtue of the length and timeliness of the papers in it, and the rest is split into Physical Review A, B, C, D, and E by subject matter. They’re all under the same editorship, and a submitted paper can be transferred from one to another internally, so in that sense they’re all the same journal, but on the other hand, a relativist who publishes mostly in Phys Rev D is unlikely to read anything in Phys Rev A, so in that sense, they’re different journals. Which way does it make more sense to count them for purposes of calculating the impact factor?

Nobody in particular. To make the claim would be basically to argue that there was a conspiracy amongst researchers clinging to their religion as it were and rejecting the so-called alternate views of the claimants. I guess this is a pointless question because anyone who would make such a claim is likely to be immune to logic.


To a certain extent that is true, though not out of malice: those working in the field have seen a lot of evidence for the standard position, and need lots of good evidence to make them accept an alternate one. That’s in areas considered settled; those still in dispute have lots of alternate views published. In most cases, those who think they have alternate solutions have weaker evidence than they realize, are unaware of research falsifying their position, and are sometimes deluding themselves. A truly new and well-supported concept is exciting.

Journals I work with don’t publish lists of reviewers, but peer-reviewed conferences do. One good way of seeing whether a paper is peer reviewed is to look at the submission guidelines for the journal where it was published; these will usually spell out the review process. IEEE Transactions print the dates of review and revision, as mentioned above, but IEEE magazines, which also contain peer-reviewed articles, do not.

One should be careful to distinguish short communications, letters, features, and editorials from actual papers.

Well, that just reinforces my point that Impact Factors are not a reliable guide to quality. There is no sensible perspective from which the Annual Review of Immunology is nearly twice as “good” a journal as Reviews of Modern Physics, or seven times better than Physical Review Letters. It is meaningless to compare IFs between disciplines. What you seem to be saying is that, even within the single discipline of physics, IFs can be very misleading. I am not surprised to hear it.

With that said, though, perhaps I should add that, in gitfiddle’s situation, it might indeed be a good strategy to compare the IF of Journal of Asian Earth Sciences to the IF of other general earth sciences journals.* Within a field, IF does provide a flawed, but still useful, guide to relative quality. The thing to avoid is comparing its IF to journals in other fields, or to much more specialized journals even within the earth sciences.

Another point to bear in mind is that journals that publish a lot of review articles will tend to have higher IFs than ones that publish only research reports, quite regardless of overall journal “quality.” (Unfortunately, whether or not a journal has the word “review” in its title is not always a reliable indicator of whether it is a reviews journal. :()

*I am assuming that Journal of Asian Earth Sciences is a general earth sciences journal published and/or edited in Asia, or perhaps has a bias towards publishing contributions from Asian scientists. If it is, instead, a journal that focuses on the geological structures and processes of the Asian continent, your comparison base would be correspondingly different.

It’s great that you thought to ask this question in the first place!

In addition to what’s already been said, you could also contact a local library and have them consult Ulrich’s. While it’s not perfect, it’s pretty reliable for noting if a journal is peer-reviewed or not. But certainly no substitute for knowledge of scholarly communication in a particular discipline.

But to definitively answer the OP’s question, just because something has “Journal” in the title does not mean it’s automatically peer-reviewed.

Journal of Asian Earth Sciences, according to Ulrich’s, is peer-reviewed. It looks like a Pergamon journal which is an Elsevier imprint, which means I have no problem believing that this is a peer-reviewed journal. (Sometimes I think I’ve been at this librarian thing for too long :D)

I think whether or not a journal is peer reviewed is pretty easy to find out. At least in Chemistry, the importance of a journal is pretty easy to gauge too: find the biggest names in the field, and see what journals they publish in. How do you find the biggest names in the field if you’re not familiar with it? Just pick the researchers with the largest groups. At least in Chemistry, the big names will have huge research groups (up to 40 grad students and postdocs). The ones with the most students also have the most money to pay for them, and they will also be the most relevant.

This isn’t to say that smaller groups can’t be significant, it’s just a way to find the biggest researchers and where they publish.

Of course, while Elsevier publications are reputable, they’re also really annoying about how they allow access. If I can get a piece of information from an Elsevier journal or from somewhere else, I’ll always choose the somewhere else, and I would never publish in one of their journals myself.