Is Peer-Review Valid?

It’s a tool; a useful, imperfect tool. Of course, we should keep our ignorance-fighting glasses near to hand, but did your friend have a better tool to replace it with?

I have to tread carefully here, as the Mrs. is a behaviorist, but the presence of math and statistics does not confer scientific validity per se.

After all, what impresses many people about astrology is all the precise numbers and formulae. I think it was Sylvester Stallone who was gushing about his mother’s fascination with astrology and said, “It’s all mathematical and stuff.”

Then there’s economics, which is a closed universe made up of 50% numbers and 50% self-referential interpretations of them…

Just sayin’.

Having just finished contributing to a peer review, I can honestly say that we went over that paper with a fine-toothed comb, and I suspect the authors are going to feel like they got hit with a sledgehammer.

And it was a pretty good paper, to be honest.

Peer Review is a good thing, and in my experience has been pretty thorough (I’ve had papers come back with suggestions, and even rejected).

This is not to say it is invariably the case. There have been cases of outrageous papers making it through the gauntlet and emerging without any problems, most notably the Sokal affair. There was another one in Eastern Europe this past year. (A Serbian Sokal? Authors spoof pub with Ron Jeremy and Michael Jackson references – Retraction Watch)

Even worse, some papers that were not deliberate fakes have passed through with ludicrous assertions that were later caught by other scientists. I was on a panel discussing such cases two months ago.
For instance, the photographs in one paper appear to have been ludicrously badly photoshopped:

http://www.chemistry-blog.com/2013/08/13/alleged-data-manipulation-in-nano-letters-and-acs-nano-from-the-pease-group/

http://retractionwatch.wordpress.com/2013/08/16/nano-letters-retracts-chopstick-nanorod-paper-questioned-this-week-on-chemistry-blogs/

Chemjobber: Does this picture look weird?

Or consider this paper, with its weird statement left in the appendix:

http://www.chemistry-blog.com/2013/08/07/when-authors-forget-to-fake-an-elemental-analysis/

The only flaw in the process I see is a lack of certification. Peer reviewers should be required to meet some standard in order to review papers. The fact that a reviewer has a PhD or similar degree does not seem sufficient. Journals should also be audited to verify that their review process meets some standard, and be allowed to put a ‘seal of approval’ on their cover. It is a bit surprising that this has not happened already; or if it has, no one seems to be aware of it.

Each discipline should be able to set its own standards, but let the overall process be managed by the ISO or similar organization.

The publish-or-perish model also needs to change. I would rather have a researcher publish two or three books representing the culmination of several years’ work than twenty papers a year that may or may not tie together. The latter process might be valid for some lines of research, but I am not sure it is true for most. I know that some scholars do publish books summing up their previous research, but that appears to be the exception rather than the rule.

But that would probably require a serious change in how science is funded, away from the grant model. I do not know what a better model would be, but the current system seems to be breaking down. Perhaps a certification process could reinvigorate it and increase its utility. If I were on a grant-making board, I would certainly give ‘certified’ researchers greater consideration, just as when hiring any other professional.

No, SOME Open Access Journals are shit. This one is shit, and published by a shit publisher.

The PLoS journals, BMC journals, Scientific Reports (published by Nature Publishing Group), eLife, PeerJ, and others have high standards and in some cases very high Impact Factors.

My lab is moving toward the goal of 100% Open Access publishing. But there is Open Access and then there is Open Access. You need to be careful and publish only with established, reputable organizations.

Off topic, but as a working scientist this is an incredibly stupid statement, showing little to no understanding of how scientific research is typically performed.

What is the significance of the result of an astrological prediction? They have been tested - and failed. Let’s just say her favorite Christmas present one year was a Matlab manual.

As for economics, have you ever taken econometrics? It is nastier than any math I took as an engineer at MIT. The problem is not the models or the math, it is the inability to conduct a controlled experiment.

How do you propose to audit reviewers? People don’t pop out of the blue to do reviews; they get selected by the editor. In my experience (and I’ve read hundreds of reviews as an editor and program chair), the worst reviews come from the most senior people, who are busy and say things like “good job.” Grad students really care.
In our conference we track reviewers’ average scores in previous years, so that a review of 4 from a reviewer who averages 3 doesn’t kill a paper, and a review of 8 from a reviewer who averages 9 doesn’t help it.
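The score adjustment described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not any conference's actual system; the reviewer names and score history are made up for the example.

```python
# Hypothetical sketch of reviewer-score normalization: compare each raw
# score against the reviewer's historical average, so a harsh reviewer's
# 4 and a generous reviewer's 8 can be weighed on the same footing.
# All names and numbers here are illustrative, not from any real venue.
from statistics import mean

# reviewer -> scores given in previous years (illustrative data)
history = {
    "reviewer_a": [3, 3, 2, 4],   # a harsh reviewer, averages 3.0
    "reviewer_b": [9, 9, 8, 10],  # a generous reviewer, averages 9.0
}

def adjusted_score(reviewer: str, raw: float) -> float:
    """Return the raw score relative to the reviewer's historical mean."""
    return raw - mean(history[reviewer])

# A 4 from the harsh reviewer is above their own average...
print(adjusted_score("reviewer_a", 4))   # 1.0
# ...while an 8 from the generous reviewer is below theirs.
print(adjusted_score("reviewer_b", 8))   # -1.0
```

Under this scheme, the sign of the adjusted score tells the program committee whether a review is enthusiastic or lukewarm *by that reviewer's own standards*, which is the point being made above.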

Are you aware that books are not peer reviewed? Many publishers have an expert review the book proposal and chapters, but only one person, and that reviewer clearly can’t do the kind of job that gets done on a paper. (I’ve done a few - you even get paid. :slight_smile: ) Plus the publisher is on a deadline, hopes to make money, and there is no competition.
BTW, after my adviser died, the one I got to replace him was writing a book instead of papers, while another professor in the same field was writing papers. One reason I switched universities was that I knew my professor was not going to get tenure - and I was right. So I don’t think you understand the process very well.

Have you ever done grant review? Didn’t think so. A big part of a proposal is qualifications. Not to mention that almost all the grants in the real NSF reviewing I did (before I got stuck with small business crap) were from people that I and the rest of the panel knew personally and by reputation. If there were a certification process, it would probably be run by some of the same people reviewing the grants.

Most areas are small when you get into the specialties. In fact, there is a move in some areas to remove all names from papers so that there is no halo effect.

Nor the target audience for a book, nor the circulation figures of the best specialized journals.

Wow - thanks for that - really made me smile. The person who photoshopped that has gigantic testicles (or ovaries). I just can’t imagine the confidence you must feel to edit something like that, blithely unaware of how obvious it appears to anyone who has ever tried to - well, Photoshop - anything.

Imagine what their 1040 (or country equivalent) must look like.

Fair enough, I haven’t evaluated all Open Access journals.

My comment was based on having received a dozen or so requests to serve on the editorial boards of open access journals in topics I have no knowledge of, and on the fact that I receive, on a seemingly weekly basis, invitations to submit a manuscript to a journal on a topic far outside anything in which I would be considered an expert.

I did agree to peer review for an open access journal once, and the manuscript was below the level I would expect from a student’s work on a course paper.

The problem with economics isn’t really that it’s non-mathematical or lacks rigor. The problem is that, unlike (say) organic chemistry or astrophysics, economics is (often) a value-laden discipline, which involves normative decisions about what sort of society we want to have. Science is supposed to be value-neutral, and when you start blurring the line between factual statements and normative statements, you start moving away from the realm of science.

And yes, it’s of course true that economics (as well as history, anthropology, etc.) lacks the ability to do controlled experiments. Societies aren’t independent replicates; they all influence each other, and it’s very difficult to control for all the differences between them.

Can somebody give me the whistle-stop version of how the peer review process works for publication?

So… one’s all mathy but completely invalid, and the other is all mathy but… completely subjective.

That mathy stuff. You can do anything with it. :smiley:

Well, no, the problem is that it’s reliant on intensive mathematical modeling of entirely selective and subjective concepts. Deeply analyzed circular reasoning with less foundation than some forms of astrology.

I send a paper to a journal. An editor for that journal selects three experts in the field who would be expected to be fully conversant with the topic of my research and asks them to review my paper. They each read the paper and write up their suggestions and recommendations - they can say they think the paper should be published, or returned for minor or major corrections, or simply rejected. The editor then makes the final decision and forwards the reviewers’ comments back to me, the author. The review is anonymous. The author does not know who the reviewers are.

This is how it’s supposed to work.

Is there any possible way for an outside agency to check to see if this is actually happening?

I read a quote by Feyerabend once: “Prayer may not be very efficient compared to celestial mechanics, but it surely holds its own vis-à-vis some parts of economics.”

You’ve probably been spammed by Hindawi, Bentham, OMICS or one of the other numerous scam outfits operating (mostly) out of India. These aren’t Open Access Journals. They are scam vanity presses masquerading as journals.

A good list can be found here: http://scholarlyoa.com/2012/12/06/bealls-list-of-predatory-publishers-2013/

Open Access has NOTHING to do with Peer Review. All it means is that readers have access to the articles without having to pay a fee. No more, no less.