Science gets its own Sokal hoax?

According to Nature:

Publishers withdraw more than 120 gibberish papers.

The publishers Springer and IEEE are removing more than 120 papers from their subscription services after a French researcher discovered that the works were computer-generated nonsense.

Over the past two years, computer scientist Cyril Labbé of Joseph Fourier University in Grenoble, France, has catalogued computer-generated papers that made it into more than 30 published conference proceedings between 2008 and 2013. Sixteen appeared in publications by Springer, which is headquartered in Heidelberg, Germany, and more than 100 were published by the Institute of Electrical and Electronics Engineers (IEEE), based in New York. Both publishers, which were privately informed by Labbé, say that they are now removing the papers.

Among the works were, for example, a paper published as a proceeding from the 2013 International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering, held in Chengdu, China.

So:

1. Is this as embarrassing to the publishing organizations in question as it appears to be?
2. Is this critique of the scientific publishing process valid?

First: this has got to be the greatest prank MIT students ever pulled, bar none.

I think this is very embarrassing to those two publishing firms, and deservedly so. I would hope other scientific journals are being looked over carefully, and I hope this leads to better verification procedures in the future.

Nice catch, ITR champion.

I know a bloke who is a professional (freelance) proofreader of scientific articles. He says he’s seen a lot of real tripe go by, and it isn’t his job to correct it. Things like real (?) scientists who don’t know calories from kilocalories.

I don’t think he’d pass along gibberish… But if it’s really, really good gibberish – done with the best “travesty generator” software – maybe?
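For what it's worth, a classic "travesty generator" is just a word-level Markov chain over some source text. (The MIT generator behind these papers, SCIgen, uses a context-free grammar instead, so this is the simpler cousin of the technique, not the actual tool.) A minimal sketch, with an illustrative source text:

```python
# Minimal word-level Markov "travesty generator": record which word
# follows each pair of words in a source text, then take a random walk.
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each order-length word tuple to the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def travesty(model, length=15, seed=None):
    """Generate `length` words beyond a randomly chosen starting key."""
    rng = random.Random(seed)
    key = rng.choice(list(model))
    out = list(key)
    for _ in range(length):
        choices = model.get(tuple(out[-len(key):]))
        if not choices:  # dead end: restart from a random key
            choices = model[rng.choice(list(model))]
        out.append(rng.choice(choices))
    return " ".join(out)

source = ("the peer review process relies on expert reviewers and "
          "the review process relies on volunteer labor and "
          "the peer review system is imperfect but better than nothing")
print(travesty(build_model(source), length=15, seed=1))
```

With a big enough source corpus the output is locally plausible and globally meaningless - exactly the kind of thing a skimming reviewer might wave through.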

Anyway, great hoax, and shame on the journals’ editors.

ETA: the latest issue of Scientific American had an article about the use of text-recognizing software that has detected more instances of plagiarized articles than people had expected. (It was always known, of course, that there was some plagiarism; it just wasn’t quite known how much.)
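The core trick behind that kind of detection software is comparing overlapping word n-grams ("shingles") between documents. The Scientific American piece's actual tools aren't named here, so this is only a generic sketch of the idea, with made-up example sentences:

```python
# Crude plagiarism screen: Jaccard similarity of word 5-gram "shingles".
# Real detectors are far more sophisticated (stemming, quote/citation
# filtering, huge reference corpora); this just shows the core idea.
def shingles(text, n=5):
    """Return the set of overlapping n-word tuples in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=5):
    """Jaccard similarity of the two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original  = "we measured the effect of temperature on reaction rate in aqueous solution"
copied    = "we measured the effect of temperature on reaction rate in buffered solution"
unrelated = "the committee approved the budget for the annual conference in june"

print(similarity(original, copied))     # high: only one word changed
print(similarity(original, unrelated))  # zero: no shared 5-grams
```

A one-word substitution barely dents the score, which is why light paraphrasing doesn't defeat these tools.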

Here is how things work with IEEE - which is primarily a professional society, and a non-profit.

To create a conference or workshop you submit a proposal with expected attendance, a budget, and all sorts of stuff, which gets accepted if you plan to at least break even with some margin. I’ve run some big conferences and founded a few workshops, so this is no big deal. It has nothing to do with publication quality.

The workshop organizers are responsible for soliciting and reviewing papers.
There are no checks from IEEE; there are thousands of workshops and conferences put on every year.

If the proceedings are to go into the IEEE Electronic Library (IEL), you use a Java tool they provide to create a packing list of the contents and PDFs of the front matter and the papers. There is a tool to check that the PDF meets their standards - not the content.
You send in some CDs of the packing list and the proceedings, and it shows up on the searchable IEL.

So, if you have clueless or corrupt conference organizers (and note that all this happened in China) it is quite plausible that junk can get in. I don’t generally submit papers from workshops I’ve done to IEL since the work is way too preliminary. But I could have.

This is why experts in the field know the hierarchy of conferences and archival journals. If anyone gets in trouble for believing what was published in some unknown, first-time conference in China, they have only themselves to blame. If this junk got into a magazine or the Transactions, it would be a big deal.
I assure you that tenure committees give almost no credit for this kind of thing.

Workshops are designed to be a place to present really preliminary work, and for you to get feedback on it. If it is good you submit it to someplace better.

So no big deal. I just wonder that this guy had so much time on his hands that he was looking at these write-only conferences.

These IEEE conference proceedings are not journals. I don’t know anything about the people who put on the conference where this was published, but it wouldn’t surprise me if they couldn’t read English well enough to know tripe from reality. The English from some very legitimate authors is not so hot.
My IEEE dues would at least double if someone looked at every paper going into IEL - and the lag time would be horrendous. (You are supposed to submit your material a month after the conference, but I doubt many people make it.)

The Sokal hoax was interesting because he got his paper into a top-rank journal. This has nothing to do with that, from the IEEE side.
Think of it this way - getting a fake story into the NY Times is a scandal. Getting it into your local free advertising rag is no big deal.

Just to reiterate what others are saying - these aren’t peer-reviewed articles. They’re not being checked like actual papers are. These are a different beast entirely. It still shouldn’t have happened, of course, but the inevitable cries of “we can’t believe anything science says rawr rawr rawr froth froth” will not be justified.

There are also different flavors of peer review.
For journal articles, several reviewers get a good bit of time to review the papers, and the most common outcome is revise and resubmit. When as a reviewer I see the second go-round, it includes the reviewers’ comments and the author’s response. I just finished one with a third go-round. I can be mean.

Good conferences have reviewers handling multiple papers in a matter of weeks. The program chair or program committee member looks at the reviews, makes an accept/reject decision, and that is it. The author gets mandatory revisions but no one really checks.
Workshops usually have the program committee voting on the papers, sometimes just on summaries. The accept rate for these is often very high - much higher than conference proceedings and journals.

My professor friends tell me that the perceived quality of a publication for tenure is inversely related to the accept rate - the lower, the better. I think one would love a conference which rejected every paper but his. The conference in question here might have had an accept rate of > 100%.

And then there are posters and invited papers and special tracks …

As interesting as this is, it’s no Piltdown Man, which was a big embarrassment to actual science. Still, it got self-corrected after not too long a time.

No. See above.

The Slate article is crap. I’d reject it. :slight_smile: IEEE probably has 85 conferences a day. So 85 in five years is not exactly a flood.
Second, there are academic publishers who make big profits on journals. IEEE is not one of them. Journal prices are very reasonable, and now with IEL many people get their stuff with institutional subscriptions. I’m on the editorial board of an IEEE magazine and I wish we were making a bundle. The more prestigious and specialized a journal, the smaller the circulation. I’ve seen these numbers.
I’m sure the guy can hack things to show up high on searches. Big deal. Citation counts matter, not position in Google searches. And anyone reading his “papers” which came up would ignore them anyhow. You have to be well known to publish a very general paper - inside specialties most people know each other, at least by reputation, so he would be ignored.

A not very good hack. Good enough to impress people at Slate who have no idea how things work, but no one else.

It just shows that you can’t trust toothpaste, capacitors or conferences coming out of China.

It’s been a long time since I paid attention to any IEEE proceedings-- would you characterize this as more engineering or science? IOW, are we talking about the application of known scientific principles or are we talking about the discovery or validation of new scientific principles?

Judging from the title, I’d guess far more engineering. But the title is kind of broad - it sounds like a conference on stuff.
I have my suspicions on what happened, but they are just gut feelings based on my experience.
I can look up the proceedings on IEL, but I have a feeling my eyeballs would melt.

And the other thing we should keep in mind is that nobody who knows science is going to say that the peer review system is perfect. But it’s better than the alternative, which is either folk science or The Bible.

Sorta like democracy–except in Egypt. :slight_smile:

So maybe not that critique, but what are the critiques of peer review in real journals? I’ve seen a few abuses. The Slate article alludes to the number of publications as an index of productivity, and I think there are researchers out there who focus on this to the detriment of producing quality research.

One example:

My postdoc adviser was all about getting publications and would shop manuscripts to multiple journals, working his way down the line until he found one that would accept a piece of shoddy work other journals had rejected. Not uncommon in itself - not everything is going to be accepted in Science or Nature or the more regarded specialized journals, or even the mediocre ones.

But one accepted paper that comes to mind, done before I joined the lab, could not have performed some of the experiments as described [validation of single qPCR TaqMan products by melt curve analysis - explainable by the fact that most in this lab didn’t know the difference between TaqMan and SYBR Green qPCR and probably thought saying they did melt curve analysis sounded good], and the stats section described “n = a pool of samples”. I’m still not sure how you plug “pool” into a t-test to get the p-value they reported.
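The "n = a pool of samples" complaint is easy to make concrete: a two-sample t-test needs individual replicate values, because both the t statistic and its degrees of freedom depend on n. A minimal sketch of Welch's t-test computed by hand (the numbers are purely illustrative, not from the paper in question):

```python
# Why "n = a pool of samples" can't yield a p-value: the t statistic
# and its degrees of freedom both come from per-replicate variance.
# Pooling samples before measurement gives one value per group (n = 1),
# leaving nothing to estimate the within-group noise from.
import math
import statistics

control = [1.02, 0.95, 1.10, 0.99]   # four independent replicates
treated = [1.45, 1.52, 1.38, 1.60]   # four independent replicates

def welch_t(a, b):
    """Welch's two-sample t statistic and its degrees of freedom."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation; undefined if na or nb < 2
    df = se2**2 / ((va / na)**2 / (na - 1) + (vb / nb)**2 / (nb - 1))
    return t, df

t, df = welch_t(control, treated)
print(f"t = {t:.2f}, df = {df:.1f}")
# With one pooled measurement per group, statistics.variance() raises
# StatisticsError - there is no n to plug in and no p-value to report.
```

A p-value then comes from looking t up against the t distribution with df degrees of freedom; none of that machinery exists for a single pooled number per group.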

Basically, the reviewers (if really reviewed at all) were not very knowledgeable and that’s what he was searching for. When I brought this up, he said that was why they submitted it to that journal—it was the only one they found that would accept it.

Now that paper comes up in PubMed and if you don’t read the materials and methods section, you may not realize what crap it is.

I’ve seen other examples, like multiple submissions at the same time or sending manuscripts to colleague editors instead of the journal itself.

Hmmm, IEEE…didn’t they publish some of the Targ & Puthoff SRI nonsense without a disclaimer, the way Nature did?

Journals have gaps. Peer review isn’t utterly perfect. Things shake out in the long run.

Thanks for the clarification; it makes the story much less of a big deal. Takes some of the fun out of it, too. :slight_smile:

I’ve seen other conferences and conventions where there is an open call for papers, and the papers aren’t refereed or edited, just collected. If this was that sort of affair, it seems less like a “hoax” and more like a mere waste of time.

Still…amusing!

Whoa! Multiple submissions will get you zapped in my world. And since it is such a small one, the chances are that one reviewer or editor will see both submissions. It has happened.
What do you mean colleague editors? I know the authors of nearly every paper I review, sometimes very well. I still give them a hard time. I have a book review column also, and I also know almost every author whose book I review.

One big problem of peer review is that it is often hard to find qualified reviewers, especially for a paper out of place in a journal. No one gets paid for reviewing, and reviewers who write more than “good” are in short supply. I get the next paper for a transactions literally minutes after I turn in a review.
I guess everyone shops papers, which is why lower prestige journals which get the rejects are lower prestige.

I used to get spammed by one of these. I wrote back some nasty emails which were ignored, and this was the kind of conference the MIT guys got their paper accepted to.

Examples of each:

Manuscripts would have to have been submitted to multiple journals at the same time. One paper was rejected by one journal yet was published electronically in another journal four weeks later - pretty amazing turnaround time if it was a new submission, since it usually takes months to submit, get reviews, address them, resubmit, and have it come out. Another paper, which another former lab member pointed out to me after I left (“Hey, search PubMed for so-and-so and see what the most recent hits are.”), showed up as an ePub on PubMed in two different journals in the same week, before one ePub mysteriously disappeared a couple of days later. I didn’t have access to the full text, but the paper titles, abstracts, and authors on PubMed were identical except for the journals. It was so suspicious that I downloaded the citations and have them on a disk somewhere.

My adviser was a section editor of a decent journal. I would be instructed to email our manuscripts to his colleagues (such as previous co-authors/collaborators) in similar positions with other journals instead of submitting to the journal website itself. When I asked why, he said they also send their manuscripts to him directly and because “it’s quicker and easier” to get it accepted. It was about cherry-picking soft reviewers for each other–maybe two friendly reviewers instead of three impartial ones.

As bullshit as this is, I’ve reviewed papers and the shit that gets rejected outright can be atrocious. My first real review was a total softball–text descriptions of figures didn’t actually match the figures, 1ml of a 2x master mix solution diluted in 5ml, misused statistical tests…unbelievably sloppy manuscript.

I’ve only reviewed one paper (I just finished my graduate degree last May) but it was a hell of a paper. It was so terrible I wondered if it was a joke. It read like something that two not particularly bright college students in a freshman biology class thought up as a class project after too many bong hits.