"Sokal Squared" project exposes fraud in academia

I came across this from a segment by reporter John Stossel. Some academics at a college in Portland, Oregon, wanted to see just how crazy an article could be, and whether prestigious journals, the journals academics are required to publish in, would publish it anyway.

It was called Sokal Squared and it worked. They sent out 20 papers. Of those, 4 were published, 7 were accepted, and 3 more were still in the review process when the authors decided to stop the project.

The headlines read: “‘Sokal Squared’: Is Huge Publishing Hoax ‘Hilarious and Delightful’ or an Ugly Example of Dishonesty and Bad Faith?”

HERE is the story covered by The Atlantic.

HERE is another on the response. It points out that if an established source of papers cannot distinguish real research from a hoax, what is it worth?

One thing brought up is that these journals will seemingly publish just about anything, no matter how crazy the paper is, as long as it fits into their mindset.

What do you all think?

It’s true that there is a concerning degree of laxity in the standards of some academic journals.

To some extent it’s because in very specialized fields, real research might essentially be unreviewable.

My father was a mathematician and he was telling me that at high levels, it might be hard to find anyone with sufficient expertise to truly be able to evaluate a complex paper.

But more importantly, academic journals tend to be biased in favor of papers that reach “interesting” conclusions, because naturally they want their journals to be interesting and attract attention.

What I would quibble with here is this part –

“As long as it fits into their mindset” implies some kind of political bias or prejudice. While that might be true for some particular journals, I think the larger problem is, as I said before, that journal editors want their publications to be interesting, and are biased in favor of papers claiming results that attract reader attention.

From your HotAir cite:

The only problem with these two statements is that they are both wrong. It says a great deal about the fields that published the bogus papers. And if any empirical discipline would fall for it, then point to where this got published in a journal of astronomy.

If a field like gender studies is anything more than jargon and hurt feelings, then they would be able to tell the difference between these bogus papers and “real” work. But they can’t. Publishing nonsense dressed up in a certain kind of cant is what they do. There isn’t, therefore, any difference between lesbian interpretative dance as a way of knowing about the movement of the stars, and most of the rest of the stuff they publish. Empirical papers present empirical data. Sure, that data can be faked or wrong, but that’s not the same thing as a paper that says that white men should be chained to the floor so we can better examine the stellar spectra.

They got pranked and they fell for it. Sux to be them. Will it cause them to wise up and apply some kind of intellectual rigor to gender studies or fat studies? [del]Fat[/del] Differently sized chance (you should pardon the expression).

Regards,
Shodan

Well, wait. Aren’t there thousands of colleges out there with mathematicians?

These are supposed to be respected academic journals, not some checkout-aisle tabloid looking for stories about who is having sex with whom, who killed whom, or a two-headed Bigfoot.

Where are the standards?

Sure, but I think the point that was being made is that high-end pure (or theoretical) mathematics is dealing with concepts that are so complex that relatively few people in the field would even have the background to understand what the author of such a paper is talking about.

It depends what’s wrong with the article. If someone faked data, or lied about how they got it, that’s very hard for a journal to discover. I would expect a journal to vet whether the conclusions they drew were appropriate given the data, and whether the data was gathered in a reasonable way.

Standards would be factors like the significance and originality of the research, but it’s safe to say that if you are a referee and you cannot understand an article, you should never recommend it for publication, and if you are the editor, you should weed out obvious bullshit before it even gets to the review stage. That goes for mathematics, astronomy, and the social sciences as well as any other field.

That makes no sense. Why would you publish something you (and the referees) find incomprehensible? At best you send it back for a re-write. Mathematics is a real science where real work is distinguishable from bullshit, and real mathematicians understand mathematical papers. Same as other disciplines.

The higher up you go in any field of study, the more specialized your knowledge is and the fewer peers you have who have your level of experience and understanding. The more advanced you get in a subject, the fewer people there are who share the background to even understand what you’re talking about.

This is a natural bias of any publication. If you run a publication, you want people to read it. Most respected publications get many more submissions than they will ever be able to publish, so they have to choose. It’s hard not to choose papers you think will bring you more readers. It’s not their intent to publish dubious material, but assuming two papers are equally valid, why not choose the one that’s more likely to be read?

This actually affects the entire academic/scientific research community. Knowing that having a more “interesting” result is more likely to get your work published in a prestige journal makes researchers consciously or unconsciously apply that bias to what they choose to study. Many might even abandon studies that “merely” affirm a prior study.

Now this is a big problem for science in general, because one of the keys to the scientific method is replicating results. But who wants to be the guy who just replicates someone else’s results? That doesn’t get you the admiration of your peers, and it might not even get you published.
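To make that concrete, here’s a minimal simulation sketch (plain Python; every parameter is a made-up, purely illustrative number) of what happens when journals publish only “significant” results and nobody replicates: a surprising share of the published record ends up false even though every individual study was run honestly.

[code]
import random

# Toy model of publication bias, with made-up parameters:
# 10% of studied hypotheses are actually true, studies have 80% power,
# and the false-positive rate is 5%. Journals publish only
# "interesting" (statistically significant) results.
random.seed(42)

TRUE_RATE = 0.10   # fraction of studied hypotheses that are really true
POWER     = 0.80   # chance a true effect yields a significant result
ALPHA     = 0.05   # chance a null effect yields a significant result anyway

published_true = published_false = 0
for _ in range(100_000):
    hypothesis_is_true = random.random() < TRUE_RATE
    significant = random.random() < (POWER if hypothesis_is_true else ALPHA)
    if significant:  # only "interesting" results get published
        if hypothesis_is_true:
            published_true += 1
        else:
            published_false += 1

total = published_true + published_false
print(f"Share of published findings that are false: {published_false / total:.0%}")
# With these toy numbers, roughly a third of the published record is wrong.
[/code]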

From the OP’s second link:

In the “publish or parish” world they live in and depend on, they can no longer be trusted by the academic institutions they represent.

Did you mean “publish or perish” or is “publish or parish” also a thing?

And that’s an important point – the academic/scholarly system is not designed with an eye to rooting out outright fraud. The institutions generally assume that everyone is acting in good faith.

I hate spell chick.

Twitter thread from someone who was actually a reviewer on one of the papers in question. As an academic who has served as a peer reviewer for several journals, I would agree with what he’s saying here: most of us, as reviewers, want to be generous and constructive even if the work doesn’t appear to be good. We’re also assuming – and should assume – good faith on the part of the person who submitted the paper.

Realistically, peer reviewers (who are doing this in their spare time for no extrinsic rewards other than maybe a line on their CV) are not going to follow up on every citation in a submitted manuscript to ensure that the cited work actually exists. In the case of work that purports to describe an observational study (like the dog park article), it’s not actually possible to verify everything – you have to take it on faith that the author actually did do such a study and observe what they say they observed. (And, had the data been genuine, it’s at least conceivable that it might tell us something interesting about human behavior or dog behavior or both; research on quirky and eccentric topics sometimes ends up having real value.)

Should gatekeeping for what gets published in academic journals be more rigorous? Yeah, it clearly should have been in this case, but there would be tradeoffs – fewer people willing and able to serve as reviewers, fewer opportunities for promising junior scholars to publish, a higher chance of rejecting research that might turn out to be genuinely groundbreaking and important. And, frankly, the penalty for publishing something that really shouldn’t have been published, in most humanities and social-science fields, is pretty low; the worst that is likely to happen is that your journal has to deal with a bit of embarrassment. Nobody is going to die from a bad article about gender theory, and it’s unlikely that anyone will even be seriously misled by one. (There IS no right answer to a question like “is it wrong to fantasize about a specific person while masturbating if that person hasn’t given you their consent?”; even though most of us would probably answer “no,” it’s actually kind of an interesting ethical question to ponder, and an article that seriously argues the case for “yes” might be worth publishing simply as a provocative conversation-starter.)

TL;DR, but I don’t really think this is as damning either for academia in general or for specific fields as people are making it out to be.

Yeah. This is one of the reasons why the “prestigiousness” of a piece of published research typically depends more on its citation-index scores than on the mere fact that it got published.
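For anyone unfamiliar, one common example of such a citation metric is the h-index: the largest h such that h of your papers each have at least h citations. A minimal sketch, with made-up citation counts:

[code]
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Made-up citation counts for two hypothetical researchers:
print(h_index([50, 30, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations,
                                      # but not five papers with >= 5
print(h_index([2, 2, 1, 0]))          # 2
[/code]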

These PhD researchers worked for nearly a year attempting to make their fake papers look as legitimate and plausible as they could, including cites of masses of legitimate research and descriptions of (nonexistent) data sets, research subjects and statistical analyses. And now they’re trying to spin their productions as ludicrously obvious fakes with brief soundbite descriptions that they’ve tailored to seem as transparently ridiculous as possible.

The thing is, though, that it’s not that difficult to make almost any specialized esoteric research topic sound transparently ridiculous with a brief soundbite description. Remember that 2014 subthread where a poster was poking fun at government research grants awarded to a bunch of projects that a conservative op-ed provocatively described in the most absurd-sounding way possible, such as “Studying the effect of Swedish foot massage on rabbits”? And then got all mad when other posters pointed out that this research was actually a legitimate animal study on muscle recovery from injuries after exercise, and kept on railing at the other posters about “rubbing bunny feet”?

So yeah, I’m not totally convinced that it’s such a huge indictment of academic publishing in certain fields to say that they were fooled by painstakingly faked research by some experienced PhD researchers who worked their asses off for months to make their fakes look plausible.

The OP’s thread title would more accurately be something like “Sokal Squared” hoax project deliberately perpetrates fraud in academia, ostensibly to expose weakness of academia’s defenses against fraud. The only actual fraud they’re exposing is their own.

Also, if, say, a serious researcher had published a paper claiming to have studied 10,000 dog genitals, and it was later found out that he had totally fabricated his data, his career as a researcher would be basically destroyed. As others have said, as a reviewer you have to take the researcher at his word; if they fabricate their results or use poor laboratory techniques, you won’t know. But it should be pointed out that in general the publication of a novel result in a journal isn’t the final word as far as science goes; it is just the beginning. If the result is at all useful, others will follow up on it or build off of it. If they find out that they can’t get the same results, then they will publish that. If this happens repeatedly, then the results of the first paper will be rejected by the scientific community.

I am quite certain that the conclusions reached by a large proportion of published papers are in fact incorrect. But that’s just the way science works. The main problem is members of the media who take a finding from some paper that hasn’t been reproduced yet and run it as if it were now an established fact. When the result later turns out not to be reproducible, it gives the impression that all science is wrong.
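To put rough numbers on that (purely illustrative, in the spirit of Ioannidis’s well-known “Why Most Published Research Findings Are False” argument), here’s some back-of-the-envelope Bayes arithmetic showing both why many positive findings are wrong and how even a single replication filters most of them out:

[code]
# Illustrative (made-up) numbers: 10% of tested hypotheses are true,
# studies have 80% power, and the false-positive rate is 5%.
prior, power, alpha = 0.10, 0.80, 0.05

# Probability that a positive (significant) finding is actually true:
ppv = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"Before replication: {ppv:.0%} of positive findings are true")  # ~64%

# Require one independent successful replication: true effects replicate
# with probability `power`, false positives only with probability `alpha`.
ppv_rep = (power**2 * prior) / (power**2 * prior + alpha**2 * (1 - prior))
print(f"After one replication: {ppv_rep:.0%}")  # ~97%
[/code]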

I also find it extremely misleading that the video included in the OP showed such journals as JAMA, PNAS, and Nature in its introduction, yet none of these were among the ones to which the papers were submitted, much less published.

If there’s an assumption of good faith then what are the reviewers doing? Any submission could contain suspect information and invalid results. It won’t be found by assuming good faith.

The problem here is that John Stossel is his own Sokal Squared. He’s a crank that people with a certain mindset buy into as being worth listening to, solely on the basis that he’s saying what they want to hear.

Assessing methodology, logic, clarity, etc.

Depends on the field and the submission. If it’s an unambiguously falsifiable assertion about a high-profile open result in mathematics, for example, such as Sir Michael Atiyah’s recent claim to have proved the Riemann hypothesis, there are going to be multiple internationally renowned experts in the field crawling over every comma with a magnifying glass. In other topics and fields, the focus of peer-review is more about whether the submission is original and non-trivial enough to be at least a marginally worthwhile contribution to the discipline.

Correct. When I peer-review submissions that purport to edit/translate/analyze some technical texts in pre-modern manuscripts, for example, I don’t demand to see incontrovertible evidence that the manuscripts actually exist with the location and provenance and content that the author claims for them. If some erudite prankster-fraudster decided to go to the trouble of forging such a manuscript and producing an entire paper plausibly purporting to study its (completely imaginary) content, they could totally fool me into greenlighting its publication.

I don’t feel that that vulnerability makes me negligent or incompetent as a reviewer of research in my field, or that peer-review standards need to be changed to ensure that such a thing could never possibly happen. On the other hand, if such highly-developed hoax projects or other deliberate frauds should become common enough to seriously impede the reliability and usefulness of our research corpus, then yes, we would have to become much more draconian about the gatekeeping.

It’s not the reviewers’ job to root out suspect information and invalid results. All they are there to do is determine whether what is presented to them follows the general form of serious research or scholarship, to make suggestions for improvement or general critiques based only on what is presented to them, and to question glaring omissions or flaws.

Reviewers aren’t fact-checkers, they aren’t detectives, they aren’t judges or juries, they aren’t there to validate the claims in the paper. That’s the job of the entire scientific community, to read the paper, and then, over time, test the results. The scientific method trusts that over time repeated investigation will yield more and more accurate results.

The authors’ reputation and credibility are their own business to protect. If they are intent on perpetrating fraud, they can do so, and once it’s in print, those who read it will collectively give it credit or not, based not only on the paper itself but also on the broader accumulation of knowledge in the field.