Academics, what comes after peer review?

As I understand it, peer review simply means that the paper isn’t obviously hogwash and is of actual interest. Reviewers don’t try to replicate results etc. This may also be different in sciences like mathematics. So what are the next steps in the academic process?

Fight my ignorance here!

Reviewers don’t try, but part of their job is to make sure the paper has all the information in it so that the results can be duplicated … and that is the next step in the process … some folks try to duplicate the results … others try to find ways to refute the results … and then these folks publish …

Peer review, in principle, means more than just the review of papers before publication.

In principle, it means that your peers will also read the paper, and will comment, may seek to refute things they disagree with - usually by publishing something. If a claim is big, they will seek to replicate, etc.

The reality - for 99% of published work? The answer is nothing.
You put the paper on your CV and keep working on your next paper. Your publication will be read by a handful of workers in the same niche area, and otherwise largely forgotten.

A major claim in an important area will of course garner much more attention. If it opens up new areas, expect a lot of people to spend time understanding the paper and its implications, and to start working from the new science presented. This will both explicitly and implicitly involve work that will validate the claims. Here you will see science in action.

Right now, the problems with the 99%ers’ academic research process are finally getting a tiny bit of notice. A few decades late, but better late than never. Academic appointments and promotions are nearly 100% publication based. Tenure appointments especially so. The pressure to publish is utterly dominant, and of course has evolved highly targeted behaviour. Submitting the smallest incremental change from the last paper that will get accepted, essentially multiplying any research work. Worst is forming small cadres of researchers that end up reviewing one another’s papers and grant applications.

I used to joke about setting up a completely bogus research area this way. Get a few mates in on it, set up a journal, publish crap, and see how far one could escalate the thing. Prize would be to get to the point where one member of the group got to review a grant application for another. At this point it could become self sustaining. It wasn’t too far off the truth in some areas.

The biggest problem from a science point of view is that you cannot get a grant to replicate someone else’s work, and you can’t get a publication from it either. You might get a publication from contradicting someone else’s work, but you take the chance that you can’t actually refute it. The risk-reward is poor. So nobody actually checks. Not unless you have done something big.

You can always get a cat to write some of the articles. :smiley:

I agree with most everything Francis has to say. I’ll just add a few additional points.

Peer review is a bit different for conferences compared to journals. For conference papers, you submit the paper and it is reviewed by usually 2-3 reviewers. The reviews then go to the conference committee that decides which papers should be accepted and in what capacity. Papers usually can be either full, short, poster or demo. Although not all conferences will have these categories, they almost always have full papers and posters. If your paper is accepted, you then try to address any comments made by the reviewers and re-submit. Assuming you’ve addressed the comments in some fashion and your paper is accepted, you present the paper at the conference and it gets published in the conference proceedings.

For journals, it is an iterative process that can take several cycles of review and revision. You submit the paper to the journal and it is assigned to 2-3 reviewers. They review the paper and one of the editors will decide whether to go further with the paper based on the reviews. If they decide to go forward, you then revise the paper and re-submit it. Now, here a lot depends on the reviewers. Many journals have an option that allows the reviewer to submit their review and then indicate whether they want to review any re-submissions. Most reviewers will say yes, but not always. In any case, the resubmitted paper is reviewed again and you may have an entirely new set of comments to address, especially if there is a new reviewer. This process continues until the editor feels the paper simply will never be publishable, or it is accepted. If accepted, it is published. Reviews for journals tend to be stricter than for conferences.

Missed the edit window:

Full and short papers are presented by one of the authors in their own session to an audience. Posters are part of a communal presentation, either in the halls or in a single room. Conference participants wander about looking at the posters (literal posters) and discuss the ones that pique their interest with the presenter.

Wow, that’s even better than the time that my advisor slipped in a reference to Scotty. I’m still not sure if that got through because the reviewer got it, or because he didn’t.

BeepKillBeep, don’t forget that conferences also include talks, which may or may not be a big deal. I used to frequent a conference that consisted basically entirely of talks, where each presenter (regardless of status) got an equal-sized timeslot (somewhere in the vicinity of ten minutes, depending on the exact number of presenters). I’m sure there was some sort of vetting for those talks, but it was sufficiently transparent that I couldn’t tell you what it consisted of (certainly no reviewer ever asked me about anything before I presented).

I’ll occasionally get one of my group to replicate something I’m reviewing. Not often, but the experiments in my field (chemistry) can be quite fast, so if it’s something I’m very interested in I’ll do it. It’s much less feasible in other fields with more complex experiments. I’ve had people come to my group from much larger, manpower-heavy labs who’ve told me they spent a lot of time doing this, so I guess some PIs go in for this sort of thing more than others.

How you review depends on the level of journal - for a low-level one it’s usually just: is this technically correct, with sufficient detail to be reproduced? Each chemistry publishing house has an open access journal that is a giant bin for this sort of stuff - it’s real science [I’m not talking about the open access pseudo-science swamp that lies below this], just not that interesting, little to no novelty etc. For good journals, the review broadens beyond technical merit to weigh up whether it is timely, exciting, impactful etc.

Interestingly, at the very highest level (Science, Nature etc) you get a bit of a reversal, in that the editors reject the vast majority of manuscripts (>90% I think) without sending them out to review. So the fact you’re getting it to review in the first place means the editors consider it important, so you would proceed on that basis. i.e. there’s less onus on you to build an argument why this is very significant work (or vice versa); it’s more about saying how it fits into the field, whether it can be improved etc.

From the author’s perspective you can’t beat a good peer review process! Can be a real journey that substantially strengthens a paper. Unfortunately this improvement process doesn’t happen as often as it should IME, the best journals are so competitive that if something is a little bit flawed most reviewers are inclined to just bin it.

In the experimental sciences, when we get our reviews of the submitted paper back, the main editor may request additional experiments, ask for further analysis, critique our explanations and models etc. The paper may be rejected outright, or the editor will request those revisions.

I’m in the middle of this right now, completing a few experiments, so I can then resubmit it for publication.

In computer science, doing a talk at a conference is usually tied to having the paper accepted for publication in the proceedings. The exceptions would be:

  • Doctoral student workshops where students present their research to get feedback. Workshop papers are also reviewed, but of course it is fairly lax since it is unpublished work.
  • Invited talks, but those usually go to somebody of some prestige, or to industry.

I agree. I just had a paper accepted, and one of the reviewers felt I didn’t talk about how to use my research in a practical setting. I had intentionally not discussed the research from that perspective because I didn’t think it would be very interesting to most readers. But I did feel, after writing it and having it accepted, that it was a better paper with the extra detail.

In my experience, a lot of “replication” happens as controls for follow-up experiments.

To give an example I’m familiar with, somebody discovers that the life span of nematodes can be extended by calorie restriction, and they publish their discovery.

That original phenomenon is replicated by another lab that wishes to understand the genetic mechanisms of lifespan extension, and tests whether manipulating genes A, B, or C alters the effect of calorie restriction. In each one of their experiments, they’ll typically measure the life span of “wild type” and mutant nematodes, given a normal or calorie restricted diet. Thus, they replicate the fundamental “calorie restriction extends life span” result several times. They find that gene B is required for nematode life span extension, and publish those findings.

Another group studies the role of gene B in the nervous system of the nematode, and wishes to determine whether specific neurons are responsible for life span extension by calorie restriction. They do a series of experiments where they measure the life span of wild type and mutant B animals with normal or calorie restricted diets, while stimulating specific neurons. Along the way they replicate the original “calorie restriction extends life span” finding, as well as the “and it requires gene B” finding, several times over.

Over the next few decades, the general process of replication and extension continues over many hundreds of interlocking papers. Along the way researchers might fail to replicate a specific result, but do further experiments to determine that some previously unaccounted for variable is the source of the replication failure.

There are plenty of dead ends, where a line of research stops because nobody else is interested in continuing from it. Some of those dead ends, unfortunately, may be false positives. Somebody else might have attempted to replicate the experiment and failed, but never published because negative results aren’t sexy. Negative results are also legitimately harder to establish, because the possibility that the replication failed due to experimenter error has to be excluded.

But yeah, the majority of publications are replicated once or twice, if at all. I’m working on a publication that I hope will be absolutely fascinating to three or four other research groups, and moderately interesting to a couple dozen more. Hopefully a few people will be interested enough to replicate and extend my work.

A joke! So, you didn’t … and that wasn’t … ? Oops, my mistake.

Sorry, mate.

Good stuff, guys.

Given the crap that closed-access journal publishers like Elsevier get up to, plus the importance of open-access journals and preprint archives in some fields, I’d be quick to point out that being open-access is not really correlated with quality one way or the other.

If you really want to get a feel for how downright Precambrian the traditional media world can be, look into the economics of journal publishing and try to figure out the actual value added by publishers in this day and age. Try hard.

This, for truth. When I find an article I need, as soon as I see it’s Elsevier, I know I’ll have a bitch of a time finding a copy I can access without having to pay for it. Many open access journals are high quality and we can’t dismiss them with a broad brush.

What happens to your research after publication is largely a question of how interesting it is. Some research articles are indeed of little value to anyone and will end up entirely forgotten. Some will be of direct relevance to many other labs and/or will have practical applications, and the results will be reproduced many times. Some papers that established novel and important findings will become classics and will go on being read and cited for decades.

People often wonder if scientists can publish wrong results and get away with it because no one tries to reproduce them (or no one will say anything if they can’t reproduce them). The answer is: “only if no one cares about these results”.

Reminds me a bit of sf publishing for the last decade or so - ten or a dozen writers who take turns playing editor and publishing anthologies of each others’ short stories.

Yeah, Elsevier was worse even than the “obscure Russian journal you’ve never heard of”, and if I followed a trail to an Elsevier paper, I always tried to see if I could get ahold of the same information in any other journal.

The conference I’m thinking of (the Pacific Coast Gravity Meeting) doesn’t publish proceedings. Though to be fair, I understand that it’s a rather atypical sort of conference.

Of course, like any conference, the real value comes not from any of the official sessions, but from researchers talking things over at lunch or at a party.

Yes, discovering I’ve followed a trail to Elsevier journal usually results in me swearing loudly in my office.