How many papers does the average academic write?

Is it possible to somehow add up the total number of papers written, divide by the number of academics and arrive at some figure which represents how many papers an average academic writes? Is it markedly different in different fields? Is there much variance in output between academics? Roughly what proportion are journal papers and what proportion are conference papers? Does the raw number of papers matter a lot these days or are you judged more on quality?

Considering that their graduate student monkeys are doing all the work and writing, none.

Errrm, I mean, it varies quite a lot.

I can tell you a bit about computer science. There is a lot of variance. The top researchers will publish hundreds of papers in their careers; professors at small liberal arts colleges will publish a handful. Conferences are more important in computer science than they are in most other fields, and are typically the first place someone will send new work to. Journal papers are frequently expanded and more detailed versions of conference papers.

Most papers have more than one author, and having one’s name on a paper does not necessarily mean that person wrote anything in the paper. They may have just contributed a key idea, or significant software engineering, or solved one particular piece of a problem. Or they may have contributed practically nothing, but put their name on it just because they’re the real author’s advisor.

I think this will vary substantially between fields. Conference papers are not especially valued in my area (chemistry); all that really matters is peer-reviewed journal papers. Don’t get me wrong, presenting at conferences is an important “indicator of esteem” and a major route for disseminating your research, but many conference papers aren’t peer-reviewed and don’t really mean anything. I understand that this may not be the case in other fields.

Whether to publish heavily in lesser journals or sparingly in quality journals is an excellent question whose answer will vary from person to person. Well, the answer is obviously to publish heavily in quality journals, but that’s easier said than done. My own opinion is that quality is what it’s all about when you’re considering just your own community; your peers are your harshest critics. One exceptional paper can easily be worth fifty mediocre efforts.
If you’re considering a wider area of research than just your immediate field, e.g. a multi-disciplinary grant application, or a fellowship application where you’re competing against many different types of scientists, then quantity becomes extremely important as people won’t be able to accurately judge the quality of your work.

In the arts, books are probably as important as papers. In the sciences, one good paper per research student in your group per year is very good - so with a group of say twenty students you might be aiming at 20 papers/year but be happy with ten good ones.

However, collaborations can bump that number up enormously. I would often make a material (1 paper), but have it tested and applied by several people in the physics department (perhaps another 5 papers/year). So people with large collaborations can easily have their name on 50 or more papers a year even though they have only ten or so students directly working for them. Some researchers are also in charge of teams of lesser academics, and so effectively have 50 or so people under them; their output can run into the hundreds per year if they insist on having their name on each paper.

As others have noted, the average number of papers is going to depend a lot on the field. However, analysing such questions was fashionable back in the Sixties when large computerised databases of the scientific literature like the SCI first became available. In particular, the historian Derek de Solla Price got a lot of attention by writing several books - the posthumously expanded Little Science, Big Science … and Beyond is the most relevant - and lots of papers discussing the patterns that could be seen in the data. He concentrated on scientific papers, mainly using the SCI, so his conclusions were restricted to just science (and the “hard” sciences at that) and only considered papers published in peer-reviewed journals or very similar. So no conference papers published in the conference proceedings.

One basic rough result that Alfred Lotka had already published back in 1926 is Lotka’s Law: the number of people producing n papers is proportional to 1/n[sup]2[/sup]. Thus most scientists produce only a few papers, while a tiny handful produce lots.
Price suggested modifying this in a couple of ways. As it stands, the distribution has to be cut off in some way. He also argued that there’s a better empirical fit if you distinguish between two populations of scientists: those who write fewer than 15 papers and those who write more.
With these modifications, Price’s result was that on average a scientist will write about three and a half papers.
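
For the curious, here’s a quick back-of-the-envelope check in Python: take a 1/n[sup]2[/sup] distribution, cut it off, and compute the mean. The 200-paper cutoff is my own illustrative choice, not a figure from Price.

[code]
# Mean papers per scientist under Lotka's Law: the number of authors writing
# n papers is proportional to 1/n^2, truncated at an arbitrary maximum.
# The 200-paper cutoff is an illustrative assumption, not Price's own figure.
CUTOFF = 200

ns = range(1, CUTOFF + 1)
weights = [1 / n**2 for n in ns]               # relative number of authors writing n papers
authors = sum(weights)
papers = sum(n * w for n, w in zip(ns, weights))

print(f"mean papers per author: {papers / authors:.2f}")   # comes out close to 3.5
[/code]

Because the tail only adds up logarithmically, moving the cutoff up or down doesn’t shift that mean very much, which is presumably part of why the three-and-a-half-papers figure is fairly robust.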

Other, more detailed studies by him of who wrote what papers in successive years led him to propose a model in which the literature is written by two populations. One of these is a stable group of people who co-author about 2 papers a year and keep doing so for at least a decade. The other is a transient group who enter, write one or more papers and then disappear. Naturally, he explained the transients as grad students and postdocs who don’t get tenure; they presumably make up most of those who publish only a single paper and then vanish without trace. Writing a dozen or so papers gave you a chance at tenure, and once past that hurdle you joined the stable population.
He didn’t (as far as I know) ever give a figure for the stable population’s average, but I’d guess that it roughly lay between 30 and 40 papers over their lifetime - about what co-authoring a couple of papers a year over a 15-20 year career would give you.
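
If you want to see how the two populations combine, here’s a toy simulation; all of the specific numbers in it (group sizes, career lengths, papers per year) are my own illustrative assumptions rather than anything Price published.

[code]
import random

random.seed(0)

# Stable core: co-authoring ~2 papers a year over a 10-20 year career (assumed numbers).
stable = [2 * random.randint(10, 20) for _ in range(100)]

# Transients: one to a few papers, then they leave the field (assumed numbers).
transient = [random.randint(1, 3) for _ in range(900)]

everyone = stable + transient
print(f"stable-core lifetime average: {sum(stable) / len(stable):.1f} papers")
print(f"overall average:              {sum(everyone) / len(everyone):.1f} papers")
[/code]

The stable core comes out around the 30-40 lifetime papers I guessed above, while the overall average stays in the low single digits because the transients dominate the head count.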

Of course, this was based on data back in the Sixties, though I doubt the overall pattern has greatly changed and a quick Google shows plenty of studies by people still using Lotka’s Law to model the basic distribution in different fields.

On quantity vs. quality, I agree with Myler’s comments based on my experience in physics.

In the sciences, the number of people who cite your paper is worth more than the number of papers you have. Many citation search engines (e.g. Web of Science, or whatever it’s called now) will tally up the number of times your paper has been cited, which is the scientific equivalent of penis length.

The problem with citation indices is that they give no indication of the quality of the paper being cited, because they make no distinction between complimentary, critical, and completely dismissive citations.

For example, i might write…

“Smith’s research[sup]1[/sup] shows conclusively that [blah blah blah], and his important study needs to be considered closely in any investigation into this issue.”

…or, i might write…

“Smith’s research[sup]1[/sup] is the biggest load of dog’s bollocks ever written about [blah blah blah]. It demonstrates flawed methodology and spurious analysis, and should be discounted by every rational investigator.”

The citation indices would give me one point for each of these, but i know which one i’d rather have.