I’d like to dispute a few points about Chip Morningstar’s allegedly excellent description of academic insularity.
First, a Harvard undergrad would have to go out of his or her way to avoid ever being in a classroom with a tenured faculty member. But even if the general point is granted–that at many research institutions a lot of undergraduate teaching is done by non-tenure-track faculty, non-tenured faculty, and grad students–how is that proof of the insularity of tenured faculty members? The assumption seems to be that more contact with undergrads would somehow count as a serious check on the quality of faculty research. But why would that be true?
Second, of course most peer-reviewed articles are read by other specialists in the field–just like most articles on hobbies such as stamp collecting are read by other stamp collectors. Why is that a concern?
Third, according to Chip decisions about tenure and promotion in the humanities are thoroughly insiderish because, “They are supervised by deans and other academic officials who themselves used to be professors of Literature or History or Cultural Studies.” That’s not true. Some of these deans and officials were never faculty and, more important, most involved in any given decision are outside of the specialized field. Convincing a tenured chemist of the value of the research of a classicist can result in a very complicated discussion–not at all a cozy wink and nudge between insiders.
Fourth, the claim that “occasionally a Professor of Literature will collaborate with a Professor of History, but in academic circles this sort of interdisciplinary work is still considered sufficiently daring and risque as to be newsworthy” is ridiculous.
All that said, there are plenty of difficulties with the process of peer review, but as others have said, what would be better?
The experience of Saint Cad is, IMO, a fairly common one, but hardly proof of a process corrupted by cozy insiderishness.
I wrote an article surveying the amount of segregation in Los Angeles Unified schools. The paper was rejected, not because it was bad - indeed the three referees liked the topic . . . and then explained how they would have written the article if they were doing the research. Interestingly enough, they had three completely different takes on the research and I never resubmitted the paper because I had no clue which report to follow.
The law journals I have experience with work on the double-blind approach - reviewers don’t know the author of the paper, and the author doesn’t know who reviewed it.
Saint Cad, I have to agree with Capt. Ridley’s Shooting Party. In that situation you could (in addition to or in anticipation of submitting to another journal) 1) ask the editor of the journal for guidance (since that person might have an informed opinion on which of the readers offered the best advice) and 2) ask other scholars in your area for their opinion (assuming that you have access to a collegial network of this sort).
I may actually get back to it. One of the referees wanted me to use the paper as a springboard to discuss the 50th Anniversary of Brown v. Board, since I showed that despite the ethnic diversity and integration efforts of LAUSD, most schools are highly segregated. Another referee wanted me to show a change over time, so I was thinking that around 2010, as the economy worsens, I will take a look at the new census data, recompile all of the data, and see how the segregation figures look.
But depth is part of breadth (well, not mathematically, but you get what I mean). You can’t claim both to want your experiences to be as broad as possible and then also say you have a distinct preference for one source over another. The more things an author knows, the fewer mistakes they’ll make - fair point, but you seem to be assuming that there’s some kind of point where a generalist knows enough about a subject to be accurate, at which point they can then move on to other subjects; that generalists can recognise the point at which knowing anything more (and becoming specialised) is superfluous, and further that you can recognise which generalists actually do this.
You started a thread a while back that suggested a rather significant misunderstanding of evolutionary psychology - I don’t mean your disagreement with it, which is fine, but a misunderstanding of even what the general consensus of that field was towards it. Your OP in this thread, too, assumes similarity between all fields of academic peer review. The one time you mentioned a specific field, you generalised it as being applicable to every field. You can analogise specialism as binoculars - useful for pinpointing detailed subjects, but it narrows your view; take the binoculars off, and you have a much wider range of vision. You seem to have neglected that you can’t see detail so much anymore. Generalism is good, but you need both generalism and specialism to make sense of things, and especially you shouldn’t assume accuracy when you have an admitted predisposition towards one of them.
I would also like to make this point: breadth of experience doesn’t mean learning from everything around you. Personal experience may be broad, but it’s not that broad. Breadth of experience means learning from everything around everyone. And that’s why peer review is important; it means we can learn from everyone else’s thoughts and findings, and they can learn from ours.
After all, if you don’t think there’s worth in making your ideas and findings known to others - why did you open a GD thread on it?
The fundamental misconception is that peer review is how the good research is distinguished from the bad (erroneous, fraudulent, etc.) research. Peer review probably won’t catch fraud, for instance. But it will (usually) keep fundamentally poor work out of the literature. Where there is active research on a question, bad research will be tested by others, and if it is truly bad, that gets pointed out relatively quickly. That’s one of the key functions of a researcher, and it’s more or less separate from the peer review process. I say more or less, because reading something that’s not quite right, but I can’t disprove, makes me try to replicate those results (if I’m also working on something similar). That happens with others as well.
Even granting, for the sake of argument, that your assertion here is true, that there are insular groups that communicate and argue largely with and between only themselves, then what, exactly, is the problem? What does it matter if some subclique disappears off into the corner? If nobody else is listening particularly to them, if their arguments don’t escape their ideological event horizon, then what possible difference does it make what they say to one another? Don’t you see that you’re getting worked up over a tautology? To wit, if nobody cares what they’re saying, then why should anybody care what they’re saying?
How can the worth of any system be judged except in comparison to alternatives? How can a determination of value be made in isolation? If I show you a man standing on an empty plain, how can he be said to be short or tall until another man stands next to him? It’s completely pointless to carve away the context and then attempt to assign, in a vacuum, some kind of necessarily arbitrary judgment.
In baseball, the hitter with a .400 batting average is a towering hero. But you don’t know that unless you know the rest of the game. If all you know is that this one guy stands up at the plate and gets a hit in just two tries out of five, and on the other three tries he fails and walks back to the bench, you could very well assume that he’s an incompetent, non-performing athlete. It’s not until you put his performance in the context of the game, and look at him next to his fellow players, that you have a meaningful understanding of the significance of his talent and skill. Without that context, a failure rate of three in five looks pretty bad. And it’s just that kind of context that you’re proposing be discarded.
I know you have an enormous vested interest in locating a crack in the edifice of the scientific process so you can wedge in your religious beliefs (one discussion of many), but leveling a half-assed nonsense argument against one of its methodological pillars is not the way to go about it.
Morningstar doesn’t say that teaching undergraduates would directly affect the quality of research. His point, as I take it, is that teaching undergraduates would put the professors in contact with people outside their direct clique. This would have advantages. For one thing, professors would get a reminder of how their beliefs look in the eyes of the population at large.
But why focus on undergraduates as the key group for contact outside the “direct clique”? Why would teaching undergraduates be more enriching than, say, volunteering for the kids’ school; playing bridge with a group of people that includes medical doctors, teachers and computer programmers; taking part in university committee work with people in different fields in the arts, sciences or engineering; or befriending the next door neighbor who is a lawyer?
If the answer to that question presumes that tenured faculty would try out their research ideas on undergraduates in ways that they would not do with their friends and colleagues in the situations just described, I have two responses. First, I think it is the rare tenured humanities professor who never teaches undergraduates. Quite the contrary: the typical load for a research professor is 2/2 (or perhaps 1-2-2 or 1-1-2 on the quarter system), and there simply isn’t demand for that many graduate courses. Second, not only do professors seldom teach their research in undergraduate classes (or even graduate classes), they are actively discouraged from doing so.
So to the extent that the testing of “beliefs” simply means getting into the classroom and teaching one’s stuff at the undergraduate level, tenured faculty including at the very best research institutions do that all the time–and they get plenty of feedback including course evaluations to see how their ideas are flying with the under-21 set. But if the idea is to see if one’s research would fly in the undergraduate classroom, there’s a pretty strong consensus that that’s not the point of undergraduate teaching.
In fact the research and teaching aspects of a professor’s career are in many respects separate abilities–different parts of a single career that can complement each other but which also draw on different skills.
The debate that we’re having is precisely whether we should care what they’re saying. After the subclique returns from the corner with their peer-reviewed conclusions, should we trust them more than we trust people with a broad base of knowledge?
A person is a peer group of one. If your debate is comparing peer review to a single generalist, I would tend to say that the generalist shares all of the weaknesses you’ve claimed of peer review; what is more cliquey, more prone to assumptions of correctness, than the person whose sounding board is themselves?
I have offered alternatives for gathering and evaluating human knowledge, just not for running the academic world. As I’ve said, I’ve departed from the academic world and I no longer care what methods they pursue. I now gather my knowledge from a variety of sources, both academic and non-academic. The question is whether I am an inferior human being because I sometimes question what the academic authorities say. Some on this board say that I am. (Examples can be found in threads already linked to earlier on.) I say that I am not. I started this thread to see if any of my opponents would defend their viewpoint.
Let me try to explain where I’m coming from. There are some subjects that are a part of the human experience for the great mass of people, rather than just a handful of specialists. Examples are literature, art, music, politics, economics, psychology, linguistics, and the general study of humans and society (which is split up into fields like sociology, anthropology, history, religion, philosophy, and so forth). Since we all experience a great deal relevant to all these fields, we all have some knowledge about these things. When there’s widespread agreement about certain facts and opinions in these fields, then we have some knowledge we can consider trustworthy; common sense, in other words.
Of course a common person does not know everything there is to know in each of these fields. The point is that he or she does have a decent foundation. Thus, an ordinary individual is not totally helpless when confronting an article or book relating to one of these fields.
Hence if a professor comes with a peer-reviewed article declaring that all men secretly desire to murder their fathers and marry their mothers, or that human beings are genetically predisposed towards polygamy, or that Finnegans Wake is the greatest novel in human history, or that pornography helps to uplift and empower women, or that a native speaker of a language cannot make mistakes, the average person is not obligated to treat such conclusions as the gold standard. Indeed, they would fall far short of their potential if they did. For those who believe that humans are obligated to use their reason and observation skills to the utmost, that would be an outright failure.
So when faced with a new premise from an academic specialist, the ordinary person often can judge it. And if the premise is flawed, the logical conclusion is that the rest of what the specialist says will also be flawed. ‘Garbage in, garbage out.’
Now there are other topics where the ordinary person isn’t so fit to judge. But if we see a lot of nonsense coming from the academic world (and there seems to be agreement that we do), then it’s reasonable for ordinary people to cast a skeptical eye towards all of it.
There are a few assumptions you’re making, ITR, that are pretty flawed. First and foremost, you seem to be under the impression that people within and outside academia take any peer-reviewed study to be fact and never question it. That is simply not true. Probably the greatest skill I learned in undergrad was to challenge a study, pick it apart, and decide what is good and what is bad. Nobody ever told me that published = right, and I’ve seen many a person challenged when backing up their assertions with someone else’s shaky research.
I’m a linguist, and I can tell you for a fact that no real linguist has ever made a statement like that. Of course people make mistakes when speaking, and many a career has been made studying how and why people make mistakes while speaking. What I think you’re trying to contest is the descriptive point of view as opposed to prescriptivism. And I think this statement shows that the average person does not know much about linguistics. I know that, before taking any linguistics classes, my understanding of how language works was way, way off.
On the specialist vs. generalist issue, I prefer generalists because they’re a lot less likely to miss out on critical knowledge. A specialist, particularly one who moves in a clique of like-minded-specialists, easily falls into the patterns of the people around him. That means not just believing the same facts, but also the same methods for approaching a problem. For instance in social science fields the preferred method for finding what people think is to give a hundred or so freshman students a survey with a few yes-or-no questions. Academics may be vaguely aware that there are problems with this approach but it prevails because almost everybody in those fields agrees to use it.
A generalist, on the other hand, can approach problems with a much wider array of tools. If you want to know about mate selection, you could learn more from reading Pride and Prejudice than by asking college students to evaluate the attractiveness of people in photographs. If you want to understand the effects of income inequality, try reading The Grapes of Wrath.
No, we can’t. Throughout history people have assumed many things to be common sense when they aren’t. Experience is of course the only way we can actually know things, but by definition personal experience is the experience of one person, filtered by one mind. I don’t have much of a problem with saying, for example, that limited knowledge coming from experience is acceptable for forming a limited understanding, but it’s certainly not true in all cases. That many people believe something to be true just means that it is believable, not that it is necessarily true.
True, but if one accepts the possibility of a learned person in a field being wrong, for whatever reason, it only makes sense to assume in general that we are also likely to be wrong. And, if you’re assuming that personal experiences are an excellent method of gaining knowledge, it only makes sense to attribute a higher chance of truth to those who have had greater experience.
I concur. But, likewise, the average person is not obligated to treat their disagreement as the gold standard. And indeed the whole point of peer review is that you don’t just take one peer-reviewed article and base your views on it - you look (if possible) at as much of the body of work as possible.
I think you’re misunderstanding the claims of validity of peer-review from others. You seem to be assuming that people are saying that a peer-reviewed article is almost sacrosanct, an unquestionable essay of truth. But that’s not the case; the idea is that while a single peer-reviewed article is ideally a good way of objectivalising (is that a word?) the experience of one person or a team, it is rather the general conclusions of many peer-reviewed articles that should be considered trustworthy. Just as a report on a study will attempt to account for potentially problematic variables, the peer review process as a whole can help form a better general understanding, and provide backing and evidence for the better pieces.
Indeed, if the premise is flawed. But that presumes a correct understanding of the premise on behalf of the reader. As I brought up earlier, look at your evolutionary psychology thread; I would argue you misunderstood one of the most basic premises of the field.
I would disagree; where is it you’re getting this view of agreement from, out of interest?
And no, there’s not a reason, and I’ll tell you why: consider for a moment the amount of nonsense that comes from ordinary people ourselves. I can’t speak for you, but the number of times I’ve heard nonsense from the academic world is pretty hugely dwarfed by the number of times I’ve heard it from us common types. Your method assumes that “ordinary people” (and, I presume, personal experience) are a considerably more accurate measure of correctness and a considerably lesser producer of nonsense, to which I’m afraid I would have to say: Ha.
Ah, because of course what you take out of those books is precisely what I do.
A generalist has many tools but only one method of analysis: their brain. That and that alone will determine what their results will be. If I read Pride and Prejudice and take away from it something very different to you - which of us has learned more? There’s no way of knowing. Well, except, perhaps, some kind of system wherein we share our knowledge of what we think we’ve learned. Perhaps you could write it up, so you could share it with the most people possible. And I’ll write up my side, and then we could, say, put them together in some kind of set of writings dedicated to that topic, where people who are interested would know where to look. That could be an interesting idea. We’d need a name for the whole process, I guess.