I won't name names. Technically, it is not even possible for me to name names, because the review process is double-blind. However, as many of you know, between theory and practice there is only a quick Google search or a sly glance at the document properties, and very often you have a pretty good idea of who's behind the review. In this particular case, however, even a six-year-old could have figured it out.
I've spent a considerable portion of the day wading through a badly written, poorly structured 60+ page maze of a linguistics paper. Why? Because a reviewer thought that the one footnote I had previously devoted to this publication didn't do it justice. As I was reading, however, it became clear that most of the other comments also implicitly referred to this paper. I needed to add some stuff on adverbs? Turns out the paper had a whole section devoted to adverbs. Discussion of languages X and Y was missing? Turns out languages X and Y were the centerpiece of this paper. More on cross-linguistic variation? Bingo! A subsection on cross-linguistic variation. It soon became clear that 90% of the 'major issues' pointed out in the review could be paraphrased as Refer More Extensively To My Paper!
Now, don't get me wrong, I'm a big fan of peer review in academia. I don't think there's a single publication of mine that didn't get better in some way thanks to reviewers' comments. (Even the review which prompted me to write this post contained some valid points and pointed out a number of weaknesses in the paper.) At the same time, however, oftentimes a paper also gets worse in some respects because a reviewer is trying to push his own agenda. So let me try and lay down Three Simple Ground Rules for Reviewing:
- Be specific: this is my number one pet peeve when it comes to reviews. You think a paper sucks? Fine, give specific, concrete, clearly formulated arguments to back that up. A whole lot of literature is missing? Give the full bibliographic info of at least one of those missing references. There's a problem in section 2? Give the page and line number of where the problem occurs. Vagueness in a review is not only utterly useless to the author, it's also annoying for the editor, who has no way of knowing whether you're being vague because you didn't feel like investing time in the review, or because the paper really is deficient in some fundamental way.
- Structure your review: editors don't have the time to read all the papers that are submitted to their journal. This means your review should help guide their decision and in this respect, structure is golden: a clear, concise overall judgment of the paper at the start of your review, a numbered, structured list of the major issues, and a possibly longer list with smaller issues. No lengthy summary of the paper—the editor can read the abstract—and no long swaths of rambling prose; this text isn't about (showing how brilliant) you (are), it's about evaluating the paper as objectively as possible.
- Be prepared to think along: this is probably the most controversial of the three, but I feel that if a paper starts out from a number of assumptions or axioms, the reviewer should think along inside that framework, unless he has (concrete, specific, clearly formulated; see point 1) arguments for rejecting those assumptions. Very often a reviewer simply disagrees with some assumption (because he adheres to some other flavour of linguistics) and starts bitching about it, trying to push his own agenda.
Call me naive, idealistic, or just plain stupid, but I think that a couple of simple rules like these, if properly adhered to of course, could really improve the reviewing process. It might even make it possible to get rid of the anonymity of peer review, but that's a whole nother can of worms.