Editorial

Published in volume 26, issue 8, December 2016

This issue contains a real-time verification paper. “A simplification of a real-time verification problem,” by Suman Roy, Janardan Misra, and Indranil Saha, presents a technique that simplifies the real-time verification problem by reducing an infinite-state problem to a finite-state problem that can be solved with model checking. (Recommended by Alan Hartman.)

The Downward Death Spiral Review Process

Although the paper took a long time from initial submission to publication, it was in revision, not review, for most of that time. The peer review process has been one of the hallmarks of science for decades, if not centuries. It is far from perfect and often frustrating. However, it is essential. I view paper reviewing as having three primary purposes:

  1. The reviewing process filters good science from bad. With journal papers, that filter involves an editor-in-chief, a reviewing editor, and (usually) three reviewers. Without reviews, each scientist would need to do his or her own filtering—reading dozens of papers to find two or three worth reading.
  2. Reviews encourage authors to do their best work. How tempting would it be to take shortcuts if we did not know that experts were checking on us?
  3. Reviews help improve research and papers. Reviewers and editors collaborate with authors to turn initial submissions into better papers.

Without peer review, would journals and conferences publish everything submitted? If they did, good research would be drowned out by bad, and good ideas would be presented badly. As editor, I send reviews anonymously to all reviewers and tell them that it takes good reviewers to make great journals. That’s not just a polite phrase; it’s something I deeply believe.

As editor, reviewer, TPC member, and author, I see hundreds of reviews. Many reviews are excellent, most are good, but too many are sub-par. Reviewing is an important skill that should be taught by PhD advisors, but some scientists seem never to have learned it. Some reviews are unprofessionally vacuous. Some contain personal attacks. I often ask that vacuous or attacking reviews be revised, or I find another reviewer. Some reviews are overly positive and miss obvious problems.

Some reviewers think a few missing references are grounds for rejection. Some view their role purely as finding “bugs” rather than identifying good research. Some reviewers dogmatically think that if the initial submission isn’t perfect, it should be rejected. Some would choose a different experimental setup and think their disagreement constitutes a reason to reject. I outlined my thoughts on what should constitute grounds for rejection in a previous STVR editorial [2].

Far too many reviewers make up their minds while reading the abstract or the title, and then only look for reasons to support their decision. This may be unconscious or it may be intentional, but it is obvious to others during conference committee meetings and funding agency panels.

A serious problem that is especially common at conferences is what I call the “downward death spiral.” What’s that? Two reviewers like a paper and one criticizes it. For some reason, many computer scientists think we get more respect for identifying problems than for identifying good science. So the critical reviewer is seen as having “beaten” the positive reviewers. To salvage their pride, the positive reviewers jump on the negative bandwagon. This process continues and feeds on itself, and the paper’s overall score gets worse and worse until a paper with good science gets rejected.

The downward death spiral is terribly destructive to our field. It’s one reason why we “eat our own” during NSF panels, and why ICSE acceptance rates are ridiculously low. I implore every reader to avoid competitions to see who can be more negative. And I beg editors and program chairs to do everything they can to disrupt downward death spirals. We should not reject good research with fixable flaws; we should help authors publish good research.

[1] Jeff Offutt. STVR policy on extending conference papers to journal submissions (editorial). Software Testing, Verification and Reliability, 26(4), June 2016.

Jeff Offutt
George Mason University
offutt@gmu.edu
18 October 2016