Editorial

Published in volume 28, issue 5, July 2018

This issue contains two excellent papers that present novel techniques to improve reliability models and to decrease test suite execution time. "Neural Network for Software Reliability Analysis of Dynamically Weighted NHPP Growth Models with Imperfect Debugging," by Pooja Rani and Ghanshaym Mahapatra, combines three well-known non-homogeneous Poisson process models with a neural network to improve reliability analysis. (Recommended by Min Xie.) "Speeding up Test Execution with Increased Cache Locality," by Panagiotis Stratis and Ajitha Rajan, presents algorithms to reorder the execution of large test suites to reduce the number of cache misses, and therefore decrease the execution time. (Recommended by Yvan Labiche.)

What is the value of the peer-reviewing system?

Scientists have been publishing research papers in scholarly journals since the 17th century [1]. And the peer review system, flawed as it is, has been used the entire time. I’ve written about peer reviewing before, including how to do it [2], why we should do it [3], how not to do it [4], and the benefits to reviewers [5]. Here I ask an existential question: What is the value of peer reviewing?

As a journal editor, I see many papers that our readers never do. Papers that are plagiarized, papers that have no scientific content, papers that have very little scientific content, and papers that are written so badly they are incomprehensible. We desk-reject almost half of STVR’s submissions. We even have template emails for the different cases.

I also see papers that look like good scientific papers but are significantly flawed: the experiments are poorly designed, the ideas do not work, somebody else already had the same idea, or the evaluation does not convince the reviewers. Most scientists do not see these papers, or only see the few that they review.

The papers that are published are almost always better than the original submissions, often much better. Very few papers are accepted on first submission, and most go through two rounds of reviewing. I know revisions are frustrating to the authors, but they help. I’ve authored almost 60 journal papers, and every single one was improved by the reviewing process. Many of them were much better because of the reviews.

I think many non-scientists imagine the peer review process to be subjective and full of politics. Of course, we are all human and susceptible to bias, but with three reviewers, a reviewing editor, an editor-in-chief, and multiple revisions, the bias almost invariably gets washed out by intellectual merit. And that is why editors are neither autocrats nor accountants. We are arbiters who help the reviewers find good papers among the bad, and help the authors find good results that sometimes hide inside bad presentations.

So the peer review process has two primary purposes:

  1. Filter papers that nobody wants to read
  2. Improve papers that are worth reading

Sure, the Internet allows us to publish everything anybody writes. But if we published everything, how could busy scientists decide which of the hundreds of papers to read? The peer review process is absolutely essential to scientific progress, which could be why it has lasted for more than three centuries.

[1] Publish and don’t be damned: Some science journals that claim to peer review papers do not do so, The Economist, June 2018.

[2] Jeff Offutt. Standards for reviewing papers (Editorial), Wiley’s journal of Software Testing, Verification, and Reliability, 17(3), September 2007.

[3] Jeff Offutt. Why should I review papers? (Editorial), Wiley’s journal of Software Testing, Verification, and Reliability, 17(4), December 2007.

[4] Jeff Offutt. The Downward Death Spiral Review Process (Editorial), Wiley’s journal of Software Testing, Verification, and Reliability, 26(8), December 2016.

[5] Jeff Offutt. Is Paper Reviewing a Transaction, a Service, or an Opportunity? (Editorial), Wiley’s journal of Software Testing, Verification, and Reliability, 27(4-5), June 2017.

Jeff Offutt
George Mason University
offutt@gmu.edu
28 June 2018