Editorial

Published in volume 26, issue 2, March 2016

This issue contains three outstanding papers: two that contain strong theory and show promise for immediate practical application, and one that can inform a new generation of researchers. The first, A Lightweight Framework for Dynamic GUI Data Verification Based on Scripts, by Mateo, Ruiz, and Pérez, presents a way to integrate verification into a GUI during execution. The runtime verifier reads verification rules from files created by the engineers and checks the state of the GUI for violations while it runs. (Recommended by Peter Mueller.) The second, Model-Based Security Testing: A Taxonomy and Systematic Classification, by Felderer, Zech, Breu, Büchler, and Pretschner, surveys and summarizes 119 papers on model-based security testing. This paper should become the first port of call for anybody doing research in the area. (Recommended by Bogdan Korel.) The third paper, Generating Effective Test Cases Based on Satisfiability Modulo Theory Solvers For Service-Oriented Workflow Applications, by Wang, Xing, Yang, Song, and Zhang, addresses the technically difficult problem of testing service-oriented applications developed with WS-BPEL. Many execution paths in WS-BPEL applications are infeasible. This paper tackles that problem and shows how to generate tests by finding test paths from embedded constraints. (Recommended by Bogdan Korel.)



How to Revise a Research Paper

A well-crafted process for revising a journal submission is crucial to eventual acceptance.

Although the initial reaction to the reviews may be negative, it is important to be proactive and positive. Researchers, even world-renowned ones, will always be criticized, fairly or not, and we must be able to respond to criticism in positive ways. “The reviewers were blind and close-minded,” a common complaint, may even be valid, but expressing it does not help achieve the goal of publishing the paper. Authors can't make reviewers or editors smarter. This is yet another situation where we must strive to change the things we can and accept the things we cannot.

In this editorial, I walk through the process that I have used to revise journal papers for two and a half decades.

I start my revision process with three initial steps. First, I look at the decision. If it is an “accept,” “minor revision,” or “major revision,” I celebrate. I view a major revision as an “accept after lots of work.” For emotional reasons, I put off reading the reviews until later that day or the next. Even the reviews for a minor revision may contain bothersome comments, and a reaction of “how could the reviewer be so blind?” is common. Several days later, I return to the reviews for a deep, detailed analysis of what they said.

Being proactive is essential. If the reviewers misunderstood, how can the author change the writing so that reviewers will understand the next time? If the reviewers weren't satisfied, can the work be better motivated? If the reviewers did not believe the work truly solved the problem, can the problem be restated? Like software, no paper is ever perfect. Like testers, the reviewers' job is to help the authors improve the paper.

Recently a co-author and I got reviews asking for a major revision. The reviews asked us to throw the previous empirical study out and start again. (As an editor, I would define that as a reject, but that's another story [1].) The reviews were strange, as if the reviewers had read the wrong paper. They reflected neither the paper's goals nor its results. Three reviewers completely misunderstood the paper! We finally found a key review comment that helped us realize we had buried our most important goal inside a subsection of the experimental design ... in a formula! Our title, abstract, introduction, and research questions had all sent the reviewers in the wrong direction.

That's an extreme case, but it illustrates the main point of response letters. Take responsibility! After all, authors want a paper accepted, but reviewers don't care. They simply want to write a competent review with minimal effort. And they shouldn't care. If they did, they would be conflicted.

The first analysis of the reviews should identify all substantive comments. Some reviews have them neatly organized and numbered; others have long paragraphs that make several comments. As an author, your goal is to make sure you read, understand, and address every comment, so organization is essential. My process is to print the reviews and start by numbering every individual comment (“R1-1,” “R1-2,” ... “R2-1,” ...). Some are related, so, for example, if reviewers 1 and 3 make the same comment, I write “R1-4, see R3-7,” and “R3-7, see R1-4.” In my first pass, I try not to consider changes. I just want to understand the comments.

My next pass is usually done in a meeting with my co-authors. This is a terrific opportunity to teach students some of the subtle details of high-quality research. We connect each comment to a location in the paper. Some reviewers make that easy by identifying the page and line number explicitly; others do not. We write our own comment number on a printout of the paper, and if it's not there already, we write the page number on the review.

Next, my co-authors and I consider responses. With a different colored pen, we make a short note on the paper and the review about the planned change. The note on the review is usually very brief; for example, “fix,” “reword,” or “ignore.” All reviews have a few comments where we're just not sure what to do. We can’t ignore them, but we can postpone. We simply write a question mark to help us remember to come back later.

Next, we assign jobs to the co-authors. The lead author usually takes responsibility for the simple jobs, and the more complicated rewriting goes to the expert on that particular part of the paper. The most experienced writer usually works on the abstract and motivation. Just as with the initial version, whoever has the best grammar and writing skills should make the final pass.

Next, we modify the paper and develop the response letter simultaneously. I will discuss the response letter in detail in a subsequent editorial; for now, just assume that it lists every comment and has a direct response for each.

As we change the paper, we check off each comment on the paper copies (with a different color) and draft the responses. Sometimes drafting the response makes it easier to change the paper, sometimes the change makes the response easier to write, and sometimes it doesn’t matter which is done first.

After each co-author takes his or her turn, we are usually left with a few troublesome comments. Sometimes the response is simply “no,” sometimes it is “good idea, but we can’t do that,” and sometimes the authors have to go back to the laboratory. Often, making the other changes makes the most difficult comments easier to address. It certainly helps when the editor makes clear which comments are mandatory. If the editor's note says “Mandatory changes are X, Y, and Z,” ignoring one of them will almost certainly lead to rejection. If the authors choose not to make a mandatory change, the response must convince the editor that the change should not be made. Even a well-argued case against a mandatory change is risky, so be prepared for the possibility that the editor and reviewers are not convinced.

Revising papers is not easy, but it is essential. Viewing revision as a collaboration among the authors, the reviewers, and the editor can make the process easier and more effective. In a very real sense, all parties have the same goal: to publish good papers in the journal.

Criticism is painful, but it is also essential. My advice is to ignore unfair criticism, pity those who write ignorant criticism, and learn from justified criticism. For anyone determined to avoid criticism entirely, Aristotle offered excellent advice: “Criticism is something we can avoid easily by saying nothing, doing nothing, and being nothing.”

I would like to acknowledge Mary Jean Harrold for help with this editorial. We developed this process together in the first years of our careers. Naturally, this process was influenced heavily by our PhD advisors, Mary Lou Soffa and Richard DeMillo, as well as dozens of collaborators, most notably Paul Ammann.

[1] Jeff Offutt. Standards for reviewing papers (editorial). Software Testing, Verification and Reliability, 17(3), September 2007.

Jeff Offutt
George Mason University
offutt@gmu.edu
22 January 2016