This issue has two papers that offer useful ways to test software more effectively. The first, Verification support for ARINC-653 based avionics software, by de la Cámara, Castro, del Mar Gallardo, and Merino, shows how to automatically extract PROMELA models from C programs, and presents results from model checking real-time avionics software. The second, Improved code defect detection with fault links, by Hayes, Chemannoor, and Holbrook, presents results from an experimental study of fault links, or relationships between types of code faults and the types of components where the faults are located.
I recently had an interesting conversation with a younger colleague. The conversation started with deciding where to submit a paper, but quickly became more general. I eventually asked her: "Why do you want to publish this paper?" We came up with some interesting possible answers.
REASON 1—The Resume
A common reason to publish is to "add a bullet to a resume." University tenure and promotion committees often measure a researcher's value by quantity: How many bullets does your resume have? As one of my friends put it, when you get a university job, they give you a jar; each publication adds a marble, and you get tenure when the jar is full. Of course, it is easy to criticize this measurement, but if we are measured with a flawed process, it is natural for humans to adjust to the measurement. After all, junior faculty cannot change the rules.
Bullet counting is also seductively easy. Promotion committees don't have to read the papers or judge the merit of unfamiliar conferences and journals. They don't have to know or understand anything about the research; they just have to count, like in kindergarten.
A slight refinement is to weight publications by the quality of the journals and conferences. The marble model incorporates this by assuming marbles come in different sizes, so some fill the jar faster. But the flaw in this approach is that the weights are subjective and the measurement indirect. Conferences and journals change over time, and individual reviewers and editors introduce variation. And of course, even the best journals publish poor papers and the worst publish good papers.
It is easy for a full professor with over 100 publications to say we should not publish a paper just to add a bullet to a resume. For a junior researcher, even one who agrees the measure is wrong, it is still rational to publish questionable papers in obscure outlets to fill that jar.
REASON 2—Local Measures
The second reason is to satisfy local measures. Some universities will not let junior professors teach graduate students until they publish two journal papers. Others require a certain number of papers before advising PhD students. Still others will not support travel unless the conference is on a specific list. I've seen some of these lists and they make no logical sense to me. They include conferences I would call irrelevant and corrupt, and exclude conferences I consider excellent. Still others only "count" papers published in journals whose Impact Factor is above a certain level (my thoughts on this subjective and badly flawed measure are in a previous editorial [1]).
REASON 3—Influence the Field
The first two reasons are personal because they help the author. They also have obvious flaws. Nevertheless, they are understandable because we must advance our careers. However, a broader view is important. Society pays researchers to work on intellectually interesting problems; in return, software engineering research should offer tangible benefits to society.
Our third reason to publish is to influence our field. A paper that gives other researchers ideas or results they can use to further their work, or that simply influences their thinking, has value that goes beyond our personal needs. Even a paper with flawed results can influence the field by giving others ideas or showing them directions they should not take. But a paper whose results are useless, work only in the lab, or have already been published elsewhere cannot influence the field.
The personal benefits from influencing the field are tangible but long term. These papers enhance others' opinions of our scientific ability. And over the long term, a scientist's most valuable asset is reputation.
REASON 4—Influence Practice
My final reason for writing a paper echoes the journal's tagline, "useful research in making better software." We are in an engineering field, and the ultimate goal of software engineering research is to help real programmers build software better or cheaper. Effective research papers should influence industry. In his 2011 keynote speech at ICSM [2], Lionel Briand exhorted us to write papers that are "relevant," that solve problems industry cares about. This is deceptively complicated. It requires understanding problems and their context, identifying working assumptions, and thoroughly analyzing the application domain.
"Effective" is an important word in this context, for an effective paper satisfies the goals of the author. If all the author cares about is adding a resume bullet, then any publication can be effective. But to effectively influence the field or industry, the paper must have three properties: it must add knowledge to the field, be presented clearly enough for people to understand, and be published in a place that people read (content, clarity, and dissemination). Even a good paper (with content and clarity) that is published in an obscure outlet will not influence the field or the industry. Unfortunately, a paper with good results (content) but a poor presentation (no clarity) may never be understood.
I recently had another conversation with my co-editor-in-chief, Rob Hierons. We were discussing journals whose papers are barely reviewed and seldom read. Rob thought it sad that some colleagues publish in journals like this. Such papers will never influence the field or industry, but should we care? I believe the motivation for publishing papers is important. If young scientists knowingly publish to pad resumes or to satisfy university requirements, it is unfair to criticize them. After all, these measures matter to them.
A broader reason to avoid such publications was given by David Parnas, who wrote that publishing just "for the numbers" hurts scientific progress [3]. It is imperative that young scientists be free to work on relevant research that solves industrial engineering problems.
Young scientists must take the responsibility to solve industrial engineering problems, not just publish for the numbers. They must also choose their outlets carefully to avoid publishing good papers in places nobody reads. And senior scientists must take responsibility too. Promotion committees must put more weight on impact. We must avoid creating conferences and journals that are not read. We must encourage clear presentation in papers. That is, we must reward content, clarity, and dissemination.
[1] Jeff Offutt, "The Journal Impact Factor" (editorial), Software Testing, Verification and Reliability (Wiley), 18(1), March 2008. http://www.cs.gmu.edu/~offutt/stvr/18-1-march2008.html
[2] Lionel Briand, "Useful Software Engineering Research: Leading a Double-Agent Life," keynote talk at the 27th IEEE International Conference on Software Maintenance (ICSM 2011). http://www.simula.no/~briand/BriandICSMKeynote-small.pdf
[3] David Parnas, "Stop the Numbers Game: Counting papers slows the rate of scientific progress," Communications of the ACM, 50(11):19-21, November 2007.
24 October 2011