Editorial:
Globalization—Standards for Research Quality

Published in volume 24, issue 2, March 2014

This issue features three interesting papers, all of which offer real solutions to real problems and demonstrate their success on industrial software. The first, A Practical Model-Based Statistical Approach for Generating Functional Test Cases: Application in the Automotive Industry, by Awedikian and Yannou, presents a new way to generate tests from models. The results include a tool that selects test inputs (the test generation problem), predicts the expected results (the oracle problem), and suggests when testing can stop (the stopping problem). They have demonstrated their approach on automotive software. (Recommended by Hong Zhu.) The second, A Novel Approach to Software Quality Risk Management, by Bubevski, offers an advance in managing the risk of software. Bubevski's technique uses Six Sigma and Monte Carlo Simulation and has been successfully used on industrial software. (Recommended by Min Xie.) The third, Automatic Test Case Generation From Simulink/Stateflow Models Using Model Checking, by Mohalik, Gadkari, Yeolekar, Shashidhar, and Ramesh, uses model checking to solve the problem of test data generation based on models. This technique has also been used successfully on industrial automotive software. (Recommended by Peter Mueller.)
I wrote about The Globalization of Software Engineering in a previous editorial [1], and followed up with a discussion of language skills to support globalization [2], and then uses of references and citations [3]. Another difficult difference affected by globalization is the expected standard for research quality.

Scientists usually learn about research in graduate school. The process has its roots in the middle ages and is based on the ancient apprenticeship model [4]. After finishing our classes, we spend years as an "apprentice" to a "master," the PhD advisor. This advisor is responsible for teaching us the dozens of skills, strategies, and tactics required for a successful research career, including standards for the quality of the research. We also learn about research from other professors, by reading papers and reasoning about how the research was conducted, but our advisor has primary responsibility.

In most cases, our advisors learned from their advisors, they from their advisors, and so on, sometimes back centuries. This is why we are so interested in our genealogies [5]. I have friends who can trace their "academic roots" back to luminaries such as Dijkstra, Poisson, Bernoulli, and Euler.

The model of research apprenticeships has a rich tradition in countries that have a long history of scientific research. Historically, many of these countries are in Europe and North America. However, part of globalization is that other countries, without a long history of scientific research, are trying to kick-start this process. When knowledge and skills tend to be handed down by word of mouth, this is, not surprisingly, difficult.

Thus, the globalization of research results in a large divergence in the standards for research quality. Who teaches new students? Who teaches the teacher? I see the effects of this divergence at STVR. We have a policy of desk-rejecting papers that are out of scope for the journal or that are low quality. We get papers that have no research results, for example, that explain an existing process or concept with an example. We get papers that have insufficient results for a major journal or whose results are not sufficiently original. And we get papers whose writing is so poor that the reviewers would not be able to understand the paper well enough to fairly assess the results.

For these papers, we send polite, regretful rejection letters that are as kind as possible. It is clear that many of these authors are bright enough, hardworking enough, and technically strong enough to carry out high quality research projects. Unfortunately, they simply have not been adequately prepared.

This rarely happens with papers from authors in Europe or North America, regions with long traditions of research. Unfortunately, most of these rejected papers are from the Indian sub-continent or China. These countries (among others) are aggressively trying to improve their economies, education, and research credentials. Professors are encouraged to submit as many papers as possible, and are often funded quite generously.

I see many specific issues with quality. I offer a few below, but this is obviously not a complete list.

- Ideas without validation, or insufficient validation
- Lack of motivation ... why is this research needed?
- Not enough connection to related research or putting this research in context
- In software engineering, research that is not useful to real engineers building real software in industry
- Insufficient theoretical contribution
- Inappropriate use of references
- Confusing organization and presentation

In fact, I teach my own students, the majority of whom are from countries without a long research tradition, all of these things.

It seems an important goal, then, is for countries without long research traditions to somehow absorb the institutional knowledge of how to perform and disseminate research from countries that do have these long traditions. How? It is clear to me that pressuring young scientists without adequate training to publish does not work. That method is frustrating to reviewers, editors, and conference program chairs, and must be frustrating for the scientists themselves. In fact, plagiarism is all too often the result of this kind of pressure. One of my favorite techniques is Brazil's "sandwich" program, in which PhD students are sent abroad in the middle of their studies to work with a research group in their topic. I have had the pleasure of hosting such students, and it is invariably productive and enjoyable. Paid sabbaticals are also effective. Hiring young faculty who received their PhDs from a country with a strong research tradition can also be effective, although sometimes it is difficult to lure the best and the brightest back.

A difficult choice that those of us from countries with a long research tradition must make is whether to take a competitive or cooperative stance toward this aspect of globalization. That is, do we view other countries that try to improve their research as competitors, or do we cooperate by helping? I do not see research as a zero-sum game. If another scientist publishes a result before I finish the work (as Lionel Briand has done more than once), it's an opportunity to use those results and go further. It is self-evident that we will not run out of problems in software engineering in our lifetimes, so more help is better.

[1] Jeff Offutt. The Globalization of Software Engineering (Editorial), Wiley's journal of Software Testing, Verification, and Reliability, 23(3), May 2013. http://www.cs.gmu.edu/~offutt/stvr/23-3-May2013.html.

[2] Jeff Offutt. Globalization—Language and Dialects (Editorial), Wiley's journal of Software Testing, Verification, and Reliability, 23(4), June 2013. http://www.cs.gmu.edu/~offutt/stvr/23-4-June2013.html.

[3] Jeff Offutt. Globalization—References and Citations (Editorial), Wiley's journal of Software Testing, Verification, and Reliability, 24(1), January 2014. http://www.cs.gmu.edu/~offutt/stvr/24-1-Jan2014.html.

[4] Richard A. DeMillo, Abelard to Apple: The Fate of American Colleges and Universities, MIT Press, 2013.

[5] Tao Xie, Software Engineering Academic Genealogy, http://web.engr.illinois.edu/~taoxie/sefamily.htm.

Jeff Offutt
offutt@gmu.edu
31 January 2014