Editorial:
The h-Index Beats the Impact Factor

Published in volume 22, issue 1, January 2012

This issue has two papers with results on real industrial software projects. The first, On the testing of user-configurable software systems using firewalls, by Robinson and White, presents the "just-in-time" testing strategy for user-configurable software and includes results on commercial software. The second, A case study in model-based testing of specifications and implementations, by Miller and Strooper, presents a case study of model-based testing of software specifications and implementations.

Before discussing the h-index, I have the pleasure of making an announcement about the journal. In 2012, STVR will have eight issues per year rather than the four we have had for the last 21 years. This will allow us to clear the backlog of papers and support the increasing amount of research in the software testing field.

My last editorial [1] discussed the reasons why scientists publish papers, and emphasized publishing to influence either the research field or industry. I also wrote an earlier editorial [2] on the problems with the way journals are often evaluated by publishers and universities, namely the "journal impact factor" [3]. My opinion of this deeply flawed criterion has not changed, but I recently learned about another measure that looks more promising.

During our promotion discussions this fall, all the candidates were presenting their "h-indexes." Most of us had never heard of this measurement, and were both curious and dubious. The h-index was proposed by J. E. Hirsch as follows [4]:

The index h, defined as the number of papers with citation number ≥ h
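
To make the definition concrete, here is a minimal sketch in Python. It is my own illustration rather than anything from Hirsch's paper; the function name and the example citation counts are made up.

```python
def h_index(citations):
    """Compute the h-index: the largest h such that at least h papers
    have at least h citations each."""
    counts = sorted(citations, reverse=True)   # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank      # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Example: citation counts of 10, 8, 5, 2, 1 give an h-index of 3,
# because three papers have at least 3 citations each, but there are
# not four papers with at least 4 citations each.
print(h_index([10, 8, 5, 2, 1]))   # -> 3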

The h-index and the journal impact factor have an essential difference. The journal impact factor has a two-year "window"; that is, it only counts citations to papers published in the previous two years. The h-index has no window: it counts all papers an individual has published over a lifetime. The h-index also has some other interesting characteristics. It omits papers that are ignored by other scientists, thus encouraging scientists to publish papers on topics that matter and in venues that are read. It also rewards longevity and productivity in numbers of papers, but only for papers that other scientists read and cite. This means the h-index cannot directly compare scientists who have been working for different lengths of time. A derivative measure might be the h-index divided by the number of years since the scientist's first publication, although I have not seen that used or proposed. Our promotion committee was told that a general rule of thumb is that successful scientists should expect an h-index approximately equal to the number of years they have been working, excellent scientists should have an h-index of about 1.5 times that number, and h-indexes of twice the years working are very rare.
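
As an illustration of that hypothetical derivative measure, the small sketch below divides the h-index by career length and compares the ratio against the rule-of-thumb multipliers. The thresholds simply restate the rule of thumb described above; they are not an established standard, and the example numbers are made up.

```python
def h_per_year(h, years_active):
    """Hypothetical normalization: the h-index divided by the number of
    years since the scientist's first publication."""
    return h / years_active

def rule_of_thumb(h, years_active):
    """Rough reading of the promotion-committee rule of thumb:
    ratio ~1 = successful, ~1.5 = excellent, ~2 or more = very rare."""
    ratio = h_per_year(h, years_active)
    if ratio >= 2.0:
        return "very rare"
    if ratio >= 1.5:
        return "excellent"
    if ratio >= 1.0:
        return "successful"
    return "below the rule-of-thumb expectation"

# Example: an h-index of 30 after 20 years of publishing gives a ratio of 1.5.
print(rule_of_thumb(30, 20))   # -> "excellent"
```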

Of course, for a measure to be successful, we need to be able to calculate it. Unlike in previous decades, the web makes this easy; not surprisingly, free tools for calculating it are available online, most notably Google Scholar.

The h-index is designed for individuals, not journals, so it cannot directly replace the journal impact factor. But it could certainly be adapted. Appropriate modifications would have to be made to account for the age of the journal and the number of papers published per year.
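
One way such an adaptation might look, purely as my own illustration and not a proposal from the literature, is to compute an h-index over every paper the journal has published and then normalize by the journal's age and yearly output. The particular normalization below is an assumption, as are the example numbers.

```python
def journal_h_index(citations_per_paper, journal_age_years, papers_per_year):
    """Sketch of a journal-level h-index: the raw h-index over every paper the
    journal has published, plus one crude (assumed) normalization for the
    journal's age and the number of papers it publishes per year."""
    counts = sorted(citations_per_paper, reverse=True)
    raw_h = sum(1 for rank, cites in enumerate(counts, start=1) if cites >= rank)
    normalized = raw_h / (journal_age_years * papers_per_year)
    return raw_h, normalized

# Toy example with made-up citation counts for a hypothetical 21-year-old
# journal publishing roughly 24 papers per year.
raw, norm = journal_h_index([40, 35, 30, 12, 9, 9, 8, 3, 2, 1, 0], 21, 24)
print(raw, round(norm, 3))   # -> 7 0.014
```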

So for me, this is the first measure of research productivity that I can support. One thing is missing, though. The fourth reason to publish from my last editorial was to influence practice ... the h-index does not measure this. Can we find a measure that does?

[1] Jeff Offutt. What is the Purpose of Publishing? (Editorial), Wiley's journal of Software Testing, Verification, and Reliability, 21(4), December 2011. http://www.cs.gmu.edu/~offutt/stvr/21-4-October2011.html

[2] Jeff Offutt. The Journal Impact Factor (Editorial), Wiley's journal of Software Testing, Verification, and Reliability, 18(1), March 2008. http://www.cs.gmu.edu/~offutt/stvr/18-1-march2008.html

[3] Eugene Garfield, The Thomson Scientific Impact Factor, http://scientific.thomson.com/free/essays/journalcitationreports/impactfactor/, originally published in the Current Contents print editions, June 20, 2004

[4] J. E. Hirsch. An Index to Quantify an Individual's Scientific Research Output, Proceedings of the National Academy of Sciences of the United States of America, 102(46):16569-16572, November 2005

Jeff Offutt
offutt@gmu.edu
30 November 2011