Editorial:
Is Software Testing Essential or Accidental?

Published in volume 19, issue 1, March 2009

This issue contains three papers. Automatic instantiation of abstract tests on specific configurations for large critical control systems, by Flammini, Mazzocca, and Orazzo, proposes the interesting idea of creating abstract tests from system requirements and automatically instantiating them into concrete tests. Transition covering tests for systems with queues, by Huo and Petrenko, proposes another technique to automatically generate tests, this time for concurrent systems. Generating input data structures for automated program testing, by Chung and Bieman, studies another aspect of automatic test data generation, describing how statements are connected in terms of constraints that can be solved to yield test inputs. The common theme of these three papers is the search for automated ways to solve the hardest essential problem in testing software: generating test input values.

When I was a college senior, a manager from IBM visited one of my classes. I will never forget his message. He said computing was in its infancy and we should expect great changes during our careers. He also said programmers spent 90% of their time on activities that were not directly related to the problem the software was trying to solve. The most time-consuming activity was debugging, followed by things like keeping decks of punched cards in order, struggling with odd syntax and even odder semantics in poorly designed languages, wrestling with computers that had too little memory, convincing poorly designed operating systems to run our programs the way we wanted, and understanding what customers wanted.

In graduate school, I read Brooks' famous paper "No Silver Bullet: Essence and Accidents of Software Engineering" [2]. His paper echoed the talk from the IBM manager and connected those issues with the philosophy courses I took in college. If you haven't read it, Brooks philosophized that software developers solve accidental problems, which are "difficulties that today attend its production but are not inherent," and essential problems, which are "difficulties inherent in the nature of software."

The IBM talk and Brooks' paper sparked my interest in software engineering, which I still view in large part as reducing the time software developers spend on accidental problems. I tell my students that we have made great progress: we are probably near a 50%/50% split between accidental and essential work, but still far from a mature level, which I estimate to be about 10% accidental and 90% essential.

As my interest in software testing grew, I sometimes wondered whether testing solves an accidental or an essential problem. Testing is often treated as accidental in industry, where it is sometimes barely done at all, often done poorly, and seldom done well. Testing is often cut when schedules and budgets overrun, which suggests that testing is inessential. On the other hand, processes such as agile development, with its emphasis on test-driven development, put testing front and center, making it sound essential. Even Microsoft claims an increased emphasis on testing. In a recent interview, Bill Gates said that 50% of the people at Microsoft are testers and that the programmers spend 50% of their time testing, making Microsoft more of a testing organization than a development organization [4].

Regardless of the fundamental question, testers spend much of their time solving accidental problems. I believe the essential problem within testing is designing good input values. When testers test software by wiggling a mouse and banging on a keyboard, a lot of accidental problems are being solved, but with very little test design. Test automation, test execution, evaluating the results of tests, and managing testers are all accidental problems, and they occupy much more than half of testers' time.

In my first year of graduate school, I learned about the "edit-compile-debug" cycle and asked my professor, "What about testing? Don't we need to test before we debug?" I still do not understand how we can teach students, or expect programmers, to debug without testing.

As a programmer, I always felt we should use an "edit-compile-test-debug" cycle. But all I really wanted to do was RUN my program! I didn't really care about compiling, let alone testing and debugging. They were accidental things I was forced to spend time on because my technologies were primitive, my languages poor, and my concentration imperfect. I still believe that if we want testing to help us create better, more reliable, and more secure software, we must integrate testing with the compiler or IDE. Once the program (or class or package) compiles cleanly, the IDE should generate tests, run them, and report the results. Why should semantic errors be treated so differently from compile errors?

Yet when we look at the tools available to industry, there are two huge gaps. One is between those tools and this ideal view of how testing should work. The second is between research concepts and prototypes and the tools in industry. The most widely used testing tool in industry is probably JUnit [1], a fine tool that I use in my classes. Yet most of its functionality was captured in a small piece of the Mothra system I built in the 1980s [3], and it does less than the test drivers I assigned as class projects in the 1990s. Where are the automatic test data generators, based on formal test criteria? Where are the easy-to-use, process-integrated measurement tools that tell us how thorough our testing was? Why, in 2009, do developers spend 50% or more of their time on essential problems, while testers remain in the dark ages, spending 70%, 80%, or more of their time on accidental problems?
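To make the gap concrete, here is a minimal sketch of the kind of JUnit test driver in question; the toy classify() method is hypothetical, included only to keep the example self-contained. JUnit automates test execution and result checking well, but notice that the input values still had to be invented by the tester; no formal test criterion generated them.

// A minimal sketch of a JUnit 4 test driver, the style of tool referred to above.
// The classify() method is a hypothetical toy triangle classifier, included only
// to keep the example self-contained. Note what JUnit does NOT do: it runs tests
// and checks results, but the inputs (3,3,3 and 3,4,5) were designed by hand.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TriangleTest {

    // Toy implementation standing in for the class under test.
    static String classify(int a, int b, int c) {
        if (a == b && b == c) return "equilateral";
        if (a == b || b == c || a == c) return "isosceles";
        return "scalene";
    }

    @Test
    public void equilateralTriangleIsRecognized() {
        assertEquals("equilateral", classify(3, 3, 3));
    }

    @Test
    public void scaleneTriangleIsRecognized() {
        assertEquals("scalene", classify(3, 4, 5));
    }
}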

[1] Kent Beck and Erich Gamma, Test Infected: Programmers Love Writing Tests, Java Report, 3(7):37-50, July 1998.

[2] Frederick P. Brooks, Jr., No Silver Bullet: Essence and Accidents of Software Engineering, IEEE Computer, 20(4):10-19, April 1987.

[3] Rich DeMillo, Dany Guindi, Kim King, Mike McCracken, and Jeff Offutt, An Extended Overview of the Mothra Software Testing Environment, Second Workshop on Software Testing, Verification, and Analysis, pages 142-151, Banff, Canada, July 1988.

[4] John Foley and Chris Murphy, Q&A: Bill Gates On Trustworthy Computing, InformationWeek, May 2002, http://www.informationweek.com/story/IWK20020517S0011

Jeff Offutt
offutt@gmu.edu
27 January 2009