Research Project Descriptions

January 2002

Dr. Jeff Offutt
Professor
Software Engineering
George Mason University
Fairfax, VA 22030-4444
offutt(at)gmu.edu


I. Testing of Web-based Software Applications

January 2002

This NASA- and NIST-supported project is attempting to develop new ways to test the software that powers web applications. Web software systems are built from heterogeneous and very loosely coupled software components. They typically interact by passing messages that exchange data and activity state information. These heterogeneous message transfers are usually structured using the eXtensible Markup Language (XML), which allows a flexible, common form of data exchange. This research project is attempting to find ways to validate the reliability of data interactions among web-based software system components, particularly components that communicate using XML messages.

Two general approaches are currently being explored. One is based on the idea of "information flows". A significant advantage of web-based software is that it allows data to be transferred among completely different types of software components that reside and execute on different computers. For example, a data item input by a user through a web browser may be transferred through an HTML form to a JavaScript function for syntax validation, then passed across the Internet to a web server, which forms it into a parameter that is given to a Java Servlet. The servlet may do further checking, and then encode the data item in a JavaBean, which may then be serialized and stored to disk, either in a flat file or in a commercial database. This data item may also trigger the return of more data from the database, which results in further processing by a Java Servlet. That data may then be formulated as part of an HTML page and delivered through the web server across the network to the web browser, where it is finally presented to the user.
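
To make this path concrete, here is a minimal, invented sketch of the server-side hops in such a flow: a servlet re-validates a form field that client-side JavaScript has already checked, copies it into a JavaBean, and serializes the bean to a flat file. The class names, the field name, and the validation rule are assumptions made for illustration; they are not taken from the project.

    import java.io.*;
    import javax.servlet.ServletException;
    import javax.servlet.http.*;

    // Hypothetical JavaBean that carries one user-supplied data item.
    class OrderBean implements Serializable {
        private String partNumber;
        public String getPartNumber()         { return partNumber; }
        public void   setPartNumber(String p) { partNumber = p; }
    }

    // Hypothetical servlet: one hop on the information flow path described above.
    public class OrderServlet extends HttpServlet {
        public void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // The browser-side JavaScript already checked the syntax; the servlet
            // re-validates because client-side checks can be bypassed.
            String part = req.getParameter("partNumber");
            if (part == null || !part.matches("[A-Z]{2}-\\d{4}")) {
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "bad part number");
                return;
            }
            OrderBean bean = new OrderBean();
            bean.setPartNumber(part);                     // parameter becomes bean state
            ObjectOutputStream out = new ObjectOutputStream(
                    new FileOutputStream("order.ser"));
            out.writeObject(bean);                        // bean is serialized to a flat file
            out.close();
            resp.setContentType("text/html");
            resp.getWriter().println("<html><body>Order stored.</body></html>");
        }
    }

Each hop in a flow like this is a place where the data item changes representation, which is exactly what makes such flows hard to follow with traditional intra-program data flow analysis.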

Since the data is transformed in both form and content during processing, the more generic word "information" is used. When multiple programming languages are involved and the business application software grows, the flows of information through the various pieces of the web software become extremely complicated. Combined with the abilities to keep information persistent within a user session, persistent across sessions, and shared among sessions, this means the space of potential software faults that can be introduced is enormous. The concept of information flow extends traditional data flow to loosely coupled web-based software. The information flows and control couplings are used to define system-level information flow paths through the web application, and criteria based on those paths are being designed to test the applications.

At a lower level of abstraction, XML messages are being used as a basis for testing peer-to-peer data interactions between software components. An Interaction Specification Model (ISM) is used to describe peer-to-peer software interactions. The ISM consists of an XML Schema, messaging specifications, and a set of constraints. Test cases are XML messages that are passed between the web software components. Classes of interaction-specific mutation operators are introduced and applied to the ISM to generate mutant interactions and test cases.
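
The sketch below illustrates the general flavor of mutating an XML message to create a test case: it parses a message, applies one simple, invented operator (deleting an optional element), and prints the mutant message. The operator and the element name are assumptions for illustration; they are not the interaction-specific operators defined in [2].

    import java.io.File;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Invented "element deletion" mutation operator applied to an XML message.
    public class DeleteElementMutant {
        public static void main(String[] args) throws Exception {
            DocumentBuilder builder =
                    DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document msg = builder.parse(new File(args[0]));   // original message

            // Delete the first <shippingAddress> element (an invented, optional field).
            NodeList hits = msg.getElementsByTagName("shippingAddress");
            if (hits.getLength() > 0) {
                Element victim = (Element) hits.item(0);
                victim.getParentNode().removeChild(victim);
            }

            // The mutant message becomes one test input for the receiving component.
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.transform(new DOMSource(msg), new StreamResult(System.out));
        }
    }

Sending the mutant to the receiving component and comparing its behavior against its behavior on the original message is one way such test cases can expose interaction faults.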

Related courses that I teach are SWE 642, SWE 432, and IT 821.

REFERENCES
[1] Jeff Offutt. Quality Attributes of Web Software Applications. IEEE Software: Special Issue on Software Engineering of Internet Software, pages 25-32, March/April 2002.
[2] Suet Chun Lee and Jeff Offutt. Generating Test Cases for XML-based Web Component Interactions Using Mutation Analysis. The Twelfth IEEE International Symposium on Software Reliability Engineering (ISSRE '01), pages 200-209, Hong Kong, PRC, November 2001.


II. Analysis and Testing of Object-oriented Software

January 2002

This project is an outgrowth of integration testing based on software couplings (project description IV below). Object-oriented software defines abstractions that have both state and behavior. This emphasis causes a shift in focus from software units to connections among software components, which affects many areas of software research. Software testing now needs less emphasis on unit testing and more on integration testing. The compositional relationships of inheritance, aggregation, polymorphism, and dynamic binding introduce new kinds of integration faults.

This research project has the general goal of improving the quality of OO software. After analyzing problems that can result from using OO language features, we developed a modeling tool called the "yo-yo graph". This led to new testing criteria that test for problems with inheritance and polymorphism, based on a quasi-interprocedural data flow analysis. We have developed a new fault model for OO software, which defines a number of specific types of OO faults. These faults have been used for empirical validation of the OO testing criteria, and more recently to develop mutation operators for Java.
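
The fragment below is an invented example of the general kind of fault such a model covers: an overriding method quietly breaks a state invariant that the inherited code relies on, so the failure appears only when the subclass object is used through a superclass reference. It is illustrative only and is not drawn from the fault model in the papers.

    // Invented inheritance fault: the overriding method updates the list of
    // items but never increments the inherited count field, breaking the
    // invariant (count == number of items added) that size() depends on.
    class Bag {
        protected int count = 0;
        protected java.util.List items = new java.util.ArrayList();
        public void add(Object o) { items.add(o); count++; }
        public int  size()        { return count; }
    }

    class Stack extends Bag {
        // Fault: enforces LIFO placement but forgets to maintain count.
        public void add(Object o) { items.add(0, o); }
        public Object top()       { return items.get(0); }
    }

    public class InheritanceFaultDemo {
        public static void main(String[] args) {
            Bag b = new Stack();            // dynamic binding selects Stack.add()
            b.add("x");
            b.add("y");
            System.out.println(b.size());   // prints 0, not 2
        }
    }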

REFERENCES
[1] Roger T. Alexander, Jeff Offutt and James M. Bieman. Syntactic Fault Patterns in OO Programs. To appear, 2002 International Conference on Engineering of Complex Computer Systems, Greenbelt, MD, November 2002.
[2] Yu-Seung Ma, Yong-Rae Kwon and Jeff Offutt. Inter-Class Mutation Operators for Java. To appear, 2002 International Symposium on Software Reliability Engineering, Annapolis, MD, November 2002.
[3] Roger T. Alexander, Jeff Offutt and James M. Bieman. Fault Detection Capabilities of Coupling-based OO Testing. To appear, 2002 International Symposium on Software Reliability Engineering, Annapolis, MD, November 2002.
[4] Jeff Offutt, Roger Alexander, Ye Wu, Quansheng Xiao, and Chuck Hutchinson. A Fault Model for Subtype Inheritance and Polymorphism. The Twelfth IEEE International Symposium on Software Reliability Engineering (ISSRE '01), pages 84-95, Hong Kong, PRC, November 2001.
[5] Roger T. Alexander and Jeff Offutt. Criteria for Testing Polymorphic Relationships. The Eleventh IEEE International Symposium on Software Reliability Engineering (ISSRE '00), pages 15-23, San Jose, CA, October 2000.


III. Repeated Maintenance of Open-Source Software

September 2002

Online demo

This NSF-funded project is attempting to understand the quality of open-source software. The development of free ("open-source") software is on the rise, but there are scant data on how effective it is compared to commercial software. The purpose of the research is to provide insight into the software engineering aspects of open-source software by evaluating how effectively it is developed and maintained. The maintainability of two open-source software products, Linux and GCC, is being investigated. Currently some 10 million individuals around the world have copies of Linux, an open-source operating system; GCC is a set of open-source compilers. Successive versions of Linux and GCC are examined to analyze how coupling changes from version to version of each product. The coupling between two units of a software product, a measure of the degree of interaction between those units, is used as a measure of maintainability. Tools are being built to compute these changes in coupling, and the output from these tools is then subjected to statistical analysis. A major objective of the research is to confirm or refute the claim that the most important factor in software maintenance is the skill of the individual software engineer. This result could have a significant impact on the education and training of software engineers.
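
The study's own tools and its precise coupling measure are described in the papers below. As a much cruder stand-in that conveys the idea of recomputing a coupling-related number for each released version, the sketch here just counts how many distinct project headers each C source file includes; the program and its interpretation are assumptions for illustration, not the instruments used in the study.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.HashSet;
    import java.util.Set;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Crude stand-in for a per-unit coupling count: for each C file named on
    // the command line, count the distinct project headers it #includes.
    public class IncludeFanOut {
        private static final Pattern INCLUDE =
                Pattern.compile("^\\s*#\\s*include\\s+\"([^\"]+)\"");

        public static void main(String[] args) throws Exception {
            for (String file : args) {
                Set headers = new HashSet();
                BufferedReader in = new BufferedReader(new FileReader(file));
                String line;
                while ((line = in.readLine()) != null) {
                    Matcher m = INCLUDE.matcher(line);
                    if (m.find()) {
                        headers.add(m.group(1));
                    }
                }
                in.close();
                System.out.println(file + ": " + headers.size() + " local headers");
            }
        }
    }

Running a measure like this over successive releases of the same files gives the kind of version-to-version series that can then be analyzed statistically.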

REFERENCES
[1] Mahmoud Elish and Jeff Offutt. The Adherence of Open Source Java Programmers to Standard Coding Practices. To appear, The 6th IASTED International Conference on Software Engineering and Applications, Cambridge, MA, November 2002.
[2] Lisa Ferrett and Jeff Offutt. An Empirical Comparison of Modularity of Procedural and Object-Oriented Software. To appear, 2002 International Conference on Engineering of Complex Computer Systems, Greenbelt, MD, November 2002.
[3] Stephen R. Schach, Bo Jin, David R. Wright, Gillian Z. Heller, and A. Jefferson Offutt. Dependencies Within the Linux Kernel. The ACM Mid-Southeast Chapter Fall Conference, Gatlinburg, TN, November 2001.
[4] Stephen R. Schach and A. Jefferson Offutt. On the Nonmaintainability of Open-Source Software. The 2nd Workshop on Open-Source Software Engineering (OSSE 2002), http://opensource.ucc.ie/icse2002/, Orlando, FL, May 2002. (Refereed)
[5] Steve Schach, Bo Jin, David Wright, Gillian Z. Heller, and Jeff Offutt. Maintainability of the Linux Kernel. IEE Proceedings Journal: Special Issue on Open Source Software Engineering, 2002.


IV. Coupling-based Analysis

Coupling-based Analysis Techniques
for Integration and Regression Testing and Maintenance
of Object-oriented Software

January 1998

This NSF-sponsored research program [6] is attempting to develop new ways to analyze and test the integration aspects of software components, particularly for object-oriented software. As more and more software is developed using object-based designs and object-oriented languages, the problems of integrating the components are becoming more crucial to the success of the software. In traditional software, developers devoted more of their validation and verification effort to software units (procedures and functions), because that is where a large amount of the complexity was encoded. With object-based approaches, the trend is for software units (functions, methods, etc.) to become smaller and less complicated; the complex solutions to our problems are being encoded more and more in our data structures and in the integration connections among our units and modules (classes, packages, etc.). For industry, this poses major problems, because our efforts and our processes are primarily centered around unit and system validation and verification; we are not accustomed to devoting many resources to integration. For researchers, this presents an opportunity, because there is a distinct lack of knowledge of how to analyze, test, validate, and verify integration components.

This research program is attempting to address these issues. A basic theme is that many of the integration problems can be addressed through software "couplings", which are the paths through which software components communicate. Coupling was introduced as a software measurement technique in the 1960s, and reducing the amount of coupling was a primary motivator for some of the original data abstraction and object-oriented research. This program currently has four directions.

  1. We are developing practical, effective, formalizable, automatable techniques for testing connections between components during software integration [1,4,5]; a small invented illustration of such a connection appears after this list. This technique can be used to support integration testing of software components, and satisfies part of the FAA's requirements for structural coverage analysis of software.
  2. We are extending and refining the original coupling metric definitions to handle language features of modern languages [2]. This includes handling couplings based on such features as data abstraction, information hiding, type abstraction, inheritance, and polymorphism. We are refining the measurements to be precise, and developing complete algorithms for measuring complexity based on couplings.
  3. "Change impact analysis" refers to the process of determining how much a proposed change to a software system will affect the rest of the system. This is used for planning, testing, and decision making. We are using couplings to compute the possible impacts of changes, particularly through inheritance hierarchies [3].
  4. Finally, couplings are being used to determine the amount of regression testing that needs to be done after a maintenance change is made.
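
As a small, invented illustration of the kind of connection the first direction targets (this is not the notation or tooling from the cited papers): the caller below has two different last definitions of the value it passes, and the callee has a first use of the corresponding parameter; a coupling-based criterion would require tests that pair each last definition with that first use across the call.

    // Invented example of coupling def-use pairs between two units.
    public class CouplingExample {

        // Caller: each branch is a different "last definition" of rate
        // before the call to applyRate().
        public static double charge(double amount, boolean member) {
            double rate;
            if (member) {
                rate = 0.05;          // coupling def 1
            } else {
                rate = 0.20;          // coupling def 2
            }
            return applyRate(amount, rate);
        }

        // Callee: the first use of the rate parameter.
        static double applyRate(double amount, double rate) {
            return amount * (1.0 + rate);   // coupling use
        }

        public static void main(String[] args) {
            // Tests that exercise each (last definition, first use) pair:
            System.out.println(charge(100.0, true));    // def 1 -> use
            System.out.println(charge(100.0, false));   // def 2 -> use
        }
    }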

REFERENCES
[1] Jeff Offutt and Zhenyi Jin. Coupling-based Criteria for Integration Testing. The Journal of Software Testing, Verification, and Reliability, 8(3):133-154, September 1998.
[2] Jeff Offutt, Aynur Abdurazik and Roger T. Alexander. An Analysis Tool for Coupling-based Integration Testing. The Sixth IEEE International Conference on Engineering of Complex Computer Systems (ICECCS '00), pages 172-178, Tokyo, Japan, September 2000.
[3] Zhenyi Jin and Jeff Offutt. Deriving Tests From Software Architectures. The Twelfth IEEE International Symposium on Software Reliability Engineering (ISSRE '01), pages 308-313, Hong Kong, PRC, November 2001.
[4] Michelle Lee, Jeff Offutt and Roger T. Alexander. Algorithmic Analysis of the Impacts of Changes to Object-oriented Software. 34th International Conference on Technology of Object-Oriented Languages and Systems (TOOLS USA '00), pages 61-70, Santa Barbara, CA, August 2000.
[5] Jeff Offutt, M. J. Harrold, and P. Kolte. A Software Metric System for Module Coupling. The Journal of Systems and Software, 20(3):295-308, March 1993.
[6] Li Li and Jeff Offutt. Algorithmic Analysis of the Impact of Changes to Object-Oriented Software. 1996 International Conference on Software Maintenance, pages 171-184, Monterey, CA, November 1996.
[7] Zhenyi Jin and Jeff Offutt. Coupling-based Integration Testing. Second IEEE International Conference on Engineering of Complex Computer Systems, pages 10-17, Montreal, Canada, October 1996. (Outstanding Paper Award)
[8] Zhenyi Jin and Jeff Offutt. Integration Testing Based on Software Couplings. Tenth Annual Conference on Computer Assurance (COMPASS-95), pages 13-23, Gaithersburg, Maryland, June 1995.
[9] Jeff Offutt. Annual Report to the National Science Foundation, 1999.


V. Specification-based Test Generation

Generating Test Data From Functional Specifications

January 1998

The intent of this research project is to improve our ability to test software that needs to be highly reliable by developing formal techniques for generating test data from formal specifications of the software. Formal specifications represent a significant opportunity for testing because they precisely describe what functions the software is supposed to provide, in a form that can easily be manipulated by automated means.

This research is developing a general model for deriving test inputs from model-based specifications. It is currently funded by a grant from the Japanese government (through Hiroshima City University) and from Rockwell-Collins Avionics. The technique generates tests as multi-part artifacts, using a multi-step, multi-level process. The multi-part aspect means that a test case is actually composed of several components: input values, expected outputs, inputs necessary to get to the appropriate state, and inputs necessary to observe the effect of the test case. The multi-step aspect means that tests are generated from the functional specifications by a stepwise refinement process. The functional specifications are refined into a specification graph, which is refined into test requirements, which are then refined into test specifications, and finally into ready-to-run test scripts. The test specifications are based on a preliminary test specification language that incorporates inputs necessary to reach a state where testing should start (test prefixes), and expected outputs. The multi-level aspect means that tests are generated for testing the software at several levels of abstraction. Both the multi-part and the multi-level aspects make it easy to separate the functional specifications of the system from the input specifications. If the input specifications change, that should affect the test cases in only very small ways.
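
A test case in this model is therefore a small structured artifact rather than a single input vector. The class below is an invented sketch that simply groups the four parts named above; it is not the project's test specification language.

    import java.util.List;

    // Invented sketch of a multi-part test case.
    public class MultiPartTestCase {
        private final List prefixInputs;       // inputs that drive the system to the
                                               // state where the test should start
        private final List testInputs;         // the test input values themselves
        private final List expectedOutputs;    // what the specification says should result
        private final List observationInputs;  // inputs that make the effect visible

        public MultiPartTestCase(List prefix, List inputs,
                                 List expected, List observe) {
            this.prefixInputs      = prefix;
            this.testInputs        = inputs;
            this.expectedOutputs   = expected;
            this.observationInputs = observe;
        }

        public List getPrefixInputs()      { return prefixInputs; }
        public List getTestInputs()        { return testInputs; }
        public List getExpectedOutputs()   { return expectedOutputs; }
        public List getObservationInputs() { return observationInputs; }
    }

Keeping the parts separate is what allows a change in one of them, such as the input specifications, to leave the others largely untouched.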

Previous investigations into this topic have led to preliminary techniques for generating tests from Z specifications [3], from SOFL specifications [1,2], and from SCR specifications [4]. The project is currently building on this basis in several ways. The techniques are being expanded to generalize to other model-based specification languages. This will include the definition of a test specification language that is general enough to apply to any model-based specification language. We also propose to develop a general model for constructing test case prefixes, which encode the inputs necessary to put the system into the required state. We are also planning to construct a proof-of-concept tool to automate as much of the model and technique as possible. Test cases will be generated by creating test requirements, which define what inputs are needed in the form of partial truth tables defined on transition predicates, state transition predicates, and pairs of state transition predicates. Given a formal specification, most if not all of these test requirements can be generated automatically. The prefix of a test case includes the inputs necessary to put the system into a particular state. Given a specification graph, many of these prefixes can be generated automatically. One area of investigation is whether this prefix-construction problem is solvable in general (unlike the related reachability problem for general software, which is unsolvable in general), and how to solve or partially solve it. Finally, algorithms for automatically generating test scripts from test specifications will be investigated.
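
As an invented example of what a test requirement in partial-truth-table form might look like (the rows shown are one plausible set; the project's criteria define which rows are actually required): for a transition guarded by a two-clause predicate, the table lists the clause assignments a test must reach.

    // Invented example: rows of a partial truth table for the transition
    // predicate (doorClosed && buttonPressed).
    public class TransitionPredicateTable {
        public static void main(String[] args) {
            boolean[][] rows = {
                { true,  true  },   // transition should fire
                { true,  false },   // held back only by buttonPressed
                { false, true  },   // held back only by doorClosed
            };
            System.out.println("doorClosed  buttonPressed  predicate");
            for (boolean[] row : rows) {
                boolean predicate = row[0] && row[1];
                System.out.println("   " + row[0] + "        " + row[1]
                        + "          " + predicate);
            }
        }
    }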

REFERENCES
[1] Jeff Offutt and Shaoying Liu. Generating Test Data from SOFL Specifications. The Journal of Systems and Software, 49(1):49-62, December 1999.
[2] Shaoying Liu, Jeff Offutt, Chris Ho-Stuart, Yong Sun, and Mitsuru Ohba. SOFL: A Formal Engineering Methodology for Industrial Applications. IEEE Transactions on Software Engineering, Special Issue on Formal Methods, 24(1):337-344, January 1998.
[3] Aynur Abdurazik, Paul Ammann, Wei Ding and Jeff Offutt. Evaluation of Three Specification-based Testing Criteria. The Sixth IEEE International Conference on Engineering of Complex Computer Systems (ICECCS '00), pages 179-187, Tokyo, Japan, September 2000.
[4] Aynur Abdurazik and Jeff Offutt. Using UML Collaboration Diagrams for Static Checking and Test Generation. Third International Conference on the Unified Modeling Language (UML '00), pages 383-395, York, England, October 2000.
[5] Jeff Offutt and Aynur Abdurazik. Generating Tests from UML Specifications. Second International Conference on the Unified Modeling Language (UML '99), pages 416-429, Fort Collins, CO, October 1999.
[6] Shaoying Liu, Jeff Offutt, Mitsuru Ohba, and Keijiro Araki. The SOFL Approach: An Improved Principle for Requirements Analysis. Transactions of Information Processing Society of Japan, 39(6):1973-1989, June 1998.
[7] Paul Ammann and Jeff Offutt. Using Formal Methods To Derive Test Frames in Category-Partition Testing. Ninth Annual Conference on Computer Assurance (COMPASS 94), pages 69-80, Gaithersburg, Maryland, June 1994.
[8] Rockwell-Collins technical reports, Phases I, II, III, and IV.

