Nothing ever becomes real till it
is experienced; even a proverb is no
proverb to you till your life has illustrated it.
— John Keats
Last update: 23 January 2015
Software Engineering Experimentation
Project Description
Spring 2015

Eighth Mason / Skövde Workshop on Experimental Software Engineering

Program Chairs: Jeff Offutt & Birgitta Lindström

Technical Program Committee: Software Engineering Experimentation students

The Mason / Skövde Workshop on Experimental Software Engineering provides a forum for discussing current experimental studies in the field of computing. Papers are solicited for the studies listed in this CFP, as well as for other studies.

Accepted papers will not be published in any conference proceedings. Submitted papers must not have been published previously, but they may be submitted elsewhere in the future. All submitted papers will be accepted.

Full-Length Papers: Papers should be submitted 1.5 or double-spaced, in a font size no smaller than 11 points, fully justified. Papers must not exceed 25 double-spaced pages including references and figures, and will not be refereed by external reviewers. All papers should indicate what is interesting about the presented work. The first page should include an abstract of at most 150 words, a list of keywords, the author's name, affiliation, and contact information (email address and phone). Papers should be single-author. Citations and references should be formatted in standard software engineering style, that is, with bracketed citations ("[1]") and citation keys that are either numeric or strings based on the authors' names ("[Basi91]").

Presentations: You will be allowed 25 minutes for your presentation, including 5 minutes for questions.

Submission Procedure: A first draft of each paper must be submitted before 20 April by posting it on the Piazza bulletin board. Each paper will receive at least three reviews, one from the program chair and two from technical program committee members. Reviews will be returned on 27 April, and the final paper must be submitted electronically by 11 May. Final papers must be submitted in PDF format (not MS Word or LaTeX!). The final paper must be single-spaced and in a 10 point font.

Milestones
Topic selection: 2 February
Experimental design review: 24 February
Draft paper submitted: 20 April
Reviews due: 27 April
Final paper submitted: 11 May
Presentations: See schedule


Topics

Don’t mind criticism --
   If it is untrue, disregard it,
   If it is unfair, don’t let it irritate you,
   If it is ignorant, smile,
   If it is justified, learn from it.
     - Anonymous

SUGGESTED TOPICS LIST

Following is a list of suggested topics for your empirical study. You may choose any topic you wish, either from this list or something of your own creation. I specifically encourage you to consider carrying out an experiment related to your current research. Many of these suggestions are related to software testing. This emphatically does not imply a preference in the class, but just reflects the limits of my creativity. That is, most of my ideas are about testing problems. They are also unordered.

You will notice that most of these studies do not involve much, if any, programming, but some will involve a lot of program execution. Also, these studies can be done more easily with clever use of shell scripts. There can be a fair amount of overlap between these studies, and you may want to share programs, test data sets, or other artifacts. Trading these kinds of experimental artifacts is greatly encouraged!

Some of these studies could use a partner to carry out some of the work, to avoid bias from having one person conduct the entire experiment. I encourage you to help each other; please communicate among yourselves if you need help ... ask and offer.

These descriptions are concise overviews and most are fairly open-ended, by design, to encourage more creativity and divergent thinking. I will be happy to discuss any project in more depth if you need help refining the suggestion.

Empirical Studies Suggestions

  1. Quality of JUnit assertions (test oracles)? My former PhD student Nan Li and I extended the traditional RIP (reachability-infection-propagation) model to the RIPR model, which adds revealability. We noticed that even when a test causes incorrect behavior, the test oracle sometimes does not observe the incorrect part of the output space, so the fault is not revealed. This raises the question: How good are the test oracles in automated tests? More specifically, how often do JUnit assertions fail to reveal incorrect behavior? (A small JUnit illustration appears after this list.)
  2. RACC vs. CACC in real life? Restricted Active Clause Coverage (RACC) and Correlated Active Clause Coverage (CACC) are test criteria based on logic expressions. The difference between the definitions of RACC and CACC is rather subtle: some RACC requirements are infeasible even when the CACC requirements on the same logic predicate are feasible. But is this difference significant in real software? That is, how many predicates in existing software behave differently under RACC than under CACC? (A sketch after this list makes the difference between the two criteria concrete.)
  3. How are mutation tests different from human-designed tests? While researchers have evaluated the quality of human-designed tests by measuring them against mutation, nobody has asked whether human-designed tests tend to miss particular types of mutants. Unkilled mutants may reveal types of faults that humans tend to miss.
  4. Does weak mutation work with minimal mutation? Ammann, Delamaro, Kurtz, and Offutt recently invented the mutation subsumption graph, which lets us identify a minimal set of mutants that is much smaller than the full set. Years ago, experiments found that weak mutation, in which results are checked immediately after the mutated statement rather than at the end of execution, works almost as well as strong mutation. However, those results may not hold with minimal mutation, so a new experiment is needed to validate minimal-weak mutation. (A toy weak-versus-strong example appears after this list.)
  5. Covering the model versus covering the program: If we design and generate tests to cover a model of a program, for example a finite state machine or UML diagram, how well will those tests cover the program under the same coverage criterion? Note that this study could be done with multiple criteria.
  6. How much does ROR help MCDC? In a recently published paper, Improving Logic-Based Testing, Kaminski, Ammann, and I showed how to add the ROR mutation operator to MCDC testing, resulting in a stronger test set. But this technique has a cost: one more test per clause in each predicate in the program. Empirical studies are needed to determine how much this technique improves fault detection.
  7. PIT vs. javalanche vs. muJava: Several mutation tools are available, each of which uses a different collection of mutation operators. Clearly, these operators will result in different tests, but how different are they in terms of strength? The simplest comparison would be a cross-scoring, where tests are created to kill all mutants for each tool, then run against all mutants generated by the other tools. (A small cross-scoring sketch appears after this list.)
  8. Java mutation experiments: One resource we have available is a mutation testing system for Java, muJava. Instructions for downloading, installing, and running muJava are available on the website. There are several small experiments you could use muJava to run.
  9. Comparing input space partitioning criteria: Dozens of studies comparing structural, data flow, and mutation test criteria have been published. But I have not seen any studies that compare input space partitioning criteria such as each choice, base choice, pair-wise, and multiple base choice. (A base choice generation sketch appears after this list.)
  10. Web modeling and testing evaluation: I recently published a paper that proposed a method for modeling the presentation layer of web applications. This model can be used to generate tests, among other things. If you have access to a reasonably sized web application, it would be very interesting to apply the modeling and test method in the paper to evaluate its effectiveness. The paper can be downloaded from my website.
  11. Metrics comparison: Researchers have suggested many ways to measure the complexity and/or quality of software. These software metrics are difficult to evaluate, particularly on an analytical basis. An interesting project would be to take two or more metrics, measure a number of software systems, and compare the measurements in an objective way. The difficult part of this study would be the evaluation method: How can we compare different software metrics? To come up with a sensible answer to this question, start with a deeper question: What do we want from our metrics?
  12. Frequency of infeasible subpaths in testing: Many structural testing criteria exhibit what is called the feasible path problem: some test requirements are infeasible because the semantics of the program imply that no test case can satisfy them. Equivalent mutants, unreachable statements in path testing techniques, and infeasible DU-pairs in data flow testing are all instances of the feasible path problem. For example, in branch testing, one branch might be executed if (X == 0) and a subsequent branch if (X != 0); if the test requirements need both branches to be taken during the same execution, the requirement is infeasible. (A code version of this example appears after this list.) This study would determine, for several programs, how many of the subpaths required by some test criterion are infeasible. A reference on the feasible path problem can be found on my web site: Automatically Detecting Equivalent Mutants and Infeasible Paths.
  13. Experiments in coupling: My student, Aynur Abdurazik, completed her dissertation on Coupling-Based Analysis of Object-Oriented Software. In Chapter 10 she suggested several interesting areas for future research, some of them experimental. In particular, sections 10.2.1, Application of Coupling Model to Web Applications, 10.2.2, Coupling-based Fault Analysis, 10.2.3, Comprehensive Empirical Validation of Three Specific Problems, 10.2.6, Coupling-based Reverse Engineering, and 10.2.7, Coupling-based Component Ranking suggest some potentially useful experimental directions.
  14. Declarative programming: Many Web frameworks attempt to alleviate the burden of tedious programming tasks by allowing developers to specify navigation and page composition logic declaratively in configuration files. It would be interesting to investigate the effects of this type of declarative programming on downstream activities such as software testing.
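
Illustrative Sketches for Selected Topics

As a concrete illustration of Topic 1, the toy JUnit 4 example below shows a test whose assertion examines only part of the output state. The class Account, its methods, and the seeded fault are all hypothetical, invented for this sketch and not taken from Nan Li's study. The test reaches the fault and the program state is infected, but the oracle never looks at the corrupted part of the state, so the fault is not revealed.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical class under test. Suppose the faulty version of withdraw()
    // updates the balance correctly but forgets to record the transaction;
    // the omitted statement is shown as a comment.
    class Account {
        private int balance = 100;
        private final java.util.List<String> log = new java.util.ArrayList<>();

        void withdraw(int amount) {
            balance -= amount;
            // FAULT: missing in the faulty version:
            // log.add("withdraw " + amount);
        }

        int getBalance() { return balance; }
        java.util.List<String> getLog() { return log; }
    }

    public class AccountTest {
        @Test
        public void testWithdraw() {
            Account a = new Account();
            a.withdraw(40);                      // reaches the fault; the log is now (incorrectly) empty
            assertEquals(60, a.getBalance());    // checks only the balance, so the test passes
            // A stronger oracle would also reveal the fault, for example:
            // assertEquals(1, a.getLog().size());
        }
    }

One way to operationalize the study would be to count, for a sample of open-source JUnit tests, how often a fault that is reached and infects the state corrupts only parts of the state that no assertion in the test ever inspects.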
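
For Topic 2, the sketch below makes the RACC/CACC difference concrete on one invented predicate, p(a, b, c) = a && (b || c), with a as the major clause. It enumerates the pairs of truth assignments that satisfy CACC and counts how many also satisfy RACC's extra requirement that the minor clauses take identical values in both tests. The predicate and the counting are purely illustrative; the real study would mine predicates from existing software.

    // Illustrative comparison of RACC and CACC pairs for the predicate
    // p(a, b, c) = a && (b || c), with 'a' as the major clause.
    public class RaccVsCacc {

        static boolean p(boolean a, boolean b, boolean c) {
            return a && (b || c);
        }

        // 'a' determines p when flipping 'a' (holding b and c fixed) changes p.
        static boolean aDetermines(boolean b, boolean c) {
            return p(true, b, c) != p(false, b, c);
        }

        public static void main(String[] args) {
            boolean[] tf = { true, false };
            int cacc = 0, racc = 0;

            // Consider pairs of tests: test 1 has a = true, test 2 has a = false,
            // and 'a' must determine p in both tests.
            for (boolean b1 : tf)
                for (boolean c1 : tf)                  // minor clauses of test 1
                    for (boolean b2 : tf)
                        for (boolean c2 : tf) {        // minor clauses of test 2
                            if (!aDetermines(b1, c1) || !aDetermines(b2, c2)) continue;
                            // CACC: p takes both values across the pair (guaranteed here,
                            // since 'a' determines p in both tests and 'a' differs).
                            cacc++;
                            // RACC additionally requires identical minor clause values.
                            if (b1 == b2 && c1 == c2) racc++;
                        }
            System.out.println("CACC pairs: " + cacc + ", RACC pairs: " + racc);
        }
    }

For this particular predicate every CACC pair has a feasible RACC counterpart (the program prints 9 CACC pairs and 3 RACC pairs); in real code, correlations among clauses can make the RACC requirement infeasible while CACC remains satisfiable, which is exactly what the study would measure.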
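
For Topic 4, here is a toy mutant, invented for this description rather than taken from the cited experiments, that is killed under weak mutation but not under strong mutation; the gap between the two is exactly what the proposed replication would measure on minimal mutant sets.

    // Toy illustration of a mutant that is killed under weak mutation
    // but not under strong mutation.
    public class WeakVsStrong {

        // Original: start the search for the maximum at a[0].
        static int maxOriginal(int[] a) {
            int max = a[0];
            for (int i = 1; i < a.length; i++)
                if (a[i] > max) max = a[i];
            return max;
        }

        // Mutant: the initialization is mutated from a[0] to a[1].
        static int maxMutant(int[] a) {
            int max = a[1];                       // <-- mutated statement
            for (int i = 1; i < a.length; i++)
                if (a[i] > max) max = a[i];
            return max;
        }

        public static void main(String[] args) {
            int[] test = { 3, 7, 5 };
            // Weak mutation: immediately after the mutated statement the state
            // differs (max is 7 in the mutant, 3 in the original), so this test
            // weakly kills the mutant.
            // Strong mutation: the final outputs are identical (both return 7),
            // so the same test does not strongly kill the mutant.
            System.out.println(maxOriginal(test) + " vs " + maxMutant(test));
        }
    }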
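
For Topic 7, the cross-scoring itself is simple arithmetic once the kill data has been collected. The sketch below assumes you have already built an adequate test suite for each tool and run it against every tool's mutants; loadKillData is a placeholder that returns fabricated data and would be replaced by code that reads your real kill matrices.

    // Cross-scoring sketch: the score for (i, j) is the fraction of tool j's
    // mutants killed by the test suite built to be adequate for tool i.
    public class CrossScore {
        public static void main(String[] args) {
            String[] tools = { "PIT", "javalanche", "muJava" };

            // killed[i][j][k] == true iff suite i kills mutant k generated by tool j.
            boolean[][][] killed = loadKillData(tools.length);

            for (int i = 0; i < tools.length; i++) {
                for (int j = 0; j < tools.length; j++) {
                    int kills = 0;
                    for (boolean k : killed[i][j]) if (k) kills++;
                    double score = 100.0 * kills / killed[i][j].length;
                    System.out.printf("suite(%s) vs mutants(%s): %.1f%%%n",
                            tools[i], tools[j], score);
                }
            }
        }

        // Placeholder: fabricated data, to be replaced by real kill results.
        static boolean[][][] loadKillData(int n) {
            boolean[][][] data = new boolean[n][n][4];
            for (boolean[][] suite : data)
                for (boolean[] row : suite)
                    java.util.Arrays.fill(row, true);  // dummy: everything killed
            return data;
        }
    }

The interesting output is the off-diagonal entries: how well does a suite that is adequate for one tool's mutants do against mutants it was never designed to kill?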
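
For Topic 9, the sketch below generates base choice tests from a small, invented input model for a list-handling routine (the characteristics and blocks are hypothetical). Each choice, pair-wise, and multiple base choice could be implemented in the same style, so that the resulting test set sizes and fault detection can be compared on common subjects.

    import java.util.ArrayList;
    import java.util.List;

    // Base choice coverage sketch: start from a base test (here, the first
    // block of each characteristic) and vary one characteristic at a time.
    public class BaseChoice {
        public static void main(String[] args) {
            // Hypothetical input model: blocks for each characteristic.
            String[][] blocks = {
                { "empty", "one-element", "many-elements" },   // list size
                { "sorted", "unsorted" },                      // ordering
                { "no-duplicates", "duplicates" }              // duplication
            };

            List<String[]> tests = new ArrayList<>();

            // The base test uses the first block of every characteristic.
            String[] base = new String[blocks.length];
            for (int i = 0; i < blocks.length; i++) base[i] = blocks[i][0];
            tests.add(base.clone());

            // Each remaining test varies exactly one characteristic from the base.
            for (int i = 0; i < blocks.length; i++) {
                for (int b = 1; b < blocks[i].length; b++) {
                    String[] t = base.clone();
                    t[i] = blocks[i][b];
                    tests.add(t);
                }
            }

            for (String[] t : tests) System.out.println(String.join(", ", t));
        }
    }

For this model, base choice yields 1 + 2 + 1 + 1 = 5 abstract tests, compared with 3 x 2 x 2 = 12 for all combinations; the study would compare such criteria on size, feasibility, and fault detection.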
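
For Topic 12, the (X == 0) / (X != 0) example from the description looks like the following in code; the method is made up purely to make the infeasibility concrete. No single execution can take the true branch of both decisions, so any criterion whose test requirement demands that subpath has an infeasible requirement.

    // The subpath that takes the true branch of both if-statements is
    // infeasible: x cannot be both zero and non-zero in the same execution
    // (x is not modified in between).
    public class InfeasibleSubpath {
        static int classify(int x) {
            int result = 0;
            if (x == 0) {        // branch 1
                result = 1;
            }
            if (x != 0) {        // branch 2
                result += 2;
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(classify(0));   // takes branch 1 only -> 1
            System.out.println(classify(5));   // takes branch 2 only -> 2
            // No input makes classify take both true branches in one execution.
        }
    }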