Software Engineering Experimentation
Nothing ever becomes real till it
is experienced; even a proverb is no
proverb to you till your life has illustrated it.
— John Keats
Eighth Mason / Skövde Workshop on Experimental Software Engineering
Jeff Offutt & Birgitta Lindström
Technical Program Committee:
Software Engineering Experimentation students
The Mason / Skövde Workshop on Experimental Software Engineering
provides a forum for discussing
current experimental studies in the field of computing.
Papers are solicited for the studies listed
in this CFP,
as well as for other studies.
Accepted papers will not be published in any conference proceedings.
Submitted papers must not have been published previously,
but they may be submitted elsewhere in the future.
All submitted papers will be accepted.
Papers should be submitted 1.5 or double-spaced
in a font size no smaller than 11 points.
Papers must not exceed 25 double-spaced pages
including references and figures,
and will not be refereed by external reviewers.
All papers should indicate what is interesting about the presented work.
The first page should include an abstract of at most 150 words,
a list of keywords,
the author’s name,
and contact information
(email address and phone).
Papers should be single-author.
The citations and references should be formatted in
standard software engineering format,
with bracketed citations
and citation keys that are either numeric or
strings based on the authors' names ("[Basi91]").
You will be allowed 25 minutes
for your presentation,
including 5 minutes for questions.
A first draft of each paper
must be submitted by 20 April
by posting it on the Piazza bulletin board.
Each paper will receive at least three reviews,
one from the program chair and two from technical program committee members.
Reviews will be returned on 27 April,
and the final paper must be submitted electronically by 11 May.
Final papers must be submitted in PDF format (not MS Word or LaTeX!).
The final paper must be single spaced and in 10 point font.
Milestone                      Date
----------------------------   ------------
Topic selection                2 February
Experimental design review     24 February
Draft paper submitted          20 April
Reviews due                    27 April
Final paper submitted          11 May
Presentations                  See schedule
Don’t mind criticism --
If it is untrue, disregard it,
If it is unfair, don’t let it irritate you,
If it is ignorant, smile,
If it is justified, learn from it.
SUGGESTED TOPICS LIST
Following is a list of suggested topics for your empirical study.
You may choose any topic you wish,
either from this list or something of your own creation.
I specifically encourage you to consider carrying out an experiment
related to your current research.
Many of these suggestions are related to software testing.
This emphatically does not imply a preference in the class,
but just reflects the limits of my creativity.
That is, most of my ideas are about testing problems.
They are also unordered.
You will notice that most of these studies do not involve much, if any, programming,
but some will involve a lot of program execution.
Many of these studies can be done more easily with clever use of shell scripts.
There can be a fair amount of overlap between these studies,
and you may want to share programs, test data sets,
or other artifacts.
Trading experimental artifacts of this kind is greatly encouraged!
Some of these studies could use a partner to carry out some of the work,
to avoid bias from having one person conduct the entire experiment.
I encourage you to help each other;
please communicate among yourselves if you need help ...
ask and offer.
These descriptions are concise overviews
and most are fairly open-ended, by design,
to encourage more creativity and divergent thinking.
I will be happy to discuss any project in more depth
if you need help refining the suggestion.
Empirical Studies Suggestions
- Quality of JUnit assertions (test oracles)?
My former PhD student, Nan Li, and I
extended the traditional RIP (reachability-infection-propagation) model
to the RIPR (revealability) model.
We noticed that even when tests cause incorrect behavior,
the test oracle sometimes does not observe the incorrect part of the output space,
thus the fault is not revealed.
This brings the question:
How good are the test oracles in automated tests?
Or more specifically,
how often do JUnit assertions fail to reveal incorrect behavior?
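The RIPR gap can be sketched in a few lines. This is a hypothetical illustration (plain Python assertions stand in for JUnit assertions), with a made-up function `sorted_copy` and a seeded fault:

```python
# Hypothetical example: a "sorted copy" routine with a seeded fault.
# The fault corrupts the last element, so the output state is infected,
# but a weak oracle never inspects that part of the output space.

def sorted_copy(xs):
    ys = sorted(xs)
    ys[-1] = 0          # seeded fault: corrupts the final element
    return ys

result = sorted_copy([3, 1, 2])

# Weak oracle: the fault is reached and the state infected,
# but the assertions never look at the corrupted element (RIPR fails at R).
assert len(result) == 3
assert result[0] == 1

# Strong oracle: inspects the full output, so the fault is revealed.
# assert result == [1, 2, 3]   # would fail: result is [1, 2, 0]
```

The test reaches the fault, the output is infected, and the incorrect behavior even propagates to the return value, yet the weak assertions pass; only the commented-out full-output assertion would reveal the fault.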
- RACC vs. CACC in real life?
Restricted Active Clause Coverage (RACC)
Correlated Active Clause Coverage (CACC)
are test criteria based on logic expressions.
The difference between the definitions of
RACC and CACC
is rather subtle.
Some RACC requirements are infeasible
when the CACC requirements on the same logic predicate are feasible.
But is this difference significant in real software?
That is, how many predicates in existing software
behave differently under RACC than under CACC?
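One way the difference can show up is when program semantics couple the clauses. A minimal sketch, assuming a toy predicate p = a and (b or c) and an invented constraint a == b (as if both were computed from the same earlier condition):

```python
from itertools import product

# Toy predicate p = a and (b or c); major clause under test: a.
# Assumed semantic constraint from the program: a == b.

def p(a, b, c):
    return a and (b or c)

def a_determines_p(b, c):
    # clause a determines p when flipping a changes p's value
    return p(True, b, c) != p(False, b, c)

# only assignments satisfying the constraint are reachable
reachable = [(a, b, c) for a, b, c in product([True, False], repeat=3)
             if a == b]

# CACC for major clause a: two reachable tests where a differs and
# a determines p in both; the minor clauses (b, c) may vary freely.
cacc_pairs = [(t1, t2) for t1 in reachable for t2 in reachable
              if t1[0] and not t2[0]
              and a_determines_p(t1[1], t1[2])
              and a_determines_p(t2[1], t2[2])]

# RACC adds the restriction that the minor clauses are identical.
racc_pairs = [(t1, t2) for (t1, t2) in cacc_pairs if t1[1:] == t2[1:]]

print("CACC feasible:", bool(cacc_pairs))
print("RACC feasible:", bool(racc_pairs))
```

Under the constraint, a CACC pair exists because the minor clauses may change between the two tests, but no RACC pair does: flipping a forces b to flip as well. The study would ask how often real predicates exhibit this kind of coupling.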
- How are mutation tests different from human-designed tests?
While researchers have evaluated the quality of human-designed tests
by measuring them against mutation,
nobody has asked whether
human-designed tests tend to miss
particular types of mutants.
Unkilled mutants may reveal types of faults that humans tend to miss.
- Does weak mutation work with minimal mutation?
Ammann, Delamaro, Kurtz, and Offutt recently invented
the mutation subsumption graph,
which allows us to identify the minimal set of mutants needed,
a set that is much smaller than the full set.
Years ago, experiments found that weak mutation,
where results are checked immediately after the mutated statement
rather than the end of execution,
works almost as well as strong mutation.
However, these results may not hold with minimal mutation,
thus a new experiment is needed to validate minimal-weak mutation.
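The weak/strong distinction can be sketched with a hypothetical function and a single arithmetic-operator mutant; the intermediate value stands in for the state checked immediately after the mutated statement:

```python
# Hypothetical function and mutant. Each returns a pair:
# (state right after the mutated statement, final output).

def original(x):
    t = x * 2              # statement under mutation
    return t, min(t, 10)

def mutant(x):
    t = x + 2              # AOR-style mutant: '*' replaced by '+'
    return t, min(t, 10)

for x in [3, 12]:
    (t0, out0), (t1, out1) = original(x), mutant(x)
    weak_killed = t0 != t1        # state differs after the mutated statement
    strong_killed = out0 != out1  # final outputs differ
    print(f"x={x}: weak={weak_killed}, strong={strong_killed}")
```

At x=12 the mutant is weakly killed but not strongly killed (the min clamps both results to 10), which is exactly the kind of discrepancy an experiment on minimal mutants would need to measure.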
- Covering the model versus covering the program:
If we design and generate tests to cover a model of a program,
such as a finite state machine or UML diagram,
how well will those tests cover the program on the same coverage criterion?
Note that this study could be done with multiple criteria.
- How much does ROR help MCDC?
In a recently published paper,
Improving Logic-Based Testing,
Kaminski, Ammann, and I showed how to add the ROR mutation operator
to MCDC testing,
resulting in a stronger test set.
But this technique has a cost,
one more test per clause in each predicate in the program.
Empirical studies are needed to determine how much this technique
improves fault detection.
- PIT vs. javalanche vs. muJava:
Several mutation tools are available,
each of which uses a different collection of mutation operators.
Clearly, these operators will result in different tests,
but how different are they in terms of strength?
The simplest comparison would be a cross-scoring,
where tests are created to kill all mutants for each tool,
then run against all mutants generated by the other tools.
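The cross-scoring computation itself is simple once kill data exists. A sketch with entirely made-up fault sets, where each tool's adequate test set is modeled by the set of faults it detects (the tool names are real; the data is invented):

```python
# Each tool's mutants are modeled as a set of fault identifiers.
# In a real study these sets would come from running the tools.
tools = {
    "muJava":     {"f1", "f2", "f3", "f4"},
    "PIT":        {"f2", "f3", "f5"},
    "javalanche": {"f1", "f3", "f6"},
}

# Simplification for illustration: the test set adequate for tool a
# detects exactly tool a's faults.
def cross_score(detected, mutants):
    return len(detected & mutants) / len(mutants)

for a, detected in tools.items():
    for b, mutants in tools.items():
        score = cross_score(detected, mutants)
        print(f"tests({a}) vs mutants({b}): {score:.0%}")
```

The diagonal entries are 100% by construction; the off-diagonal entries are the interesting part, showing how well one tool's adequate tests do against another tool's mutants.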
- Java mutation experiments:
One resource we have available is a mutation testing system for Java, muJava.
Instructions for downloading, installing, and running muJava are
available on the website.
There are several small experiments you could use muJava to run.
- Test criteria comparison.
For a collection of programs, develop tests that kill all mutants,
and develop tests that satisfy another criterion
(data flow, CACC, edge-pair, input parameter modeling, etc.).
Compare them on the basis of number of tests
and on their fault finding abilities.
- Mutation operator evaluation.
One key to mutation testing is the quality of the mutation operators.
Most of the class-level mutation operators are fairly new,
and it is possible that some are redundant
and others have very little ability to detect faults.
It would help to experimentally evaluate the operators,
based on their abilities to find faults,
redundancy, or frequency of equivalence.
- Mutation as a fault seeding tool.
One use of mutation is to create faults for other purposes,
such as comparing other testing techniques.
- Comparing input space partitioning criteria:
Dozens of studies comparing structural, data flow, and mutation test criteria
have been published.
But I have not seen any studies that compared input space partitioning criteria
such as each choice, base choice, pair-wise, and multiple base choice.
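For concreteness, here is a sketch of two of these criteria on a small invented input model; each-choice only needs every block to appear somewhere, while base-choice varies one characteristic at a time from a base test (pair-wise is omitted, since it needs a covering-array generator):

```python
# Invented input model: three characteristics with their blocks.
blocks = {
    "size":   ["empty", "one", "many"],
    "order":  ["sorted", "unsorted"],
    "values": ["distinct", "duplicates"],
}

# Each choice: every block of every characteristic appears in >= 1 test.
width = max(len(v) for v in blocks.values())
each_choice = [
    {k: v[min(i, len(v) - 1)] for k, v in blocks.items()}
    for i in range(width)
]

# Base choice: a base test, then vary one characteristic at a time.
base = {k: v[0] for k, v in blocks.items()}
base_choice = [base] + [
    {**base, k: b}
    for k, v in blocks.items() for b in v[1:]
]

print(len(each_choice))   # 3 tests for this model
print(len(base_choice))   # 1 + 2 + 1 + 1 = 5 tests for this model
```

Even on this toy model the criteria produce different test-set sizes, which is one of the dimensions such a comparison would measure.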
- Web modeling and testing evaluation:
I recently published a paper that proposed a method for modeling the presentation layer
of web applications.
This model can be used to generate tests, among other things.
If you have access to a reasonably sized web application,
it would be very interesting to apply the modeling and test method
in the paper to evaluate its effectiveness.
The paper can be downloaded from my web site.
- Metrics comparison:
Researchers have suggested many ways to measure the
complexity and/or quality of software.
These software metrics are difficult to evaluate,
particularly on an analytical basis.
An interesting project would be to take two or more metrics,
measure a number of software systems,
and compare the measurements in an objective way.
The difficult part of this study would be the evaluation method:
How can we compare different software metrics?
To come up with a sensible answer to this question,
start with a deeper question:
What do we want from our metrics?
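As a concrete starting point, a crude sketch: two easy metrics (non-blank LOC, and a decision-point count as a rough proxy for cyclomatic complexity) measured on invented snippets, compared by the rankings they induce:

```python
import re

# Crude metric sketches; decision-point counting is only a rough
# proxy for cyclomatic complexity.
def loc(src):
    return sum(1 for line in src.splitlines() if line.strip())

def decision_points(src):
    return 1 + len(re.findall(r"\b(if|while|for|case)\b", src))

# Invented snippets standing in for measured systems.
systems = {
    "A": "if (x) {\n  y();\n}\n",
    "B": "while (p) {\n  if (q) r();\n  s();\n}\nt();\n",
    "C": "u();\nv();\n",
}

rank_by_loc = sorted(systems, key=lambda s: loc(systems[s]))
rank_by_cc  = sorted(systems, key=lambda s: decision_points(systems[s]))
print(rank_by_loc, rank_by_cc)  # do the two metrics agree on ordering?
```

Whether the two rankings agree (for example, measured by rank correlation over many systems) is one objective, if narrow, way to compare metrics; deciding what agreement should mean is the deeper question above.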
- Frequency of infeasible subpaths in testing:
Many structural testing criteria exhibit what is called the
feasible path problem,
which says that some of the test requirements are
infeasible in the sense that the semantics of the program imply
that no test case satisfies the test requirements.
Unreachable statements in path testing techniques
and infeasible DU-pairs in data flow testing
are both instances of the feasible path problem.
For example, in branch testing,
one branch might be executed if (X == 0)
and a subsequent branch if (X != 0);
if the test requirements need both branches to be taken during
the same execution,
the requirement is infeasible.
This study would determine,
for several programs,
how many subpaths required by some test criterion are infeasible.
A reference on the subject of the feasible path problem can be
found on my web site:
Automatically Detecting Equivalent Mutants and Infeasible Paths.
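The branch example above can be written out as a tiny (hypothetical) function, which makes the infeasible subpath concrete:

```python
# Any subpath requiring both 'then' branches in a single execution is
# infeasible: no value of x satisfies x == 0 and x != 0 together.

def f(x):
    took = []
    if x == 0:
        took.append("b1")   # true branch of first decision
    if x != 0:
        took.append("b2")   # true branch of second decision
    return took

# Each branch is individually reachable...
print(f(0))   # ['b1']
print(f(5))   # ['b2']
# ...but the subpath taking both b1 and b2 is semantically infeasible.
```

A study would count, over real programs, what fraction of a criterion's required subpaths turn out to be infeasible in this way.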
- Experiments in coupling:
My student, Aynur Abdurazik, completed her dissertation on
Coupling-Based Analysis of Object-Oriented Software.
In Chapter 10 she suggested several interesting areas for future research,
some of them experimental.
In particular, sections
10.2.1, Application of Coupling Model to Web Applications,
10.2.2, Coupling-based Fault Analysis,
10.2.3, Comprehensive Empirical Validation of Three Specific Problems,
10.2.6, Coupling-based Reverse Engineering,
10.2.7, Coupling-based Component Ranking
suggest some potentially useful experimental directions.
- Declarative programming:
Many Web frameworks attempt to alleviate the burden of tedious
programming tasks by allowing developers to specify navigation
and page composition logic declaratively in configuration files.
It would be interesting to investigate the effects of this type
of declarative programming on downstream activities such as software testing.