SBST-Benchmark Framework
The SBST-Benchmark Framework is a tool for benchmarking testing tools for Java classes.
The typical use case: you are the author of a tool X that automatically generates
JUnit test cases for Java classes, and you want to measure how good your tool X is.
The SBST-Benchmark Framework scores your tool using code coverage and mutation testing.
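For illustration only, a tool like X might emit a JUnit test case of roughly the
following shape. This is a hypothetical example, not output of any particular tool;
it exercises java.util.ArrayList purely so the snippet is self-contained.

    import static org.junit.Assert.*;
    import java.util.ArrayList;
    import org.junit.Test;

    // Hypothetical example of the kind of test case a generation tool might emit,
    // here exercising java.util.ArrayList purely for illustration.
    public class ArrayListGeneratedTest {

        @Test
        public void addThenGetReturnsSameElement() {
            ArrayList<String> list = new ArrayList<String>();
            list.add("foo");
            assertEquals("foo", list.get(0));
            assertEquals(1, list.size());
        }

        @Test
        public void newListIsEmpty() {
            ArrayList<String> list = new ArrayList<String>();
            assertTrue(list.isEmpty());
        }
    }

The Framework then measures how much of the class under test such generated tests
cover, and how many mutants they kill.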
To Build
- Go to /benchmarktool.
- Run ant jar and grab the generated jar.
Requirements
Linux/Unix. Java 7.
Jars: JaCoCo, pitest, JUnit, ASM, and a number of Apache libraries. They are all
included in the SBST-Benchmark Framework's distribution; see benchmarktool/libs.
Notes:
- Java 8 does not work in combination with the version of pitest that we use.
- The Framework was originally written for Unix/Linux. I have tried to adjust
it to work on Windows, but to no avail :( Simply deploying the Framework on
Cygwin won't solve the problem.
To Use
Before you can use the SBST-Benchmark Framework, you need to implement your side
of the benchmarking protocol, so that the Framework can control your tool X.
This protocol, and how to use the Framework, are described here.
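As a rough illustration of what the tool side might involve, the sketch below assumes
a line-based exchange over standard input/output in which the Framework tells the tool
which class to generate tests for. The command names used here (CLASSPATH, CLASS, QUIT)
are assumptions made for this sketch only; the actual commands are those defined in the
protocol documentation.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Hypothetical sketch of a tool-side adapter for a line-based control protocol.
    // The commands (CLASSPATH, CLASS, QUIT) are illustrative assumptions, not the
    // Framework's actual protocol.
    public class ToolAdapter {

        public static void main(String[] args) throws Exception {
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
            String classpath = "";
            String line;
            while ((line = in.readLine()) != null) {
                if (line.startsWith("CLASSPATH ")) {
                    // classpath against which the classes under test are loaded
                    classpath = line.substring("CLASSPATH ".length());
                } else if (line.startsWith("CLASS ")) {
                    // ask tool X to generate JUnit tests for the named class
                    String className = line.substring("CLASS ".length());
                    generateTests(className, classpath);
                } else if (line.equals("QUIT")) {
                    break;
                }
                System.out.println("READY"); // acknowledge each command
                System.out.flush();          // make sure the Framework sees the answer
            }
        }

        private static void generateTests(String className, String classpath) {
            // hook tool X in here
        }
    }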
History
The SBST-Benchmark Framework was developed to facilitate the Testing Tools
Competition at the 6th International Workshop on Search-Based Software Testing
(SBST) in Luxembourg, 2013. The competition was restricted to tools that generate
unit tests for Java classes. Four tools were entered in the competition: DSC, Randoop, T2,
and Evosuite.
The competition was won by Evosuite (convincingly supporting
SBST's thesis on the superiority of the search-based approach :). The
competition was repeated in 2014, at the 1st International Workshop on
Future Internet Testing (FITTEST) in Istanbul. Randoop, T3 (the successor of T2),
and Evosuite were in the competition. Again, Evosuite was victorious.
Credits
These are the authors of the SBST-Benchmark Framework:
Arthur Baars,
Sebastian Bauersfeld,
Kiran Lakhotia,
Tanja E.J. Vos,
Simon Poulding,
Nelly Condori.