The framework gives access to a number of testing tools. Currently it supports two: the {@link Sequenic.T2.Engines.BaseEngine automated testing tool} and the {@link Sequenic.T2.Engines.Replay replay (regression) tool}. More tools are planned for the future.
The units subjected to testing by T2 are Java classes. T2 checks the class invariant as well as the specifications of methods, if these are provided. T2 does not use a special specification language: all specifications are written in plain Java.
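For illustration, the sketch below shows what a specification written in plain Java could look like: the class invariant is an ordinary boolean method, and the pre- and postconditions are plain Java checks inside the method. The name classinv and the assertion style are assumptions made for this example, not necessarily the exact conventions T2 expects.

<pre>{@code
// Illustrative only: the method name classinv and the use of plain Java
// checks are assumptions for this example, not T2's exact conventions.
public class BankAccount {
    private int balance = 0;

    // Class invariant, written as an ordinary boolean method.
    public boolean classinv() {
        return balance >= 0;
    }

    // The method's specification is expressed directly in Java: the
    // precondition and postcondition are plain boolean checks.
    public void withdraw(int amount) {
        assert amount > 0 : "precondition";
        int old = balance;
        balance -= amount;   // bug: may drive balance below 0,
                             // violating the class invariant
        assert balance == old - amount : "postcondition";
    }
}
}</pre>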
Given a target class C, a test engine tests C by generating random 'executions'. Each execution is a sequence of steps. It starts by creating an instance of C, which takes the role of the target object. Then, at each step of the execution, the test engine either randomly updates a field of the target object or randomly calls a method of C. When a method m is called, the engine passes the target object either as the receiver of m or as a parameter. After each step, the target object is checked against the class invariant of C, if one is specified. Furthermore, if the step calls a method, the engine checks for internal errors and runtime exceptions. If the method has a specification, it is checked as well.
When a violation is found, the execution is reported. Reporting an execution requires printing the state of the involved objects at each step, which means we need to be able to replay the execution. To make this possible we maintain a {@link Sequenic.T2.Seq.Trace meta representation} of the ongoing execution. The important property of this meta representation is that it allows us to reproduce the corresponding actual execution exactly as it was. This is important for, e.g., regression testing.
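The sketch below conveys the idea of such a replayable meta representation. The Step and Trace classes here are hypothetical stand-ins, not the actual structure behind Sequenic.T2.Seq.Trace: each step records the method called plus the pool IDs (explained below) of the receiver and the arguments, rather than the objects themselves.

<pre>{@code
import java.util.ArrayList;
import java.util.List;

// Hypothetical meta representation of one execution step: the method that
// was called, plus the pool IDs of the receiver and the arguments.
record Step(String methodName, int receiverId, int[] argumentIds) {}

// A trace is the recorded sequence of steps. Because it stores pool IDs
// rather than the objects themselves, the same execution can later be
// rebuilt step by step, exactly as it originally ran.
class Trace {
    private final List<Step> steps = new ArrayList<>();

    void add(Step s) { steps.add(s); }

    List<Step> steps() { return steps; }
}
}</pre>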
During an execution the test engine will need to generate objects, e.g. to be passed as parameters when methods are called. Since in real executions objects may be linked to each other, the test engine has to be able to reuse old objects rather than keep generating fresh ones. To facilitate this the test engine maintains an {@link Sequenic.T2.Pool object pool}. Whenever objects are created during an execution, they are put in the pool. When an execution needs an object, the engine can decide to simply pick one (of the right type) from the pool rather than creating a fresh one. Each object in the pool also receives a unique integer ID. This ID is important: when an object from the pool is reused, we remember its ID in the meta representation of the test sequence, so that when the execution has to be reproduced (replayed) we know exactly which objects were, e.g., passed as arguments to a method call.
Whenever a new execution is started, the pool has to be reset. This makes sure we start from a fresh pool, free from the side effects of the previous execution.
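A minimal sketch of such a pool is shown below; the class and its method names are illustrative only, not the actual API of Sequenic.T2.Pool.

<pre>{@code
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative object pool: every stored object gets a unique integer ID,
// so a recorded trace can refer to, say, "object #7" unambiguously.
class ObjectPool {
    private final List<Object> objects = new ArrayList<>();
    private final Random rnd = new Random();

    // Store a newly created object; its ID is its position in the list.
    int put(Object o) {
        objects.add(o);
        return objects.size() - 1;
    }

    Object get(int id) { return objects.get(id); }

    // Randomly pick an existing object assignable to the requested type,
    // or return null if the pool holds none.
    Object pickRandom(Class<?> type) {
        List<Object> candidates = new ArrayList<>();
        for (Object o : objects)
            if (type.isInstance(o)) candidates.add(o);
        if (candidates.isEmpty()) return null;
        return candidates.get(rnd.nextInt(candidates.size()));
    }

    // Called before each new execution, so that no side effects of the
    // previous execution leak into the next one.
    void reset() { objects.clear(); }
}
}</pre>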
The algorithm for 'generating' objects is actually a bit more complicated than sketched above. Suppose T2's test engine has to generate an object of class E; it goes through the following steps:

1. The engine first checks whether there is a base domain for E. If there is, it picks a member of this domain (more precisely, a clone of it; see below) as the generated object.
2. Otherwise, the engine may decide to reuse an existing instance of E from the object pool.
3. Otherwise, it creates a fresh instance of E and puts it in the pool.
Because the base domain is always checked first, we can use it to limit the range of values from which the engine generates objects of a class E. For example, if the only integers in the base domain are -1 and 1, then these will be the only integers the engine generates whenever it needs one. This gives us a way to constrain the range of the generated integers. Alternatively, we can choose a base domain that supplies a random integer from the entire range of int values.
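To make this concrete, the sketch below shows a base domain restricted to the integers -1 and 1. The IntBaseDomain class is hypothetical; it only illustrates the concept, not T2's actual base-domain API.

<pre>{@code
import java.util.List;
import java.util.Random;

// Hypothetical base domain holding a fixed set of integers. An engine
// configured with this domain draws only from this set whenever it needs
// an int, constraining the generated values.
class IntBaseDomain {
    private final List<Integer> values;
    private final Random rnd = new Random();

    IntBaseDomain(List<Integer> values) { this.values = values; }

    int next() { return values.get(rnd.nextInt(values.size())); }
}

// Usage: a domain that only ever supplies -1 and 1.
// IntBaseDomain dom = new IntBaseDomain(List.of(-1, 1));
}</pre>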
Only clonable objects can be put in a base domain. Cloning is necessary to make sure that objects in a base domain are safe from the engine's side effects (in contrast, objects in the pool are not, and should not be, protected from these side effects). The cloning relies on serialization, so we should only put serializable objects in a base domain.
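Serialization-based cloning is a standard Java technique: the object is written to a byte array and read back, yielding an independent deep copy. A minimal sketch:

<pre>{@code
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

final class CloneViaSerialization {

    // Deep-clone an object by serializing it to a byte array and reading
    // it back. The result is an independent copy, so the original stays
    // safe from the engine's side effects.
    static Object deepClone(Object original)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(original);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buffer.toByteArray()))) {
            return in.readObject();
        }
    }
}
}</pre>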
When looking for an instance of E in a base domain, the engine will not consider instances of subclasses of E.