test case generation; new software; model checking
In this paper, we show how to generate test cases for a component deployed into a new software environment, a problem that software engineers face whenever an existing component must be redeployed. Most existing model-based testing approaches derive their models from high-level specifications, which leaves a semantic gap between the specification and the actual implementation; moreover, the specification often must be manually translated into a model, a time-consuming and error-prone process. We propose generating the model automatically by abstracting the source code of the component with an under-approximating predicate abstraction scheme while leaving the environment concrete. Test cases are generated by iteratively executing the entire system and storing the border states between the component and the environment. A model checker explores the non-deterministic behaviors of the component that arise from concurrency or data abstraction, and the environment is symbolically simulated to detect refinement conditions. Assuming the run-time environment supports symbolic execution and produces a single unique response to a given input, we prove that our approach generates test cases with complete coverage of the component whenever the proposed algorithm terminates. When the algorithm does not terminate, the abstract-concrete model can be refined iteratively to generate additional test cases. Test cases generated from this abstract-concrete model can be used to check whether a new environment is compatible with the existing component.
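The iterative test-generation loop described above can be illustrated with a minimal sketch. Everything here is an assumption for exposition, not the paper's implementation: `component_step` stands in for model-checked exploration of the abstract component (it returns a set of successors, modeling non-determinism from concurrency or data abstraction), `environment_step` stands in for concrete execution of the environment (deterministic, one unique response per input), and the toy counter system is invented. Symbolic simulation and refinement on non-termination are omitted.

```python
from collections import deque

def generate_tests(component_step, environment_step, init_state, max_depth=10):
    """Iteratively execute the abstract component against the concrete
    environment, recording each border state (component output paired with
    the environment's response) as a test case."""
    tests = []
    seen = set()
    frontier = deque([(init_state, 0)])
    while frontier:
        state, depth = frontier.popleft()
        if state in seen or depth >= max_depth:
            continue
        seen.add(state)
        # Model-checking step: enumerate all abstract successors.
        for output in component_step(state):
            # Concrete execution step: the environment's unique response.
            response = environment_step(output)
            # The border state becomes a test case.
            tests.append((state, output, response))
            frontier.append((response, depth + 1))
    return tests

# Toy abstract component: a counter that may step by 1 or 2 (non-determinism).
component = lambda s: {s + 1, s + 2} if s < 4 else set()
# Toy concrete environment: deterministically caps the value at 5.
environment = lambda v: min(v, 5)

tests = generate_tests(component, environment, init_state=0)
```

Because the environment is deterministic, the recorded `(state, output, response)` triples can later be replayed against a candidate new environment: if its responses differ, the new environment is not compatible with the component.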