IMDEA Software

Juan P. Galeotti

Monday, January 12, 2015

10:45am, Meeting room 302 (Mountain View), level 3

Juan P. Galeotti, Post-doctoral Researcher, Saarland University, Germany

Automated test generation for classes with environment dependencies

Abstract:

Automated test generation for object-oriented software typically consists of producing sequences of calls aimed at achieving high code coverage. In practice, the success of this process may be inhibited when classes interact with their environment, such as the file system, the network, or user interactions. This leads to two major problems: First, code that depends on the environment sometimes cannot be fully covered simply by generating sequences of calls to a class under test, for example when the execution of a branch depends on the contents of a file. Second, even if environment-dependent code can be covered, the resulting tests may be unstable, i.e., they pass when first generated but may fail when executed in a different environment. For example, tests on classes that make use of the system time may have failing assertions if the tests are executed at a different time than when they were generated.
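
To make the two problems concrete, here is a small hypothetical Java class (not taken from the talk) whose coverage and test stability both depend on the environment:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Calendar;

// Hypothetical class under test, illustrating both problems.
public class GreetingService {

    // Problem 1: this branch depends on the contents of a file, so a
    // generated sequence of calls alone cannot force both outcomes.
    public String readGreeting(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line = reader.readLine();
            return (line == null || line.isEmpty()) ? "default" : line;
        }
    }

    // Problem 2: the result depends on the system time, so an assertion
    // recorded at generation time may fail when the test runs later.
    public String greet() {
        Calendar now = Calendar.getInstance();
        return now.get(Calendar.HOUR_OF_DAY) < 12 ? "Good morning" : "Good afternoon";
    }
}
```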

In this talk, we present an approach that applies bytecode instrumentation to automatically separate code from its environmental dependencies, and we extend the EvoSuite Java test generation tool so that it can explicitly set the state of the environment as part of the sequences of calls it generates. Using a prototype implementation, which handles a wide range of environmental interactions such as the file system, console inputs, and many non-deterministic functions of the Java virtual machine (JVM), we performed experiments on 100 Java projects randomly selected from SourceForge (the SF100 corpus). The results show significantly improved code coverage, in some cases by as much as +80%/+90%. Furthermore, our techniques reduce the number of unstable tests by more than 50%.
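
As a rough illustration of this idea, the following sketch routes an environment call through a replaceable hook that a generated test can set before exercising the code under test. All names here are hypothetical; EvoSuite's actual runtime and instrumentation differ.

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;

// Illustrative-only sketch: instrumented code calls this hook instead of
// System.currentTimeMillis(), so a generated test can pin the clock.
public class EnvironmentHook {

    // Defaults to the real system clock; a generated test may replace it.
    private static Clock clock = Clock.systemDefaultZone();

    // Instrumented replacement for System.currentTimeMillis().
    public static long currentTimeMillis() {
        return clock.millis();
    }

    // Called by the generated test to set the environment state explicitly.
    public static void setFixedTime(long epochMillis) {
        clock = Clock.fixed(Instant.ofEpochMilli(epochMillis), ZoneId.of("UTC"));
    }

    public static void main(String[] args) {
        // A generated test first pins the environment, then exercises the
        // code under test; the resulting assertion is now stable across runs.
        setFixedTime(0L); // 1970-01-01T00:00:00Z
        System.out.println(currentTimeMillis() == 0L
                ? "stable assertion holds"
                : "assertion would be unstable");
    }
}
```

Because the environment state becomes an explicit, settable part of the test, the generator can both cover branches that depend on it and record assertions that remain valid when the test is executed elsewhere.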