1.1. Trends in Software Model Checking

The development of modern software systems involves a rich collection of software artifacts. Artifacts may represent aspects of the process by which software is developed or the various products that result from the phases of that process. Disciplined approaches to software development apply criteria to establish essential properties of such artifacts; for example, a requirements model should be checked for completeness and consistency before system design begins. For large, complex software systems, such criteria should be formalized and their checks automated. While reasoning about structural properties of artifacts is efficient and can be useful (e.g., checking interface signatures), behavioral reasoning is the ultimate goal.

The potential of model checking technology for (a) detecting coding flaws that are hard to detect using existing quality assurance methods (e.g., bugs that arise from unanticipated interleavings in concurrent programs) and (b) verifying that system models and implementations satisfy crucial temporal properties and other lightweight specifications has led a number of international corporations and government research labs, including Microsoft, IBM, Lucent, NEC, NASA, and the Jet Propulsion Laboratory (JPL), to fund their own software model checking projects.

The effectiveness of the software model checking applications cited above has in most cases relied on the fact that the researchers carrying out the studies had detailed knowledge of the model checking tool being applied and understood very clearly (perhaps as the result of extensive experimentation) how to (a) map the problem being modeled onto the tool in a manner that draws on the tool's strengths and (b) configure the tool for optimal performance. In some cases, researchers were not satisfied with the results of mapping their problem to an available configuration of an existing tool and found it necessary to study an existing model checking framework in detail in order to customize it. In other cases, researchers recognized that they needed to develop a new framework targeted to the semantics of a family of artifacts (e.g., JPF for Java programs) or a specific domain (e.g., SLAM for device drivers).

We believe that a number of trends, both in the requirements of computing systems and in the processes by which they are developed, will drive people and organizations interested in applying model checking technology to rely increasingly on the customization and adaptation of existing tool frameworks or on the construction of new model checking tools.

Thus, there is a need for model checking tools that support customization and extensibility and that are easily embedded or encapsulated in larger development tools. Existing model checkers, however, including widely used tools such as SPIN, FDR2, and NuSMV, are designed to support a fixed input language using a fixed collection of state-space representations, reduction algorithms, and exploration algorithms. The capabilities of these tools have evolved over time, but that evolution has been limited to the capabilities that the tool developers themselves found useful or desirable. Moreover, most existing tools do not provide direct support for features found in object-oriented software systems. Recent versions of the SPIN model checker allow one to include C source directly in the Promela modeling language. While this does allow for a degree of extensibility, one might benefit more from comprehensive mechanisms for extensibility that are part of the overall design of the model checking framework.
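
To make this concrete, the fragment below sketches how SPIN's embedded-C primitives can be used; it is an illustration only, not an example drawn from the works discussed here, and it assumes SPIN version 4.x or later. The c_state declaration adds a C variable to SPIN's state vector, c_expr embeds a C expression as a guard, and c_code embeds C statements; the names counter and incrementer and the bound of 10 are hypothetical.

    /* Illustrative sketch only (assumes SPIN 4.x or later); names are hypothetical. */
    c_state "int counter" "Global" "0"     /* C variable tracked in SPIN's state vector */

    active proctype incrementer()
    {
      do
      :: c_expr { now.counter < 10 } ->    /* embedded C expression used as a guard    */
         c_code { now.counter++; }         /* embedded C statement updating the state  */
      :: c_expr { now.counter >= 10 } -> break
      od
    }

Such embedding operates at the level of the input language; the broader forms of extensibility discussed above, such as new state-space representations, reductions, or search strategies, must instead be designed into the framework itself.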

While it is possible to continue the practice of cannibalizing and modifying existing model checkers, or of building new model checkers from scratch, the level of knowledge and effort such activities currently require prevents many domain experts who are not also experts in model checking from successfully applying model checking to software analysis. Experts in different areas of software engineering have significant domain knowledge about the semantic primitives and properties of families of artifacts that could be brought to bear to produce cost-effective semantic reasoning via model checking; to make effective use of model checking technology, however, these experts should not be required to build their own model checker or to pore over the details of an existing model checker implementation while carrying out substantial modifications.