Tutorial

LEAP now supports the verification of parametrized liveness properties. This tutorial will soon be updated with information regarding how to perform this kind of verification. Meanwhile, examples can be found in the "Examples" section.

Here we describe the basic notions needed to use LEAP.

LEAP parses a program written in a C-like pseudo language and generates a transition system from the parsed information. In the case of concurrent programs, a parametrized transition system is generated. Once the transition system has been built, LEAP tries to verify a set of invariant candidates given as arguments. To accomplish this, it uses the information provided by a proof graph. A proof graph describes the dependencies between the invariant candidates, in addition to the strategies and tactics that should be used in each case. Below, we give a brief description of programs, invariant candidates and proof graphs.

Programs

LEAP parses programs in a C-like pseudo language with variable declarations, conditionals, loops and function calls with arguments. However, it does not currently support recursion. A valid program must contain at least a main function, which indicates to LEAP the starting point of execution:

procedure main ()
  int i
begin
  i := 2;
  call foo(i);
  return ();
end

procedure foo (j:int)
  int myInt
begin
  myInt := j;
  return ();
end

Lines starting with // are treated as comments. Global variables and initial assumptions can be specified at the beginning of the program file. Braces are used to denote a section that is executed atomically:

// This is a global variable declaration
global maxValue

// This is the initial assumption
assume 0 <= maxValue /\ maxValue <= 5

procedure main ()
  int i
begin
  // This is an atomic section
  {
    i := maxValue;
    i := i + 1;
  }
  return ();
end

To aid verification, it is possible to add ghost variables and ghost code. Ghost variables are declared using the ghost keyword. Ghost code is written between $ symbols. As ghost code does not belong to the original program, ghost code declarations need to be attached to a non-ghost program statement:

procedure main ()
  int i := 0
  ghost int previous_i
begin
  while (i < 10) do
    i := i + 1 $ previous_i := i; $
  done
  return ();
end

Finally, program lines can be labeled so they can be referred to later in an invariant candidate formula. Using a colon (:) it is possible to label a single line. A whole section composed of many lines can be labeled by adding [ and ] to the label name:

procedure main ()
  int i := 0
  ghost int previous_i := 0
begin
  :while_section[
  while (i < 10) do
    :i_increases
    i := i + 1 $ previous_i := i; $
  done
  :while_section]
  return ();
end

Invariant Candidates

Invariant candidates are written as formulas in .inv files. They can be parametrized by variables:

vars:
  tid t

invariant [bounded_i] :
  @while_section. ->
    (0 <= main::i(t) /\ main::i(t) <= 10 /\
     0 <= main::previous_i(t) /\ main::previous_i(t) <= 9)

The example above declares an invariant candidate named bounded_i. This name is important as it will be used later to identify the invariant candidate in the proof graph. In this case, the declared invariant is parametrized by a thread identifier t. This thread parameter needs to be used when referring to local variables in the formula. For example, main::i(t) can be read as the local variable i belonging to procedure main, parametrized by thread identifier t.

The @ symbol describes a program location. To denote a location it is possible to simply use the number of the actual program line (as for instance @42.), or a previously defined label (as @while_section. in the example above).
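
As a sketch, an invariant candidate over a numeric location could look as follows (the location @4. and the name i_nonneg are purely illustrative; the actual number depends on how the program lines are laid out):

vars:
  tid t

invariant [i_nonneg] :
  @4. -> 0 <= main::i(t)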

Proof Graphs

Proof graphs represent the proofs themselves. Basically, they describe the invariant candidates to be proved, along with all required supporting formulas, strategies and tactics. The simplest proof graph is one that establishes that an invariant candidate does not require any kind of support:

=> bounded_i

In this case, we are just saying that the previously defined invariant candidate bounded_i is inductive by itself. The double arrow => says that it is a parametrized invariant candidate, and thus the proof rules for concurrent systems should be used. For sequential invariant candidates, a simple arrow -> is used.
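
As a sketch, if we had declared a sequential (non-parametrized) candidate, say bounded_i_seq (a hypothetical name), the corresponding self-supporting entry would use the single arrow:

-> bounded_i_seq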

If an invariant candidate is not inductive by itself, extra formulas to act as support can be added before the arrow:

support_1, support_2, support_3 => bounded_i

In general, support is required only for some offending transitions and not for all transitions of the system. Adding support to transitions that do not require it only leads to an unnecessary loss of efficiency. Because of this, LEAP allows the user to explicitly state which transition requires which support:

=> bounded_i
   [ 1:  S: support_1;
     15: O: support_2;
     27: S: support_1, support_3 ]

This example says that bounded_i is proved invariant by adding support support_1 to transitions 1 and 27, support_2 to transition 15, and support_3 to transition 27. The S and O denote whether the support should be added to the self-consecution or the others-consecution premise, according to the concurrent proof rules P-INV and SP-INV. When dealing with sequential systems, the S and O are not required.
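
A sequential entry with per-transition support would then simply drop the S and O markers. The following is only a sketch, reusing the hypothetical names bounded_i_seq, support_1 and support_2 from above:

-> bounded_i_seq
   [ 1:  support_1;
     15: support_2 ]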

Some decision procedures are based on a finite-model property. For such decision procedures it is possible to specify the strategy to be used when computing the bounds of the finite-model property. Some of the available strategies are:

  • DNF: this strategy first computes the disjunctive normal form of the formula and then computes the bounds over the resulting conjunctions. This is the most precise strategy, but in some cases it may be inefficient due to the necessity of computing the DNF.
  • Pruning: as only some literals are relevant when computing the bounds, this strategy first prunes the formula, keeping only the potentially relevant literals. Then, it computes the DNF over the resulting formula. This strategy is faster than DNF, but it still suffers from the efficiency problems related to the computation of the DNF.
  • Union: this strategy considers only the literals appearing in the formula, and not the structure of the formula (as DNF and Pruning do). It is faster than the previous strategies, but it may produce a looser bound.

Besides the finite-model strategy, it is possible to specify tactics for simplifying the formula before it is passed to the SMT solver. There are four classes of tactics:

  • Support Split Tactics: These tactics receive a verification condition and split it into new verification conditions:
    • split-goal: if the goal of a verification condition consists of a large conjunction, then this tactic replaces the original VC with a new set of VCs. These new VCs are identical to the original VC, except that their goal has been replaced by each component of the conjunction.
  • Support Tactics: These tactics specify how the support of a formula should be computed. They receive a verification condition and they return a formula. The tactics in this group are:
    • full: it generates a full support, considering all partial assignments from the vocabulary of the verification condition to the vocabulary of the support.
    • reduce: it generates a reduced support, considering only full assignments from the vocabulary of the verification condition to the vocabulary of the support.
    • reduce2: it is similar to reduce, except that it also removes duplicates from the generated support.
  • Formula Split Tactics: these tactics are similar to support split tactics, except that they operate over formulas. A formula at this level is considered as an implication with an antecedent and a consequent. Some of these tactics are:
    • split-consequent: similar to split-goal.
    • split-antecedent-pc: if the program counter is restricted in the form of a disjunction in the antecedent, then this tactic generates new formulas forcing all possible cases of such disjunction.
  • Formula Tactics: these tactics simplify formulas by converting them into new ones. In this category we find the following tactics:
    • simplify-pc: propagates the program counter information, resulting in a simpler formula with no information regarding program counters at all.
    • propositional-propagate: propagates literals through the formula.
    • filter-strict: eliminates from the antecedent all literals without variables in common with the goal.
    • propagate-disj-conseq-fst: if the consequent is an implication with a literal as antecedent, then it propagates the literal to the consequent and adds such literal to the antecedent of the formula.
    • propagate-disj-conseq-snd: if the consequent is an implication with a literal as consequent, then it propagates the negation of the literal to the consequent and adds such negated literal to the antecedent of the formula.

Finite-model property strategies, as well as tactics, can be added to proof graphs as part of an invariant candidate entry or as part of any special case for particular transitions. In either case, the syntax is the same:

{ finite-model_strategy : support_split_tactic_list | support_tactic_list | formula_split_tactic_list | formula_tactic_list }

For instance, consider the proof graph we used as example before. We can specify finite-model property strategies and tactics this way:

=> bounded_i
   [ 1:  S: support_1 {union : split-goal | | simplify-pc | };
     15: O: support_2;
     27: S: support_1, support_3 ]
   { pruning : | reduce2 | split-consequent split-antecedent-pc | }

In this case we are saying that finite-model strategy pruning, support tactic reduce2 and formula split tactics split-consequent and split-antecedent-pc should be applied to all transitions except for transition 1. For transition 1, finite-model strategy union, support split tactic split-goal and formula tactic simplify-pc should be used.