Tutorial
LEAP now supports the verification of parametrized liveness properties. This tutorial will soon be updated with information regarding how to perform this kind of verification. Meanwhile, examples can be found in the "Examples" section.
Here we describe some basic notions needed to use LEAP.
LEAP parses a program written in a C-like pseudo language and generates a transition system from it. For concurrent programs, a parametrized transition system is generated. Once the transition system has been built, LEAP tries to verify a set of invariant candidates given as arguments. To accomplish this, it uses the information provided by a proof graph. A proof graph describes the dependencies between the invariant candidates, together with the strategies and tactics that should be used in each case. Below, we give a brief description of programs, invariant candidates and proof graphs.
Programs
LEAP parses programs in a C-like pseudo language with variable declarations, conditionals, loops and function calls with arguments. However, it does not currently support recursion. A valid program must contain at least a main procedure:
procedure main ()
  int i
begin
  i := 2;
  call foo(i);
  return ();
end

procedure foo (j:int)
  int myInt
begin
  myInt := j;
  return ();
end
Lines starting with // are comments. The example below also shows how to declare a global variable with the global keyword, state an initial assumption over it with assume, and group statements into an atomic section using curly braces:
// This is a global variable declaration
global
  maxValue

// This is the initial assumption
assume
  0 <= maxValue /\ maxValue <= 5

procedure main ()
  int i
begin
  // This is an atomic section
  {
    i := maxValue;
    i := i + 1;
  }
  return ();
end
In order to aid the verification, it is possible to add ghost variables and ghost code. Ghost variables are declared using the ghost keyword, and ghost code is enclosed between $ symbols:
procedure main ()
  int i := 0
  ghost int previous_i
begin
  while (i < 10) do
    i := i + 1
    $
      previous_i := i;
    $
  done
  return ();
end
Finally, program lines can be labeled so that they can be referred to later in an invariant candidate formula. Using a colon (:), a label can be attached to a single line, while a pair of labels of the form :label[ and :label] delimits a labeled block of lines:
procedure main ()
  int i := 0
  ghost int previous_i := 0
begin
  :while_section[
  while (i < 10) do
    :i_increases
    i := i + 1
    $
      previous_i := i;
    $
  done
  :while_section]
  return ();
end
Invariant Candidates
Invariant candidates are written as formulas over the program variables, parametrized by thread identifiers. For example:
vars:
tid t
invariant [bounded_i] :
@while_section. ->
(0 <= main::i(t) /\
main::i(t) <= 10 /\
0 <= main::previous_i(t) /\
main::previous_i(t) <= 9 )
The example above declares an invariant candidate named bounded_i, parametrized by the thread identifier t declared in the vars section.
The predicate @while_section. holds when the program counter of the thread is inside the labeled while_section block, while main::i(t) and main::previous_i(t) denote the local variables i and previous_i of procedure main in thread t.
Proof Graphs
Proof graphs represent the proofs themselves. Basically, they describe the invariant candidates to be proved, together with all required supporting formulas, strategies and tactics. The simplest proof graph is the one that establishes that an invariant candidate does not require any kind of support:
=> bounded_i
In this case, we are just saying that the previously defined invariant candidate bounded_i is expected to be inductive by itself, requiring no support.
If an invariant candidate is not inductive by itself, extra formulas to act as support can be added before the arrow:
support_1, support_2, support_3 => bounded_i
In general, support is required only for some offending transitions and not for all transitions of the system. Adding support to transitions that do not require it just leads to an unnecessary loss of efficiency. Because of this, LEAP allows the user to explicitly state which transitions require which support:
=> bounded_i [
1: S: support_1;
15: O: support_2;
27: S: support_1, support_3
]
This example says that the verification conditions for transition 1 require support_1, those for transition 15 require support_2, and those for transition 27 require both support_1 and support_3; all remaining transitions are verified without extra support.
Some decision procedures are based on a finite-model property. For such decision procedures it is possible to specify the strategy to be used when computing the bounds of the finite-model property. Some of the available strategies are:
DNF : this strategy first computes the disjunctive normal form (DNF) of the formula and then computes the bounds over the resulting conjunctions. This is the most precise strategy, but in some cases it may be inefficient due to the cost of computing the DNF.
Pruning : since only some literals are relevant when computing the bounds, this strategy first prunes the formula, keeping only the potentially relevant literals. Then, it computes the DNF of the resulting formula. This strategy is faster than DNF, but it still suffers from the efficiency problems related to the computation of the DNF.
Union : this strategy considers only the literals appearing in the formula and not the structure of the formula (as DNF and Pruning do). It is faster than the previous strategies, but it may produce a less tight bound. A small sketch of the difference follows.
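As a rough illustration of the difference between these strategies (a sketch only, not actual LEAP input or output, where a, b and c stand for arbitrary literals):

// Original formula
(a \/ b) /\ c

// DNF and Pruning: bounds are computed over each resulting conjunction
(a /\ c) \/ (b /\ c)

// Union: only the set of literals occurring in the formula is considered
{ a, b, c }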
Besides the finite-model strategy, it is possible to specify tactics for simplifying the formula before it is passed to the SMT solver. There exist four classes of tactics:
Support Split Tactics : these tactics receive a verification condition and split it into new verification conditions.
  split-goal : if the goal of a verification condition consists of a large conjunction, this tactic replaces the original VC with a new set of VCs. These new VCs are identical to the original one, except that the goal is replaced by each conjunct of the original goal (a small sketch follows this list).
Support Tactics : these tactics specify how the support of a formula should be computed. They receive a verification condition and return a formula. The tactics in this group are:
  full : generates full support, considering all partial assignments from the vocabulary of the verification condition to the vocabulary of the support.
  reduce : generates a reduced support, considering only full assignments from the vocabulary of the verification condition to the vocabulary of the support.
  reduce2 : similar to reduce, except that it also removes duplicates from the generated support.
Formula Split Tactics : these tactics are similar to support split tactics, except that they operate over formulas. At this level, a formula is considered an implication with an antecedent and a consequent. Some of these tactics are:
  split-consequent : similar to split-goal.
  split-antecedent-pc : if the program counter is restricted in the form of a disjunction in the antecedent, this tactic generates new formulas covering each case of the disjunction.
Formula Tactics : these tactics simplify formulas by converting them into new ones. In this category we find the following tactics:
  simplify-pc : propagates the program counter information, resulting in a simpler formula with no program counter information left.
  propositional-propagate : propagates literals through the formula.
  filter-strict : eliminates from the antecedent all literals that share no variables with the goal.
  propagate-disj-conseq-fst : if the consequent is an implication whose antecedent is a literal, it propagates that literal to the consequent and adds it to the antecedent of the formula.
  propagate-disj-conseq-snd : if the consequent is an implication whose consequent is a literal, it propagates the negation of that literal to the consequent and adds the negated literal to the antecedent of the formula.
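As a rough sketch of the effect of split-goal (illustrative only, not actual LEAP output), consider a verification condition whose goal is the conjunction from the bounded_i example. Applying split-goal yields one new VC per conjunct, each keeping the original antecedent:

// Goal of the original verification condition
0 <= main::i(t) /\ main::i(t) <= 10 /\ 0 <= main::previous_i(t) /\ main::previous_i(t) <= 9

// Goals of the four VCs produced by split-goal
0 <= main::i(t)
main::i(t) <= 10
0 <= main::previous_i(t)
main::previous_i(t) <= 9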
Finite-model property strategies as well as tactics can be added to proof graphs, either as part of an invariant candidate or as part of any special case for particular transitions. The syntax is the same in both cases:
{ finite-model_strategy :
support_split_tactic_list |
support_tactic_list |
formula_split_tactic_list |
formula_tactic_list }
For instance, consider the proof graph we used as example before. We can specify finite-model property strategies and tactics this way:
=> bounded_i [
1: S: support_1 {union : split-goal | | simplify-pc | };
15: O: support_2;
27: S: support_1, support_3
] { pruning :
| reduce2
| split-consequent split-antecedent-pc
|
}
In this case we are saying that the finite-model strategy union, together with the split-goal and simplify-pc tactics, is used for the verification conditions of transition 1, while the pruning strategy, together with the reduce2, split-consequent and split-antecedent-pc tactics, is used for the remaining verification conditions of bounded_i.