
Software Seminar Series (S3) - Spring 2010

Álvaro García

Tuesday, July 6, 2010

Álvaro García, Post-doctoral Researcher, IMDEA Software Institute

The Beta Cube

Abstract:

Pure lambda calculus reduction strategies have been thoroughly studied, as they constitute the foundations of evaluation in many programming languages. Sestoft collected and defined several of them as sets of big-step rules, thus clarifying the varying and often inaccurate definitions in the literature. Building on Sestoft's work, we present a rule template which can be instantiated to any of the foremost strategies, and several more. Abstracting the parameters of the template, we propose a space of reduction strategies we like to call the Beta Cube. We also formalise a hybridisation operator---informally suggested by Sestoft---which produces new strategies by composing a subsidiary and a base strategy from the cube. This space gives new and interesting insights into the algebraic properties of reduction strategies. In particular, we present and prove the Absorption Theorem, which states that subsidiaries are left identities of their hybrids. More properties of the cube remain to be explored.
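
For reference only (not part of the abstract), the following minimal Haskell sketch shows one instance of the kind of big-step rule set such a template covers: call-by-name evaluation of pure lambda terms to weak head normal form. The datatype and function names are ours.

  -- Pure lambda terms.
  data Term = Var String | Lam String Term | App Term Term
    deriving Show

  -- Naive substitution t[x := s]; adequate for illustration on closed terms
  -- with distinct binders (no capture-avoidance).
  subst :: String -> Term -> Term -> Term
  subst x s (Var y)   = if x == y then s else Var y
  subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
  subst x s (App f a) = App (subst x s f) (subst x s a)

  -- Big-step call-by-name evaluation to weak head normal form: reduce the
  -- operator; if it becomes an abstraction, substitute the unevaluated
  -- operand and continue.  Variables and abstractions are already answers.
  whnf :: Term -> Term
  whnf (App f a) = case whnf f of
                     Lam x b -> whnf (subst x a b)
                     f'      -> App f' a
  whnf t         = t

Roughly, the other strategies in the cube vary which subterms are reduced and how far; the hybridisation operator composes two such rule sets into a new one.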


José Miguel Rojas

Tuesday, June 15, 2010

José Miguel Rojas, PhD Student, The CLIP Laboratory, UPM

Compositional CLP-based Test Data Generation for Imperative Languages

Abstract:

Glass-box test data generation (TDG) is the process of automatically generating test input data for a program by considering its internal structure. This is generally accomplished by performing symbolic execution of the program, where the contents of variables are expressions rather than concrete values. The main idea in CLP-based TDG is to translate imperative programs into equivalent CLP ones and then rely on the standard evaluation mechanism of CLP to symbolically execute the imperative program. Performing symbolic execution on large programs quickly becomes expensive due to the large number and size of the paths that need to be explored. In this work, we propose compositional reasoning in CLP-based TDG as a means of scaling up towards handling realistic programs. The idea in compositional TDG is that, when testing a method M which invokes a method P, the approach adapts test cases available for P (given as preconditions on the input and postconditions on the output) to the current context. Importantly, compositional reasoning also gives us a practical solution for handling native code, which may be unavailable or written in a different programming language. Namely, if P is a native method, we can model its behavior by means of test cases, and compositional reasoning is able to use them. Both scalability and the handling of native code are considered main challenges in the fields of symbolic execution and TDG.


Alexander Malkis

Tuesday, June 1, 2010

Alexander Malkis, Post-doctoral Researcher, IMDEA Software Institute

Cartesian Abstraction and Verification of Multithreaded Programs

Abstract:

We consider the refinement of a static analysis method called thread-modular verification. It was an open question whether such a refinement could be done automatically. We present a counterexample-guided abstraction refinement algorithm for thread-modular verification and demonstrate its potential, both theoretically and practically.
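
For context (our notation, not taken from the abstract): thread-modular reasoning can be viewed as a Cartesian abstraction of the set of reachable global states, which for a two-thread program over state spaces Q1 and Q2 over-approximates a set of states by the product of its per-thread projections:

  \alpha_{\mathrm{cart}}(S) \;=\; \pi_1(S) \times \pi_2(S) \;\supseteq\; S,
  \qquad S \subseteq Q_1 \times Q_2.

Refinement then amounts to recovering, on demand, some of the correlations between threads that this product discards.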


Miguel Angel García de Dios

Tuesday, May 25, 2010

Miguel Angel García de Dios, PhD Student, IMDEA Software Institute

SSG: Smart & Secure GUI Builder

Abstract:

We present a development environment for automatically building smart, security-aware GUIs following a model-based approach. Our environment consists of a number of plugins that have been developed using the Eclipse framework and includes three model editors, a model-transformation tool, a model-analyzer tool, and a code generator.


John Gallagher

Tuesday, May 18, 2010

John Gallagher, Researcher, IMDEA Software Institute

Constraint-based abstraction of temporal logic

Abstract:

We apply the framework of abstract interpretation to derive an abstract semantic function for the modal μ-calculus. The abstract semantic function is constructed directly from the standard concrete semantics together with a Galois connection between the concrete state-space and an abstract domain. Thus there is no need for mixed or modal transition systems to abstract arbitrary temporal properties, as in previous work in the area of abstract model checking. Then we show that this leads directly to an effective implementation of an abstract model checking algorithm for CTL using abstract domains based on linear constraints. The implementation of the abstract semantic function makes use of an SMT solver. We describe an implemented system for proving properties of linear hybrid automata and give some experimental results.
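
As a one-line illustration of the construction (our notation, not necessarily the talk's): given a Galois connection (α, γ) between the concrete state-space and the abstract domain, each concrete semantic operator F is replaced by its best abstract counterpart

  F^{\sharp} \;=\; \alpha \circ F \circ \gamma,

and the μ-calculus fixpoints are then computed over the abstract domain; in the implementation described above, the entailments over linear constraints are discharged by the SMT solver.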


Pedro López-Garcia

Tuesday, May 11, 2010

Pedro López-Garcia, Researcher, The CLIP Laboratory, UPM

Non-failure analysis for (logic) programs

Abstract:

Non-failure analysis aims at inferring that procedure calls in a program will never fail. This type of information has many applications in logic/functional programming, such as error detection, program transformation, and parallel execution optimization (avoiding speculative parallelism), as well as estimating lower bounds on the computational cost of goals, which can be used for resource usage analysis, granularity control, abstraction-carrying code, etc. We provide a method that uses mode and (upper approximation) type information to detect calls and procedures in a program that can be guaranteed to produce at least one solution or not terminate. The technique is based on an intuitively very simple notion, that of a (set of) tests "covering" the type of a set of variables (i.e., for any element that belongs to the type, at least one test will succeed). We give sound algorithms for determining coverings that are precise and efficient in practice. Based on this information, we show how to identify calls and procedures that can be guaranteed not to fail at run time. Finally, we report on an implementation of our method and show that better results are obtained than with previously proposed approaches.
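
To make the "covering" notion concrete, here is a small, purely illustrative Haskell check (not the paper's algorithm) over a finite type: a set of tests covers a type if every value of the type satisfies at least one test.

  -- Illustrative only: the analysis in the talk establishes this property
  -- symbolically from mode and type information, without enumerating types.
  data Sign = Neg | Zero | Pos deriving (Eq, Enum, Bounded, Show)

  covers :: (Enum a, Bounded a) => [a -> Bool] -> Bool
  covers tests = all (\v -> any ($ v) tests) [minBound .. maxBound]

  -- {X = Zero, X = Pos} does not cover Sign (Neg satisfies neither test),
  -- so a call guarded only by these tests cannot be proved non-failing;
  -- adding the test X = Neg restores coverage.
  main :: IO ()
  main = do
    print (covers [(== Zero), (== Pos)])            -- False
    print (covers [(== Zero), (== Pos), (== Neg)])  -- True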


Rémy Haemmerlé

Tuesday, May 4, 2010

Rémy Haemmerlé, Researcher, The CLIP Laboratory, UPM

Coinductive Semantics for Constraint Handling Rules

Abstract:

Constraint Handling Rules (CHR) is a concurrent, committed-choice, rule-based programming language introduced in the 1990s by Fruehwirth. While it was originally conceived for the design and implementation of constraint solvers, it has come into use as a general-purpose concurrent programming language. CHR shares with its spiritual ancestor, Constraint Logic Programming (CLP), a clean declarative semantics consisting of a direct translation into logic. However, whereas fixpoint semantics is an important foundation of CLP, there is no equivalent notion for CHR that captures the behavior of the whole language. In this talk we will briefly present CHR and its logical semantics. Then we introduce a simple fixpoint semantics based on a greatest fixpoint computation. Finally, we show that the resulting language is an elegant framework for programming with coinductive reasoning on infinite objects.


Gilles Barthe

Tuesday, April 27, 2010

Gilles Barthe, Research Professor, IMDEA Software Institute

Cryptography is decidable

Abstract:

Cryptography is decidable


Alejandro Sanchez

Tuesday, April 20, 2010

Alejandro Sanchez, PhD Student, IMDEA Software Institute

Towards Decidability for Concurrent Programs that Manipulate Singly Linked List Datatypes

Abstract:

Formal verification of imperative programs provides a high guarantee of software reliability, but it is usually a human-intensive technique. Hence, verification is only used in practice for the most critical parts. Automation can improve the effectiveness of the verification process.

We are currently interested in the verification of concurrent programs that manipulate dynamic memory. Our approach is based on two fundamental blocks:

  • we use explicit regions (instead of Separation Logic) to reason about data structures allocated on the heap. This provides us with a more flexible way of describing the heap structure, at the price of some program annotations.
  • we use general verification diagrams. A verification diagram is a sound and complete method for representing proofs of temporal properties over reactive systems. The verification process using verification diagrams is reduced to a collection of first-order verification conditions, together with a few (decidable) tests over finite graphs.

Our current goal is to be able to automatically check the validity of the verification conditions discharged from a verification diagram. As concurrent datatypes keep in memory a set of nodes and pointers with some specific layout, the generated verification conditions can be expressed as predicates in specific theories. In the case of concurrent lists, a decision procedure capable of reasoning about memory cells, regions and ideal lists is needed.

This Tuesday I will (try to) give a brief description of the most recent results in decision procedures for singly linked lists and some extensions.


Dragan Ivanović

Tuesday, April 13, 2010

Dragan Ivanović, Researcher, The CLIP Laboratory, UPM

Automatic derivation of continuous-time models for service compositions

Abstract:

Service compositions define how several Web services (loosely coupled, platform-independent software components exposed on the Web with interfaces, operations, and associated data types) can be put together to achieve a more complex processing task, which can be both long-running and cross-organizational.

Quality of Service (QoS) properties of a service composition, such as running time, are an important part of service usability for the end user. We discuss how to turn a formal service description based on discrete state transitions (in particular, a variant of Petri nets) into a continuous-time model that can be used to approximate the running time of the composition, as well as the time-varying utilization of the composition's components. Such a model can also be used to explore different policies that the service provider may apply.


Pierre Ganty

Tuesday, March 30, 2010

Pierre Ganty, Assistant Research Professor, IMDEA Software Institute

An Abstraction Refinement Approach for Model-Checking

Abstract:

Model-checking techniques suffer from the state explosion problem, which prevents them from scaling up. During the past decades, researchers have tried to alleviate the state explosion problem using various techniques such as partial-order reduction, symbolic data structures, or abstraction.

I will present an abstraction-based approach, defined in my PhD thesis, to the model checking of safety properties. By using abstractions, we trade precision for tractability: the abstracted model-checking problem is 'easier' to solve, but its solution may be too imprecise to draw any conclusion on the original problem. In such a case, precision is recovered by refining the abstraction that is used. Iterating this process yields increased precision and eventually a solution to the original problem. We will illustrate our approach through simple examples and conclude by giving an application of our approach to the coverability problem for Petri nets. Our implementation of the above approach for Petri nets is the state of the art.

Our approach has been entirely formalized in the abstract interpretation framework which will be recalled at the beginning of the talk.
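
Schematically (our sketch, not the thesis algorithm), the loop described above can be pictured as follows, where 'check' is the abstract model checker and 'refine' produces a more precise abstraction whenever the abstract answer is inconclusive.

  -- Illustrative sketch of an abstraction-refinement loop for safety checking.
  data Verdict = Safe | Unsafe | DontKnow deriving Show

  refinementLoop :: (abstraction -> Verdict)      -- abstract model checker
                 -> (abstraction -> abstraction)  -- refinement step (adds precision)
                 -> abstraction                   -- initial, coarse abstraction
                 -> Verdict
  refinementLoop check refine ab =
    case check ab of
      DontKnow -> refinementLoop check refine (refine ab)  -- too imprecise: refine and retry
      verdict  -> verdict                                  -- Safe / Unsafe: conclusive answer

As the abstract states, for the Petri-net coverability instantiation this iteration eventually yields a conclusive answer.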


Laurent Mauborgne

Tuesday, March 16, 2010

Laurent Mauborgne, Associate Research Professor, École Normale Supérieure, Paris, France

Modification Shapes

Abstract:

A difficult point when verifying heap-manipulating programs is to be able to separate precisely the part of the heap that might be modified by a procedure from the part that will never be touched by that procedure. This is a key point in proving invariants, and in experiments on proving the correctness of design patterns such as the composite pattern, it was indeed identified as a very time-consuming task, in terms of annotations or guidance of the theorem prover.

In this talk, we will show that it is possible to infer automatically such modification shapes in the context of abstract interpretation with shape graphs.

This is joint work with Anindya Banerjee.


Cesar Sanchez

Tuesday, March 9, 2010

Cesar Sanchez, Assistant Research Professor, IMDEA Software Institute and CSIC

New Results on Regular Linear Temporal Logic (with a quick intro to Automata, Logic and Games)

Abstract:

Regular Linear Temporal Logic (RLTL) is a temporal logic that extends the expressive power of Linear Temporal Logic (LTL) to all ω-regular languages. This extension is achieved using only a first-order signature, without the need for fixpoint operators or automata-like operators. Moreover, all operators of LTL and ω-regular expressions are translated linearly, as contexts with the same number of "holes". RLTL is based on the fundamental idea of an operator that generalizes controlled repetition. In this talk I will present two new results on RLTL, namely: (1) a variation of RLTL which allows negation normal forms, and (2) a translation of RLTL into alternating parity automata on words using only 3 colors, which is in fact a generalization of hesitant automata.

The talk is structured in two parts (of roughly 30 minutes each). In the first part, I will present some connections between automata theory on infinite words, temporal logics and games. I will use these results in the second part of the talk, in which I will sketch our results.


Manuel Hermenegildo

Tuesday, February 23, 2010

Manuel Hermenegildo, Research Professor and Scientific Director, IMDEA Software Institute

Abstract Interpretation-based Debugging, Verification, Certification, and Optimization in Practice

Abstract:

This is a follow-up to my previous talk on the topic, where I presented the basics of abstract interpretation and of parametric abstract interpretation frameworks. In this talk we will explore hands-on the application of these ideas in our abstract interpretation-based framework, by demonstrating how it performs program verification and certification, bug detection and location, generation and simplification of run-time tests, and program optimizations, including automatic parallelization.


Aleks Nanevski

Friday, February 19, 2010

Aleks Nanevski, Assistant Research Professor, IMDEA Software Institute

Structuring the Verification of Heap-Manipulating Programs

Abstract:

Most systems based on separation logic consider only restricted forms of implication or non-separating conjunction, as full support for these connectives requires a non-trivial notion of variable context, inherited from the logic of bunched implications (BI). We show that in an expressive type theory such as Coq, one can avoid the intricacies of BI, and support full separation logic very efficiently, using the native structuring primitives of the type theory. Our proposal uses reflection to enable equational reasoning about heaps, and Hoare triples with binary postconditions to further facilitate it. We apply these ideas to Hoare Type Theory, to obtain a new proof technique for verification of higher-order imperative programs that is general, extendable, and supports very short proofs, even without significant use of automation by tactics. We demonstrate the usability of the technique by verifying the fast congruence closure algorithm of Nieuwenhuis and Oliveras, employed in the state-of-the-art Barcelogic SAT solver.
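
For readers unfamiliar with binary postconditions (our summary, not part of the abstract), a Hoare triple in this style has the shape

  \{\,p\,\}\; e\; \{\,q\,\}, \qquad
  p : \mathrm{heap} \to \mathrm{Prop}, \quad
  q : \mathrm{heap} \to \mathrm{heap} \to \mathrm{Prop},

where the postcondition q relates the initial and final heaps directly, avoiding auxiliary ghost variables that merely name the initial state and making equational reasoning about heaps easier to set up.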


Anindya Banerjee

Tuesday, February 16, 2010

Anindya Banerjee, Research Professor, IMDEA Software Institute

Region Logic for Local Reasoning about Global Invariants

Abstract:

Shared mutable objects pose challenges in reasoning, especially for data abstraction and modularity. We present a logic for error-avoiding partial correctness of programs featuring shared mutable objects. Using a first order assertion language, the logic provides heap-local reasoning about mutation and separation, via ghost fields and variables of type "region" (finite sets of object references). A new form of modifies clause specifies write, read, and allocation effects using region expressions; this supports a frame rule that allows a command to read state on which the framed predicate depends. We show the logic in use in proving the correctness of design patterns such as the composite pattern, the verification of which has been proposed as a challenge problem for specification and verification of sequential object-based programs.

Joint work with: David A. Naumann and Stan Rosenberg


Manuel “Stallman-Dominik” Hermenegildo

Tuesday, February 9, 2010

Manuel “Stallman-Dominik” Hermenegildo, Certified Org hacker, IMDEA (Free) Software

Getting organized collaboratively with Org

Abstract:

What to do when we have an empty slot in our popular Theory Lunch series? Respond to the overwhelming popular demand for a tutorial on "org" and the collaborative environment that we have set up based on it. Org is a tool which can be used to organize information, take notes, manage todo lists, maintain an agenda, generate web sites, interface with cool phones and iCal, and much, much more. And all of that within the warm comfort of your Emacs environment. While org was not designed per se for collaborative work, it turns out that it can be used easily for this purpose with a bit of tweaking and the help of a version control system. Based on this idea we have set up a number of facilities that can automatically keep you up to date on events such as conference deadlines, meetings, trips, project report due dates, holidays, or when your boss will be in today. Come to the talk to learn how.


Software Seminar Series (S3) - Fall 2009