
Software Seminar Series (S3) - Fall 2008

Tuesday, November 25, 2008

Irena Trajkovska, PhD Student, CLIP, Technical University of Madrid (UPM), Spain

Soundness Properties of DFA-based Business Protocols

Abstract:

As the adoption of SOA grows in enterprises and businesses, and as more and more complex information systems are reworked or designed anew on the basis of the emerging service-oriented paradigm, the complexity of the conversations that take place among the services provided by the different players grows accordingly. Connecting complex services by "pairing" them on a one-to-one basis no longer suffices. Instead, it becomes critical to support complex, possibly long-lasting multi-party conversations involving a diversity of services. The operations exposed by the services can no longer be restricted to a single service provider and a single service requester, but have to scale up to a multi-party environment in which each service describes how it can consume and produce messages in conversations involving arbitrary numbers of participants.

The research community is investigating the formalisms best suited to this task, which are collectively called business protocol languages. Proposals for business protocol languages range from adapted business process languages like BPELlight, to more formal approaches based on π-calculus, Deterministic Finite Automata (DFA) and Petri-Nets.

Conversations can be structured either as orchestrations, which describe the local point of view of one of the participants (called the subject), or as choreographies, which provide the global point of view in which all the different participants are inter-connected. Unfortunately, none of the business protocol languages proposed so far effectively leverages both orchestrations and choreographies; instead, each focuses on only one of the two. Relating orchestrations and choreographies is instrumental to the bigger picture of business protocol evolution, i.e., the adaptation of business protocol models to changed or emerging requirements such as changes to the behavior of partners, KPIs and QoS parameters to be met, and so on.

The talk will describe a formal meta-model for representing business protocols using Deterministic Finite Automata enriched with time conditions on the transitions. The running example will be a complex multi-party conversation. Building on the running example, we will present a set of soundness properties for choreography business protocol models: time-soundness, participant-soundness and full-soundness. These soundness properties are crucial for choreography business protocols to be used in distributed environments. For instance, the full-soundness property guarantees the progression of the execution of business protocol instances.
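
To make the flavour of such a meta-model concrete, here is a minimal sketch (in Haskell, with purely illustrative names; it is not the meta-model of the talk) of a DFA-based protocol whose transitions carry a message label and a time window:

  import qualified Data.Map as Map

  type State   = String
  type Message = String
  type Seconds = Int

  -- A transition fires on a message only if the time elapsed since the
  -- previous transition falls within the window [lower, upper].
  data Transition = Transition
    { target :: State
    , lower  :: Seconds
    , upper  :: Seconds
    }

  type Protocol = Map.Map (State, Message) Transition

  -- One execution step of a protocol instance.
  step :: Protocol -> State -> Message -> Seconds -> Maybe State
  step p s m elapsed = do
    t <- Map.lookup (s, m) p
    if lower t <= elapsed && elapsed <= upper t
      then Just (target t)
      else Nothing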


Tuesday, November 18, 2008

Guillem Marpons, PhD Student, Babel, Technical University of Madrid (UPM), Spain

Coding Rule Checking

Abstract:

Coding rules are often used in industry to codify software best practices and avoid the many hazardous constructions present in languages such as C or C++. Rules might also be devised to adapt programs to the available resources in specific execution environments. Predictable and customisable tools are needed to automatically measure adherence to these rules.

Many rules can be reformulated as logic programs, which provides both a framework for formal rule definition and a procedure for automatic rule conformance checking using a Prolog engine. The properties of the analysed software that are involved in rule definitions need to be transcribed into the same formalism used for rule specification. In our prototype, this is performed during normal compilation with a modified version of the GCC compiler.
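
As a rough sketch of the idea (in Haskell rather than Prolog, and with invented fact and rule names; the actual prototype encodes facts and rules as Prolog clauses), a rule becomes a query over the facts emitted by the compiler:

  -- Facts about the analysed program, as a compiler pass might emit them.
  data Fact = Calls String String   -- caller, callee
    deriving (Eq, Show)

  -- Illustrative rule: no function may mix allocation styles by calling
  -- both malloc and operator new.
  violators :: [Fact] -> [String]
  violators facts =
    [ f | Calls f callee <- facts
        , callee == "malloc"
        , Calls f "operator new" `elem` facts ]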

We are working on the definition of a completely declarative logic-based language that facilitates the definition of new rules and the formalisation of some widespread rule sets. It will need to be expressive enough to codify the many program properties these rules rely on, and to accommodate some of the many analyses computed by modern compilers.


Tuesday, November 11, 2008

Remy Haemmerle, Post-doctoral Researcher, CLIP, Technical University of Madrid (UPM), Spain

Temporal Semantics of Co-Logic Programming

Abstract:

Co-Logic Programming (co-LP) is a recently introduced paradigm that combines inductive and coinductive logic programming. While the semantics of classical LP is based on an induction principle, i.e., a least fixpoint computation, coinductive LP is based on a coinduction principle, i.e., a greatest fixpoint computation. Promising applications of co-LP include the handling of rational structures, concurrent LP, and model checking of infinite-state automata. Because coinductive LP is founded on the perhaps unintuitive notion of coinduction, its semantics is somewhat difficult to grasp. This is even more so in the general case of co-LP, which nests coinduction with induction, weakening the confidence of programmers in both co-LP interpreters and co-LP programs. In order to improve the intuitive understanding of co-LP, we will describe its semantics using the mu-calculus, a well-known class of temporal logics with explicit fixpoint operators that generalizes many temporal logics such as LTL, CTL and CTL*.
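
For intuition only (this is a Haskell analogy, not the mu-calculus semantics presented in the talk): lazy lists behave like coinductive, possibly rational structures, while a membership predicate over them is inductive, i.e., justified by a finite proof:

  -- A rational (cyclic) infinite structure, defined coinductively:
  ones :: [Int]
  ones = 1 : ones

  -- An inductive predicate: it holds only when a finite proof exists.
  member :: Eq a => a -> [a] -> Bool
  member _ []       = False
  member x (y : ys) = x == y || member x ys

  -- member 1 ones evaluates to True (a finite proof is found at the head),
  -- whereas member 2 ones diverges, since no finite proof exists.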


Tuesday, November 4, 2008

Miky Zamalloa, PhD Student, CLIP and Universidad Complutense de Madrid, Spain

Interpretive Decompilation of Bytecode to Prolog and its Applications

Abstract:

Decompiling low-level code to a high-level intermediate representation facilitates the development of analyzers, model checkers, etc., which reason about properties of the low-level code (e.g., Java bytecode, .NET Common Intermediate Language). Interpretive decompilation consists in partially evaluating an interpreter for the low-level language (written in the high-level language) w.r.t. the code to be decompiled. There have been proofs-of-concept that interpretive decompilation is feasible, but important open issues remain when it comes to decompiling a real language: Does the approach scale up? Is the quality of the decompiled programs comparable to that obtained by ad-hoc decompilers? Do decompiled programs preserve the structure of the original programs? In this work we address these issues by presenting, to the best of our knowledge, the first modular scheme to enable interpretive decompilation of low-level code to a high-level representation; namely, we decompile bytecode into Prolog. We introduce two notions of optimality. The first one requires that each method/block is decompiled just once. The second one requires that each program point is traversed at most once during decompilation. We demonstrate the impact of our modular approach and of the optimality issues on a series of realistic benchmarks. Decompilation times and decompiled program sizes grow linearly with the size of the input bytecode program, which demonstrates empirically the scalability of modular decompilation of low-level code by partial evaluation. Finally, as an application of the above decompilation scheme, I will outline how we can automatically generate structural test-cases for bytecode programs by CLP partial evaluation.
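
To illustrate the principle (a toy sketch in Haskell with an invented three-instruction bytecode; the work above uses a CLP interpreter and produces Prolog): interpretive decompilation specialises an interpreter like the one below with respect to a fixed bytecode program, and the residual program is the decompilation:

  data Instr = Push Int | Add | Mul

  -- A toy interpreter for a stack-based bytecode.
  run :: [Instr] -> [Int] -> [Int]
  run []            stack        = stack
  run (Push n : is) stack        = run is (n : stack)
  run (Add    : is) (x : y : st) = run is (x + y : st)
  run (Mul    : is) (x : y : st) = run is (x * y : st)
  run _             _            = error "stack underflow"

  -- Hand-written residual of specialising `run` w.r.t. the fixed program
  -- [Push 3, Push 3, Mul]: the interpretation overhead has disappeared.
  residual :: [Int] -> [Int]
  residual st = 9 : st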


Tuesday, October 28, 2008

Alvaro Garcia, PhD Student, IMDEA Software Institute

Can type systems help to write programs in a more "natural" way?

Abstract:

Church numerals are a functional encoding of the natural numbers. In the pure (untyped) lambda calculus, certain operations on Church numerals (addition, multiplication, etc.) can be defined "naturally", in the sense of being easy to understand and of reusing previously defined values. I've tried encoding these definitions in Haskell, but the Rank-n type system infers rather strange types for the numerals and operators. In his paper on System F, Reynolds suggested a polymorphic type for Church numerals and for arithmetic expressions. The way he defined them is not the same as the "natural" one (which can be expressed in System F as well). I have found that the natural definition can be expressed in Haskell (with some extra trickery) but not in Agda (due to restrictions on impredicativity). I wonder whether there is a type more general than Reynolds' that is still more specific than the one inferred by Haskell.
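
For reference, one standard Haskell rendering of the encoding under Reynolds' polymorphic type is sketched below (the newtype wrapper is one example of the "extra trickery" mentioned above; the definitions are illustrative, not the speaker's development):

  {-# LANGUAGE RankNTypes #-}

  newtype Church = Church (forall a. (a -> a) -> a -> a)

  zero :: Church
  zero = Church (\_ z -> z)

  suc :: Church -> Church
  suc (Church n) = Church (\s z -> s (n s z))

  -- Addition defined directly over the encoding:
  add :: Church -> Church -> Church
  add (Church m) (Church n) = Church (\s z -> m s (n s z))

  -- The "natural" definition, which just iterates the successor; it
  -- typechecks because m can be instantiated at the named type Church,
  -- whereas a bare polymorphic type would require impredicativity.
  add' :: Church -> Church -> Church
  add' (Church m) n = m suc n

  toInt :: Church -> Int
  toInt (Church n) = n (+ 1) 0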

I am interested in how to express programs more "naturally" and in the question of which features of a type system can help to achieve that. I found Haskell's Rank-n system and Agda's predicativity a hurdle for this case study. There are several proposals for making type inference possible with fewer and fewer type annotations (MLF, HMF, etc.). For my PhD, I'm learning about types in programming and about the role of type checking and type inference. I'm curious about the limits of inference (as opposed to checking) and about the mechanisms that let me express types and functions in a more "natural" way.


Tuesday, October 21, 2008

Terran Lane, Associate Professor, University of New Mexico, USA

Where Probability Meets First Order Logic: Open Questions in Relational Learning

Abstract:

Machine learning (ML) is essentially the field of identifying functions from observed data. For example, we can model relationships like the chance that it will rain given observations of temperature and pressure or the chance that a given program is infected with malware given observations of its compiled code.

For the past two or three decades, the bulk of work in ML has employed data representations that are essentially propositional -- all data elements are represented as fixed-length vectors of variable values. This representation works well for tasks like weather monitoring, but is not terribly well suited for modeling more complicated objects, such as programs. In response, a new approach has emerged in the last few years: so-called relational learning, in which data are represented with more sophisticated languages, such as first order logic (FOL). Unlike traditional FOL, however, these frameworks typically include a probability model in order to handle noise in the data, missing data, uncertainty in the knowledge base, and so on. While these frameworks have proven quite promising, there are a number of substantial open questions.
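
The representational contrast can be made concrete with a small sketch (illustrative Haskell types only, echoing the weather and malware examples above): a propositional learner sees fixed-length vectors, while a relational learner sees sets of facts over objects and their relationships:

  -- Propositional: every example is a fixed-length vector of variable values.
  data WeatherObs = WeatherObs
    { temperature :: Double
    , pressure    :: Double
    , rained      :: Bool
    }

  -- Relational: an example is a set of facts about objects (here, parts of
  -- a program) and the relations between them.
  data ProgramFact
    = Defines  String String   -- module defines function
    | Calls    String String   -- function calls function
    | Infected String          -- program unit flagged as malware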

In this interactive talk, I will introduce machine learning in general and lay out a prominent current model of FOL+probability. I will outline a number of open problems in this realm, and sketch my current thoughts on resolving some of them. I am actively seeking collaborators, so I welcome questions, discussion, and suggestions on any of these open problems.


Tuesday, October 14, 2008

Julián Samborski-Forlese, Research Intern, IMDEA Software Institute

Preservation of proof obligations for hybrid verification methods

Abstract:

Program verification environments increasingly rely on hybrid methods that combine static analyses and verification condition generation. While such verification environments operate on source programs, it is often preferable to achieve guarantees about executable code.

In this talk, I will briefly present the paper [1], in which we show that, for a hybrid verification method based on numerical static analysis and verification condition generation, compilation preserves proof obligations, and that it is therefore possible to transfer evidence from source to compiled programs. Our result relies on the preservation of analysis solutions under compilation, which is achieved by relying on a bytecode analysis that performs symbolic execution of stack expressions. Finally, the paper also gives a proof of soundness of hybrid verification methods.

[1] G. Barthe, C. Kunz, D. Pichardie and J. Samborski-Forlese. Preservation of proof obligations for hybrid verification methods. In SEFM 08. To appear.


Tuesday, October 7, 2008

Juan Manuel Crespo, Research Intern, IMDEA Software Institute

Separation Logic

Abstract:

Specification and verification of imperative programs that manipulate mutable data structures has always been a challenging problem. The main difficulty resides in the fact that reasoning about such programs requires a global account of the heap. Reynolds, O'Hearn and Yang have proposed an extension of Hoare Logic, Separation Logic, that enables local, compositional reasoning by allowing one to focus only on an instruction's "footprint", i.e., the portion of the heap the instruction accesses. In this talk I will try to illustrate the key ideas behind Separation Logic informally and discuss some of its limitations. I will also briefly comment on the related ongoing work with César Sánchez and Gilles Barthe.
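
As a small, informal illustration of the central connective (a Haskell sketch over finite heaps; the names and the brute-force heap splitting are purely illustrative): the separating conjunction P * Q holds of a heap that can be split into two disjoint parts satisfying P and Q respectively, which is what licenses the local, footprint-based reasoning of the frame rule:

  import qualified Data.Map as Map
  import Data.List (subsequences)

  type Loc    = Int
  type Heap   = Map.Map Loc Int
  type Assert = Heap -> Bool

  -- x |-> v : the assertion describing the singleton heap mapping x to v.
  pointsTo :: Loc -> Int -> Assert
  pointsTo x v h = h == Map.singleton x v

  -- Separating conjunction: the heap splits into two disjoint parts, one
  -- satisfying p and the other q (brute-force over all splits).
  sep :: Assert -> Assert -> Assert
  sep p q h =
    or [ p part && q (h `Map.difference` part)
       | ks <- subsequences (Map.keys h)
       , let part = Map.filterWithKey (\k _ -> k `elem` ks) h ]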


Tuesday, September 30, 2008

Peter Stuckey, Professor, The University of Melbourne, Australia

The G12 project and ACD term rewriting

Abstract:

The G12 project aims to allow one to model a combinatorial optimization problem in a high-level way, independently of the solving technology used to solve it. The model is then transformed into a low-level model that can be executed by a constraint solver. The same model can be transformed to use finite domain constraint solving, mixed integer linear programming, Boolean satisfiability (SAT) solving, or local search (e.g. simulated annealing), as well as hybrids of the different approaches.

The requirement to efficiently transform high-level models led us to develop Cadmium, an AC term rewriting language that treats conjunction specially. So-called ACD term rewriting allows one to refer, in a rewrite rule, to all the constraints that hold in the context of the term being rewritten. This makes it possible to write very concise and transparent code for rewriting logical formulae. This power of ACD term rewriting is, unsurprisingly, the hard part to implement efficiently.
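
The idea can be sketched as follows (in Haskell with an invented two-constructor constraint language; this is not Cadmium syntax): a conjunct is rewritten while consulting the other conjuncts as its context, here removing lower bounds that are implied by stronger ones:

  data Constraint = GE String Int   -- variable >= constant
                  | Tru
    deriving (Eq, Show)

  -- Rewrite each conjunct using the remaining conjuncts as its context.
  simplify :: [Constraint] -> [Constraint]
  simplify cs = [ simp c (filter (/= c) cs) | c <- cs ]
    where
      simp (GE x k) ctx
        | any (implies x k) ctx = Tru
      simp c _ = c
      implies x k (GE y k') = y == x && k' >= k
      implies _ _ _         = False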

In the talk I will briefly discuss G12 and then introduce Cadmium, before explaining how to implement ACD term rewriting and pointing out where we still need to improve the implementation of Cadmium.


Tuesday, September 23, 2008

Maria Garcia de la Banda, Associate Professor, Monash University, Australia

Symmetry Detection and Symmetry Breaking

Abstract:

Many constraint problems have a considerable number of symmetries. Detecting and breaking these symmetries is a very active research area, due to the potential speedups one can gain by avoiding the exploration of parts of the search tree that are symmetric to parts already explored. In this talk I will present work developed in collaboration with Chris Mears, Bart Demoen and Mark Wallace on symmetry detection and symmetry breaking. Regarding symmetry detection, we present the first framework capable of accurately detecting symmetries of a problem (rather than of an instance of the problem). Regarding symmetry breaking, we present the first implementation that can safely be used as a default symmetry breaking method, thanks to its low overhead and the fact that it does not interfere with the search heuristic used.
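
A toy example of the payoff (illustrative Haskell, not the framework of the talk): breaking the variable-interchange symmetry of a small problem with an ordering constraint prunes the symmetric parts of the search:

  -- Solutions of x + y + z == 6 over {1..4}, with and without breaking the
  -- symmetry that permutes the (interchangeable) variables.
  solutions :: Bool -> [[Int]]
  solutions breakSym =
    [ [x, y, z]
    | x <- dom, y <- dom, z <- dom
    , x + y + z == 6
    , not breakSym || (x <= y && y <= z) ]
    where dom = [1 .. 4]

  -- length (solutions False) == 10, one solution per permutation;
  -- length (solutions True)  == 3, one representative per symmetry class.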

