
Software Seminar Series (S3) - Spring 2012

Abu Nasser Masud

Wednesday, June 6, 2012

Abu Nasser Masud, PhD Student, The CLIP Laboratory, UPM

On the Termination of Integer Loops


In this talk, I will present our recent study of the decidability of termination for several variants of simple integer loops: loops without branching in the body and with affine constraints as the loop guard (and possibly a precondition). We show that termination of such loops is undecidable in several cases: when the loop body is expressed by a set of linear inequalities whose coefficients are drawn from Z U {r} with r an arbitrary irrational; when the loop is a sequence of instructions that compute either linear expressions or the step function; and when the loop body is a piecewise linear deterministic update with two pieces. The undecidability results are proven by reduction from counter programs, whose termination is known to be undecidable. For the common case of integer constraint loops with rational coefficients only, we have succeeded in proving neither decidability nor undecidability of termination; however, this attempt led to the result that a Petri net can be simulated with such a loop, which implies some interesting lower bounds. For example, termination for a given (partial) input is at least EXPSPACE-hard.
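To make the loop shape concrete: the famous Collatz iteration (a well-known illustration, not an example from the paper) has exactly the form in question: a piecewise linear deterministic update with two pieces, running under a single affine guard, and its termination on all inputs is a long-standing open problem. A minimal sketch in Python:

```python
def step(x):
    # Piecewise linear deterministic update with two pieces:
    # each branch computes an affine expression, selected by a linear condition.
    return x // 2 if x % 2 == 0 else 3 * x + 1

def run(x0):
    x, steps = x0, 0
    while x > 1:          # loop guard: a single affine constraint
        x = step(x)
        steps += 1
    return steps

print(run(6))  # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1: 8 steps
```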

Joint work with Amir M. Ben-Amram (The Tel-Aviv Academic College) and Samir Genaim (Complutense University of Madrid).

Pablo Chico de Guzmán

Tuesday, May 8, 2012

Pablo Chico de Guzmán, PhD Student, IMDEA Software Institute

Tabled Logic Programming and Constraints


The evaluation of Prolog, the most successful Logic Programming language, is based on the SLD resolution strategy. SLD performs depth-first search and is memory-efficient, but it can enter infinite loops even when a logical derivation exists for a Prolog query/program. Tabling attacks this declarativeness issue, ensuring termination of programs with the bounded term-size property by using a fixpoint procedure. We give an intuition of how tabled execution, an evolution of memoization, proceeds. Like memoization, tabling can improve the efficiency of programs that repeat computations, and it can also be used to evaluate programs with stratified negation.
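As a rough analogy in a non-Prolog setting (an illustrative sketch, not the talk's system): tabled evaluation of the classic left-recursive path/2 predicate corresponds to computing the transitive closure of a graph as a fixpoint over a table of derived facts, which terminates even on cyclic graphs where plain depth-first SLD search would loop forever:

```python
def transitive_closure(edges):
    """Fixpoint computation of all reachable pairs, in the spirit of
    tabled evaluation of left-recursive path/2: derived facts are stored
    in a table and reused instead of being recomputed (or re-entered)."""
    table = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(table):
            for (c, d) in edges:
                if b == c and (a, d) not in table:
                    table.add((a, d))
                    changed = True
    return table

# A cyclic graph: naive depth-first path search would not terminate here.
print(sorted(transitive_closure({(1, 2), (2, 3), (3, 1)})))
```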

Constraint LP (CLP) is a natural extension of LP that has attracted much attention. CLP systems apply constraint-solving techniques that blend seamlessly with the characteristics of logical variables. CLP systems can clearly also benefit from the power of tabling to improve their declarativeness and (in many cases) their efficiency, but existing solutions for combining tabling and CLP (TCLP) are unsatisfactory.

We present a complete implementation framework for TCLP that is independent of the constraint solver: the solver must only provide entailment and projection operations to the tabling engine. We have successfully applied this framework to solve Timed Automata problems by combining tabled evaluation with difference constraints. Our TCLP framework offers fixpoint computations parametrized by constraint domains, is embedded in a general-purpose programming language (Prolog), and shows promising performance results.
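A hypothetical sketch of the two operations the framework asks of the solver, for a store of difference constraints x - y <= c (assuming stores already closed under transitivity; all names here are illustrative, not the framework's actual API):

```python
class DiffStore:
    """A conjunction of difference constraints  x - y <= c,
    kept as a map (x, y) -> c.  Assumed closed under transitivity."""
    def __init__(self, bounds):
        self.bounds = dict(bounds)

    def entails(self, other):
        # self entails other if every constraint of other is implied
        # by an equal or tighter one in self.
        return all(k in self.bounds and self.bounds[k] <= c
                   for k, c in other.bounds.items())

    def project(self, call_vars):
        # Restrict the store to the variables of the tabled call.
        return DiffStore({(x, y): c for (x, y), c in self.bounds.items()
                          if x in call_vars and y in call_vars})

answer = DiffStore({('x', 'y'): 1, ('y', 'z'): 2})
call = DiffStore({('x', 'y'): 3})
print(answer.entails(call))   # the stored answer is tighter, so True
```

Entailment lets the engine reuse a stored answer for a more general call; projection keeps table entries small by dropping constraints on internal variables.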

Noam Zeilberger

Tuesday, April 24, 2012

Noam Zeilberger, Post-doctoral Researcher, IMDEA Software Institute

Some very preliminary thoughts on zero-knowledge in type theory


The notion of "zero-knowledge proof" (an instance of the more general notion of "interactive proof") is an intriguing concept from cryptography and complexity theory, originating in the 1980s. The intuitive idea is that by following a zero-knowledge protocol, one party ("the prover") convinces another party ("the verifier") of the truth of some proposition (e.g., that a particular graph contains a Hamiltonian cycle) without revealing any of its secret knowledge of *why* the proposition is true (e.g., here, the verifier learns nothing about the actual Hamiltonian cycle beyond the fact that it exists).

In this INFORMAL TALK, I will begin by recalling the formal definitions of interactive proof and of zero-knowledge, and then describe some of my ongoing work trying to better understand these concepts from the perspective of constructive logic and type theory. Zero-knowledge seems to represent a paradox to the pure constructivist (what does it *mean* to prove an existential statement without also showing how to build an object?), and my goal is to try to resolve this paradox by taking a broader view of constructive logic, in particular including the important notions of continuations and side-effects. After the initial introduction, the talk will consist of an interactive walk-through of some Haskell code.
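The talk's walkthrough is in Haskell; as a language-neutral toy (a Schnorr-style sigma protocol over a tiny group, purely illustrative and not from the talk), here the prover convinces the verifier that it knows x with g^x = y without revealing x:

```python
import random

p, g = 23, 5              # toy parameters: g generates the group mod p
x = 6                     # the prover's secret
y = pow(g, x, p)          # public: the claim is "I know x with g^x = y"

def one_round():
    r = random.randrange(p - 1)
    t = pow(g, r, p)                  # prover commits to a random r
    c = random.randrange(2)           # verifier sends a random challenge bit
    s = (r + c * x) % (p - 1)         # prover's response is blinded by r
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # verifier's check

print(all(one_round() for _ in range(50)))  # True: an honest prover always passes
```

A cheating prover can answer only one of the two possible challenges per commitment, so each round halves its chance of slipping through, while the transcript (t, c, s) can be simulated without x, which is the zero-knowledge part.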

Álvaro García Perez

Tuesday, April 10, 2012

Álvaro García Perez, PhD Student, IMDEA Software Institute

Standard strategies, spine strategies and full reduction in pure lambda calculi


Normal order is the standard full-reducing strategy of the pure lambda calculus. It delivers the normal form of a term if one exists and diverges otherwise. It can be defined as a hybrid strategy following either a normal (leftmost) or a spine (quasi-leftmost) approach. What are the proper corresponding strategies in Plotkin's lambda-value calculus? The answer is value normal order and value spine order. We construct these strategies from meta-theoretic considerations and introduce a precise characterisation of hybrid strategies.
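To fix intuitions, here is a minimal full-reducing normal-order normaliser for the pure lambda calculus (an illustrative sketch with naive substitution, assuming all bound variables are distinct; this is not the paper's formal machinery):

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)

def subst(t, v, s):
    """Naive substitution t[v := s]; assumes bound names are all distinct."""
    if t[0] == 'var':
        return s if t[1] == v else t
    if t[0] == 'lam':
        return t if t[1] == v else ('lam', t[1], subst(t[2], v, s))
    return ('app', subst(t[1], v, s), subst(t[2], v, s))

def whnf(t):
    """Reduce to weak head normal form: contract head redexes only."""
    while t[0] == 'app':
        f = whnf(t[1])
        if f[0] == 'lam':
            t = subst(f[2], f[1], t[2])
        else:
            return ('app', f, t[2])
    return t

def nf(t):
    """Normal order: head first (leftmost-outermost), then recurse,
    reducing under lambda abstractions for full reduction."""
    t = whnf(t)
    if t[0] == 'lam':
        return ('lam', t[1], nf(t[2]))
    if t[0] == 'app':
        return ('app', nf(t[1]), nf(t[2]))
    return t

I = ('lam', 'z', ('var', 'z'))
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))
D = ('lam', 'w', ('app', ('var', 'w'), ('var', 'w')))
omega = ('app', D, D)                      # the standard divergent term
print(nf(('app', ('app', K, I), omega)))   # normal order never touches omega
```

Because the strategy contracts the head redex first, K discards omega before it is ever reduced; a strict (value) strategy would diverge on this term, which is the gap the value-calculus counterparts must negotiate.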

The Standardisation Theorem and the Quasi-Leftmost Reduction Theorem substantiate the normal and the spine approaches, respectively, in the classical lambda calculus. Plotkin gave the former for a calculus with strict functional semantics. We provide an analogue of the latter for Plotkin's calculus, which we call the Negative-Weak Reduction Theorem. It shows that value spine order is correct and complete for full reduction in lambda-value. Value spine order speeds up normalisation by contracting some of the redexes in the bodies of lambda abstractions early, which has an impact on various language mechanisms (thunking) and features (beta-conversion testing).

This is joint work with Pablo Nogueira and has been submitted to ICFP 2012.

Emilio Gallego

Tuesday, March 27, 2012

Emilio Gallego, PhD Student, BABEL, UPM

A Categorical Abstract Machine for Logic Programming


We present an efficient declarative execution mechanism for Logic Programming. Our machine is based on the categorical version of the calculus of relations --- allegory theory. In particular, we use tabular allegories, whose main property is that every relation is tabulated by a pair of functions. For an allegory R, the set of tabulations is a regular category called Map(R).

A suitable allegory for Logic Programming is generated from a regular completion of a Lawvere Category of a logic program. Lawvere Categories represent algebraic theories and in our setting they capture the signature of the program. The new notion of "Regular Lawvere Category" is a perfect candidate for the category of maps of a given allegory.

Our machine is fully based on relation composition. This single primitive encompasses unification, garbage collection, parameter passing, environment creation and destruction.
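Purely as intuition for composition of relations as the single primitive (a naive set-of-pairs model, far removed from the categorical machinery of the talk):

```python
def compose(r, s):
    """Composition of binary relations viewed as sets of pairs:
    (a, c) is in r;s iff some intermediate b links them."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

# The textbook Logic Programming example: grandparent = parent ; parent.
parent = {('tom', 'bob'), ('bob', 'ann')}
print(compose(parent, parent))   # {('tom', 'ann')}
```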

In this talk, we will quickly survey the categorical foundations and motivations for the work. After that, we will present the compilation procedure, the machine specification, and the correspondence of categorical structures with implementation concepts such as pointers, registers and instructions. Lastly, we will discuss some extensions to our computational model, such as constraints, types, functions and monads.

Ruy Ley-Wild

Wednesday, March 21, 2012

Ruy Ley-Wild, Post-doctoral Researcher, IMDEA Software Institute

Non-Monotonic Self-Adjusting Computation


Self-adjusting computation is a language-based approach to writing incremental programs that respond dynamically to input changes by maintaining a trace of the computation consistent with the input, thus also updating the output. For monotonic programs, i.e., where localized input changes cause localized changes in the computation, the trace can be repaired efficiently by insertions and deletions. However, non-local input changes can cause major reordering of the trace; in such cases, updating the trace can be asymptotically as expensive as running from scratch. We show how to eliminate the monotonicity restriction by generalizing the update mechanism to use trace slices: partial fragments of the computation that can be reordered with some bookkeeping.
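A very rough flavour of the underlying idea (a toy per-block trace with change propagation; real self-adjusting computation traces the full dynamic dependence structure of the program, not fixed blocks):

```python
def build(blocks, f):
    """Record a trace: one (input_block, result) entry per block."""
    return [(b, f(b)) for b in blocks]

def change(trace, i, new_block, f):
    """Propagate a localized input change: recompute only entry i,
    reusing every other recorded result unchanged."""
    trace = list(trace)
    trace[i] = (new_block, f(new_block))
    return trace

def output(trace):
    return sum(r for _, r in trace)

f = sum
trace = build([[1, 2], [3, 4], [5, 6]], f)
print(output(trace))                       # 21
trace = change(trace, 1, [30, 40], f)      # blocks 0 and 2 are reused, not rerun
print(output(trace))                       # 84
```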

Boris Köpf

Tuesday, March 6, 2012

Boris Köpf, Assistant Research Professor, IMDEA Software Institute

Automatic Quantification of Cache Side-Channels


The latency gap between caches and main memory has been successfully exploited for recovering sensitive input to programs, such as cryptographic keys from implementations of AES and RSA. So far, there are no practical general-purpose countermeasures against this threat. In this talk I report on a novel approach for automatically deriving upper bounds on the amount of information about the input that an adversary can extract from a program by observing the CPU's cache behavior. At the heart of the approach is a novel technique for counting the concretizations of abstract cache states, which enables us to connect state-of-the-art techniques for static cache analysis and quantitative information-flow analysis. We implement our counting procedure on top of the AbsInt TimingExplorer, the most advanced engine for static cache analysis. We use our tool in a case study where we derive upper bounds on the cache leakage of a 128-bit AES executable on an ARM processor with a realistic cache configuration. We also analyze this implementation with a commonly suggested (but until now heuristic) countermeasure applied, obtaining a formal account of the corresponding increase in security.

Joint work with Laurent Mauborgne and Martin Ochoa.
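The role of counting can be illustrated with the standard max-entropy bound from quantitative information flow: if a deterministic side channel can produce at most n distinct observations, the adversary learns at most log2(n) bits about the secret (a generic textbook sketch, not the AbsInt-based tool):

```python
import math

def max_leakage_bits(observation_of):
    """Upper bound on leakage for a deterministic channel: log2 of the
    number of distinct observations the secrets can induce.  In the cache
    setting, 'observations' are the concrete cache states counted from
    the abstract states of a static analysis."""
    return math.log2(len(set(observation_of.values())))

# Toy channel: 8 possible keys collapse into only 4 distinct cache states,
# so at most 2 of the secret's 3 bits can leak.
channel = {k: k % 4 for k in range(8)}
print(max_leakage_bits(channel))   # 2.0
```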

Ruy Ley-Wild

Tuesday, February 28, 2012

Ruy Ley-Wild, Post-doctoral Researcher, IMDEA Software Institute

Subjective Concurrent Separation Logic


From Owicki-Gries' resource invariants and Jones' rely/guarantee to modern variants based on separation logic, axiomatic program logics for concurrency have only a limited form of compositionality. Proving non-trivial properties usually requires auxiliary state, which is "objective" in the sense that each thread's auxiliary state is given a globally-unique name. Since auxiliary state exposes the program's global structure to each local thread, axiomatic approaches fail to be compositional in the presence of auxiliary state.

We propose "subjective" auxiliary state as a solution to this historical limitation of axiomatic program logics. Each thread can be verified with a subjective view of auxiliary state: the auxiliary state is partitioned into what belongs to the self (i.e., the thread itself) and what belongs to the other (i.e., its environment). We formulate Subjective Concurrent Separation Logic as a combination of the resource-invariant method, separation logic, and subjective auxiliary state for a first-order, stateful, concurrent language. The logic provides compositional verification even when auxiliary state is needed.
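The classic motivating example is two threads that each increment a shared counter: with subjective auxiliary state, each thread records only its own contribution (self) and the invariant total = self + other needs no globally-named auxiliary variables. A toy Python rendition of the program (illustrating the program being verified, not the logic itself):

```python
import threading

lock = threading.Lock()
total = 0
contrib = [0, 0]    # per-thread "self" auxiliary state

def worker(i, n):
    global total
    for _ in range(n):
        with lock:
            total += 1
            contrib[i] += 1
        # From thread i's subjective view, the resource invariant is
        # total == contrib[i] (self) + contributions of others (other),
        # stated without naming the other threads.

threads = [threading.Thread(target=worker, args=(i, 1000)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total == contrib[0] + contrib[1] == 2000)   # True
```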

Alexander Malkis

Tuesday, February 21, 2012

Alexander Malkis, Post-doctoral Researcher, IMDEA Software Institute

Verification of Software Barriers


This poster describes the frontiers in verification of the software barrier synchronization primitive. So far, most software barrier algorithms have not been mechanically verified. We show preliminary results in automatically proving the correctness of the major software barriers.
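For reference, one well-known algorithm of the kind in question: a centralized sense-reversing barrier (an illustrative Python sketch, not one of the verified artifacts). The safety property to verify is that no thread passes the barrier in round k before every thread has arrived at round k.

```python
import threading

class SenseReversingBarrier:
    """Classic centralized sense-reversing barrier: the last thread to
    arrive resets the count and flips the global sense, releasing the
    threads waiting on the old sense."""
    def __init__(self, n):
        self.n = n
        self.count = n
        self.sense = False
        self.cv = threading.Condition()

    def wait(self):
        with self.cv:
            local_sense = not self.sense
            self.count -= 1
            if self.count == 0:          # last arrival: reset and release
                self.count = self.n
                self.sense = local_sense
                self.cv.notify_all()
            else:
                while self.sense != local_sense:
                    self.cv.wait()
```

Reusing a single counter across rounds is exactly what makes such barriers subtle: without the sense flag, a fast thread could race around and corrupt the count of the previous round.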
