
Software Seminar Series (S3) - Spring 2011

Rémy Haemmerlé

Tuesday, June 28, 2011

Rémy Haemmerlé, Post-doctoral Researcher, The CLIP Laboratory, UPM

Observational Equivalences for Linear Logic Concurrent Constraint Languages


Linear-logic Concurrent Constraint programming (LCC) is an extension of concurrent constraint programming (CC) in which the constraint system is based on Girard's linear logic instead of classical logic. In this talk, we address the problem of observational equivalences for this programming framework. To this end, we present a new and truly distributed operational semantics for LCC based on a labelled transition system. We then present different notions of observational equivalence inspired by the state of the art in Milner's process calculi.

Boris Köpf

Tuesday, June 14, 2011

Boris Köpf, Assistant Research Professor, IMDEA Software Institute

Information-theoretic Bounds for Differentially Private Mechanisms


There are two active and independent lines of research that aim at quantifying the amount of information that is revealed by computing on confidential data. Each line of research has developed its own notion of confidentiality: on the one hand, differential privacy is the emerging consensus guarantee used for privacy-preserving data analysis. On the other hand, information-theoretic notions of leakage are used for characterizing the confidentiality properties of programs in language-based settings. In this talk I will present recent work where we establish the first formal connections between both notions of confidentiality, and where we compare them in terms of the security guarantees they deliver. Joint work with Gilles Barthe.
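To make the two notions concrete, here is a small illustrative sketch (our own toy example, not from the talk): for the classic randomized-response mechanism on one secret bit, we compute both its differential-privacy parameter ε and its min-entropy leakage in the sense of Smith, under a uniform prior.

```python
import math

def randomized_response(p):
    # Channel matrix P[y|x] for one secret bit x: report the truth with
    # probability p, flip the bit with probability 1 - p.
    return {(x, y): (p if x == y else 1 - p) for x in (0, 1) for y in (0, 1)}

def epsilon_dp(channel):
    # Smallest eps such that P[y|x] <= e^eps * P[y|x'] for all y, x, x'.
    ratios = [channel[(x, y)] / channel[(x2, y)]
              for y in (0, 1) for x in (0, 1) for x2 in (0, 1)
              if channel[(x2, y)] > 0]
    return math.log(max(ratios))

def min_entropy_leakage(channel):
    # Smith's min-entropy leakage under a uniform prior on x:
    # L = log2( sum_y max_x P[y|x] )
    return math.log2(sum(max(channel[(x, y)] for x in (0, 1)) for y in (0, 1)))

ch = randomized_response(0.75)
print(epsilon_dp(ch))           # log(3) ≈ 1.0986
print(min_entropy_leakage(ch))  # log2(1.5) ≈ 0.585
```

Both quantities grow as p approaches 1, but they measure different things: ε bounds the worst-case ratio between output distributions, while min-entropy leakage measures the attacker's gain in guessing probability.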

Pedro López-García

Tuesday, June 7, 2011

Pedro López-García, Researcher, IMDEA Software Institute

Verification and Debugging of Resource Usage Properties in the CiaoPP System


We present a framework for static verification of general resource usage program properties. The framework extends the criterion of correctness as conformance of a program to a specification expressing non-functional global properties, such as upper and lower bounds on execution time, energy, or user-defined resources, given as functions of input data sizes. We have defined an abstract semantics for resource usage properties, together with operations to compare the (approximated) intended semantics of a program (i.e., the specification) with the approximated semantics inferred by static analysis. These operations include the comparison of arithmetic functions.

A novel aspect of our framework is that the outcome of the static checking of assertions can express intervals of input data sizes such that a given specification can be proved for some intervals but disproved for others. Such a specification can include both upper- and lower-bound resource usage functions, i.e., it can express intervals within which the resource usage of the program is supposed to lie. Specifications can also include preconditions expressing intervals within which the input data sizes of a program are supposed to lie. We have implemented our techniques within the Ciao/CiaoPP system in a natural way, so that the novel resource usage verification blends in with the CiaoPP framework, which unifies static verification and static debugging (as well as run-time verification and unit testing).

Dragan Ivanovic

Tuesday, May 31, 2011

Dragan Ivanovic, PhD Student, The CLIP Laboratory, UPM

Attribute Inference in Complex Service Workflows Based on Sharing Analysis


In Service-Oriented Computing, business processes are usually expressed as workflows that describe control and data dependencies between loosely coupled components (usually external Web services accessible over a network). Besides sequences and parallel flows, workflows may include branches, loops, and other complex control constructs.

Knowing properties of data items and of the activities that access them can help with a number of design- and run-time activities in the service world. These include workflow fragmentation (e.g., for distributed execution), data compliance checking (e.g., making sure that the given information is sufficient for a task), and robust top-down design (e.g., deriving properties for sub-workflows). Here, we consider a set of abstract Boolean properties (i.e., attributes) whose semantics depends on the application domain. We will use the example of attributes that describe information content, and of workflow fragmentation based on information flow across organizational domains.

The automated attribute inference relies on deriving a logic-program (Horn-clause) representation of the workflow and plugging into it the appropriate initial setups for arguments that reflect the attributes of the workflow input data. We then apply sharing analysis (an instance of abstract interpretation) and reinterpret the results as a concept lattice in order to assign input attributes to intermediate data items and activities in the workflow.

Álvaro García

Tuesday, May 24, 2011

Álvaro García, PhD Student, IMDEA Software Institute

Full Reduction in an Open Strict Calculus


Theorem provers based on dependent types rely on efficient full-reducers implementing a beta-equivalence tester for their typing rules. We present a full-reducing strategy in a strict calculus that allows open terms. We traverse a path starting at weak reduction in a non-strict calculus restricted to closed terms (call-by-name). By successively strengthening the congruence rules (reducing under lambda abstractions), ensuring normalisation (resorting to hybrid strategies to find a normal form, if it exists), restricting the notion of reduction (allowing only values as arguments), and generalising the set of legal terms (including open terms), we obtain a normalising strategy in Plotkin's lambda_V theory. We show how these four aspects (strengthening congruence, normalisation, by-value semantics, and open terms) are entangled in this traversal. Meta-theoretic issues regarding standardisation and confluence lead the way to full reduction.
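As a rough illustration of what a full-reducing strategy does (a toy sketch of ours, not the strategy from the talk), the following Python code normalises untyped lambda terms, reducing under abstractions as well as at the top level. Substitution here is naive, so it is only safe for terms whose bound-variable names are all distinct.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a)

def subst(t, x, s):
    """Naive substitution t[x := s]; assumes distinct bound-variable names."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def normalize(t):
    """Full reduction: contract beta-redexes and also reduce under lambdas."""
    tag = t[0]
    if tag == 'var':
        return t
    if tag == 'lam':
        return ('lam', t[1], normalize(t[2]))
    f = normalize(t[1])
    if f[0] == 'lam':                # beta step: (lam x. b) a -> b[x := a]
        return normalize(subst(f[2], f[1], t[2]))
    return ('app', f, normalize(t[2]))

# (lam x. x) (lam y. y) normalises to lam y. y
identity_app = ('app', ('lam', 'x', ('var', 'x')), ('lam', 'y', ('var', 'y')))
assert normalize(identity_app) == ('lam', 'y', ('var', 'y'))

# Reduction happens under the binder: lam z. (lam x. x) z  ->  lam z. z
under = ('lam', 'z', ('app', ('lam', 'x', ('var', 'x')), ('var', 'z')))
assert normalize(under) == ('lam', 'z', ('var', 'z'))
```

The second example is exactly what a weak (call-by-name or call-by-value) strategy would *not* do: it leaves terms under lambdas untouched, which is why theorem provers need the stronger congruence rules the talk discusses.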

Alexander Malkis

Tuesday, May 17, 2011

Alexander Malkis, Post-doctoral Researcher, IMDEA Software Institute

A Lower Bound on Resource Verification


In multi-threaded programs, data is often separated into lock-protected resources. Properties of those resources are typically verified by modular, Owicki-Gries-like methods. The Owicki-Gries proof structure follows the program syntax in an intuitive, cheap, and easily implementable and extensible manner (Concurrent Separation Logic, Chalice, VCC, RGSep, ...). Alas, some properties do not admit this proof structure. So far, nothing was known about the boundary between properties that admit this proof structure and properties that do not. We will characterize a class of properties that do admit an Owicki-Gries proof structure, showing which properties are always cheaply provable, and exhibit a worst-case approximation inherent to the Owicki-Gries method.

Aleks Nanevski

Tuesday, May 10, 2011

Aleks Nanevski, Assistant Research Professor, IMDEA Software Institute

Verification of Information Flow and Access Control Policies with Dependent Types


This talk presents Relational Hoare Type Theory (RHTT), a language and verification system for expressing and verifying rich information flow and access control policies via dependent types. A number of security policies, such as conditional declassification, information erasure, and state-dependent information flow and access control, which have been formalized separately in the security literature, can all be expressed in RHTT using only standard type-theoretic constructions such as monads, higher-order functions, abstract types, abstract predicates, and modules. RHTT can reason about such policies in the presence of dynamic memory allocation, deallocation, pointer aliasing, and pointer arithmetic.

Germán Delbianco

Tuesday, May 3, 2011

Germán Delbianco, Research Intern, IMDEA Software Institute

Applicative Shortcut Fusion


In functional programming, one usually writes programs as compositions of simpler functions. Consequently, the result of one function may be generated only to be consumed immediately by another. This potential source of inefficiency can often be eliminated using a technique called shortcut fusion, which fuses the two functions involved in a composition into a single, monolithic one. In this talk we look at shortcut fusion for applicative computations. Applicative functors provide a model of computational effects that generalises monads, but they favour an applicative programming style. To the best of our knowledge, this is the first time shortcut fusion has been analysed in an applicative setting.
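For readers unfamiliar with shortcut fusion, the classic (non-applicative) foldr/build rule can be sketched in Python; the names `foldr` and `build` follow the standard Haskell formulation, and this toy code is ours, not the talk's.

```python
from functools import reduce

def foldr(c, n, xs):
    # Right fold: foldr c n [x1, x2, ...] = c(x1, c(x2, ... c(xk, n)))
    return reduce(lambda acc, x: c(x, acc), reversed(xs), n)

def build(g):
    # build g materialises a list by instantiating g with cons and nil.
    return g(lambda x, xs: [x] + xs, [])

# The foldr/build rule: foldr c n (build g) == g c n.
# The fused form never allocates the intermediate list [1, 2, 3].
g = lambda c, n: c(1, c(2, c(3, n)))
unfused = foldr(lambda x, acc: x + acc, 0, build(g))
fused = g(lambda x, acc: x + acc, 0)
assert unfused == fused == 6
```

The talk's contribution is the analogue of this rule when the producer and consumer are effectful applicative computations rather than pure list functions.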

Guillem Marpons

Tuesday, April 26, 2011

Guillem Marpons, Researcher, BABEL, UPM

Layered Coding Rule Definition and Enforcing Using LLVM


Coding rules are used in industry to enforce good software practices that improve software reliability and maintainability. Some examples of standard rule sets are MISRA-C/C++ and High Integrity C++.

In previous work we developed a framework to formalize coding rules as logic programs that can later be run against a knowledge base of the program under inspection. In order to formalize all the rules in the standard sets, the knowledge base needs to contain information about the program ranging from syntactic to undecidable semantic properties. As many of the complex properties that coding rules depend upon (or approximations of them) are computed by static analyzers described in the literature, we aim at reusing those analyzers, presenting their results in a uniform way that can be easily understood and combined by the engineers responsible for defining new rules. In fact, some of these analyses are implemented in modern compilers, but their results are almost exclusively applied to optimization.

In this talk we present initial work on using the LLVM compiler infrastructure to enrich our base of program facts with pointer-aliasing information, and on how this information can be used to define and enforce actual coding rules.

This is joint work with Julio Mariño.

Pierre Ganty

Tuesday, April 12, 2011

Pierre Ganty, Assistant Research Professor, IMDEA Software Institute

Petri Nets and Finite-Index Context-Free Languages


We investigate the problem of determining whether the intersection of a context-free language (CFL) and a Petri net language (PNL) is empty. Our contribution to this long-standing problem, which relates, for instance, to the reachability analysis of recursive programs over unbounded data domains, is to identify a class of CFLs, called the finite-index CFLs, for which the problem is decidable. The k-index approximation of a CFL is obtained by discarding all the words that cannot be derived within a budget k on the number of non-terminal symbols. A finite-index CFL is thus a CFL that coincides with its k-index approximation for some k. We decide whether the intersection of a finite-index CFL and a PNL is empty by reducing the question to the reachability problem for Petri nets with ordered inhibitor arcs, a class of infinite-state systems for which reachability is known to be decidable. Conversely, we show that the reachability problem for a Petri net with ordered inhibitor arcs reduces to the emptiness problem for a finite-index CFL intersected with a PNL.
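As an illustrative toy (our own, not from the talk), the k-index approximation can be explored by enumerating sentential forms while discarding any form with more than k non-terminals. For the grammar S → SS | ab, a budget of 1 only yields "ab", while a budget of 2 already yields every (ab)^n:

```python
def k_index_words(rules, start, k, max_len):
    """Terminal words derivable while keeping at most k non-terminals in
    every intermediate sentential form (and at most max_len terminals).
    Non-terminals are exactly the symbols that appear as keys of `rules`."""
    seen, words, frontier = {start}, set(), [start]
    while frontier:
        form = frontier.pop()
        for i, sym in enumerate(form):
            if sym not in rules:
                continue                      # terminal symbol, skip
            for rhs in rules[sym]:
                new = form[:i] + rhs + form[i + 1:]
                nts = sum(c in rules for c in new)
                if nts > k or len(new) - nts > max_len or new in seen:
                    continue                  # budget exceeded or already seen
                seen.add(new)
                if nts == 0:
                    words.add(new)            # fully terminal: a derived word
                else:
                    frontier.append(new)
    return words

grammar = {'S': ['SS', 'ab']}
assert k_index_words(grammar, 'S', 1, 8) == {'ab'}
assert k_index_words(grammar, 'S', 2, 8) == {'ab', 'abab', 'ababab', 'abababab'}
```

Here the 2-index approximation coincides with the whole language (up to the length bound), so this grammar generates a finite-index CFL of index 2.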

César Sánchez

Tuesday, April 5, 2011

César Sánchez, Assistant Research Professor, IMDEA Software Institute

Efficient Complementation of Alternating Parity Automata (or why strong alternating automata are not that expensive)


The automata-theoretic approach to model checking reduces a model-checking problem to automata constructions and automata decision problems. In particular, a linear-time specification is translated into an equivalent automaton, which is then complemented, composed with the transition system, and finally checked for emptiness.

In this talk I will present new results related to the complementation of alternating automata with the parity acceptance condition (APW), how to translate temporal logic specifications into APW with few colors, and efficient translations from APW into nondeterministic Büchi automata (NBW) that can serve as the basis for emptiness checking.

Juan Caballero

Tuesday, March 29, 2011

Juan Caballero, Assistant Research Professor, IMDEA Software Institute

Differential Slicing: Identifying Causal Execution Differences for Security Applications


A security analyst often needs to understand two runs of the same program that exhibit a difference in program state or output. This is important, for example, for vulnerability analysis, as well as for analyzing a malware program that features different behaviors when run in different environments. In this paper we propose a differential slicing approach that automates the analysis of such execution differences. Differential slicing outputs a causal difference graph that captures the input differences that triggered the observed difference and the causal path of differences that led from those input differences to the observed difference. The analyst uses the graph to quickly understand the observed difference. We implement differential slicing and evaluate it on the analysis of 11 real-world vulnerabilities and 2 malware samples with environment-dependent behaviors. We also evaluate it in an informal user study with two vulnerability analysts. Our results show that differential slicing successfully identifies the input differences that caused the observed difference and that the causal difference graph significantly reduces the amount of time and effort required for an analyst to understand the observed difference.

Alexander Malkis

Tuesday, March 22, 2011

Alexander Malkis, Post-doctoral Researcher, IMDEA Software Institute

A Practical Generic Privacy Language


We present a declarative language with a formal semantics for specifying both users' privacy preferences and services' privacy policies. Expressiveness and applicability are maximized by keeping the vocabulary and semantics of service behaviours abstract. A privacy-compliant data-handling protocol for a network of communicating principals is described.

Laurent Mauborgne

Tuesday, March 15, 2011

Laurent Mauborgne, Researcher, IMDEA Software Institute

Solvers, Theories, and Static Analysis


Recent progress in satisfiability-modulo-theories (SMT) solvers has reached a point where it seems feasible to perform automatic static analysis of whole programs (and not just paths) using solvers. We will discuss the expected advantages and drawbacks of such approaches based on first-order logic domains, and provide semantic foundations for discussing their soundness. If time permits, we will also show the links between the reduction process used in abstract interpretation and the Nelson-Oppen decision procedure. The ultimate goal is to use such links to combine logical abstract domains using solvers with more ad-hoc abstract domains, in order to get the best of both approaches in one analyzer.

Alexey Gotsman

Wednesday, March 9, 2011

Alexey Gotsman, Assistant Research Professor, IMDEA Software Institute

Modular verification of preemptive OS kernels


Modern concurrency logics are sound only with respect to the interleaving semantics, which assumes scheduling is implemented correctly. This cannot be taken for granted in preemptive OS kernels, where the correctness of the scheduler is interdependent with the correctness of the code it schedules. Come to the talk to learn about how we can do things for real! This is joint work with Hongseok Yang.

Felipe Andrés Manzano

Tuesday, February 8, 2011

Felipe Andrés Manzano, PhD Student, FaMAF, Universidad Nacional de Córdoba - CONICET, Argentina

Efficient Symbolic Execution for Analysing Cryptographic Protocol Implementations


The analysis of code that uses cryptographic primitives is infeasible with current state-of-the-art symbolic execution tools. We develop an extension that overcomes this limitation by treating certain concrete functions, like cryptographic primitives, as symbolic functions whose execution analysis is entirely avoided; their behaviour is instead modelled formally via rewriting rules. We define concrete and symbolic semantics for a subset of the low-level virtual machine LLVM. We then show our approach sound by proving an operational correspondence between the two semantics. We present a prototype to illustrate our approach and discuss the next milestones towards the symbolic analysis of fully concurrent cryptographic protocol implementations.
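The idea of modelling primitives by rewriting rules can be sketched as follows (a toy Python illustration of ours, not the tool's actual representation): ciphertexts stay as uninterpreted symbolic terms, and the rule dec(enc(m, k), k) → m fires only when the keys match, so the symbolic executor never has to analyse the primitive's actual code.

```python
# Symbolic terms: plain strings are atoms, tuples are function applications.

def enc(m, k):
    # Encryption stays fully symbolic: just build the term enc(m, k).
    return ('enc', m, k)

def dec(c, k):
    # Rewriting rule: dec(enc(m, k), k) -> m; otherwise stay symbolic.
    if isinstance(c, tuple) and c[0] == 'enc' and c[2] == k:
        return c[1]
    return ('dec', c, k)

# Decrypting with the right key rewrites back to the plaintext atom.
assert dec(enc('msg', 'k1'), 'k1') == 'msg'
# With the wrong key the term stays stuck, as an uninterpreted value.
assert dec(enc('msg', 'k1'), 'k2') == ('dec', ('enc', 'msg', 'k1'), 'k2')
```

A protocol analysis then reasons over these stuck terms instead of over the bit-level output of a real cipher.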

Bio: Felipe Andrés Manzano obtained his degree in Computer Science from FCEIA, Universidad Nacional de Rosario, Argentina. He is currently a PhD candidate at FaMAF, Universidad Nacional de Córdoba, Argentina, working under the supervision of Ricardo Corin. His research focuses on testing and formal verification of cryptographic protocols and their implementations.

Tomas Poch

Thursday, January 20, 2011

Tomas Poch, Research Intern, IMDEA Software Institute

Pattern-based verification of concurrent programs


Reachability of a program location in a concurrent program is an undecidable problem. Pattern-based verification is a technique for deciding a related problem: whether the program location is reachable assuming that the threads are interleaved in a way corresponding to a given pattern. The pattern takes the form w_1* ... w_n*, a sequence of words each of which can be repeated arbitrarily many times. The talk will formulate pattern-based verification as a language problem and present the decision procedure in those terms. It will also cover applications of the technique to parallel processes communicating in rendezvous style as well as to parallel processes with shared memory, touch on recent complexity results, and comment on ongoing work on a tool implementation.
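The pattern language itself is easy to picture (a toy illustration of ours, using Python regular expressions rather than the talk's decision procedure): a pattern w_1* ... w_n* accepts exactly the schedules obtained by repeating each word, in order.

```python
import re

def pattern_regex(words):
    # The pattern w_1* ... w_n*: each word may repeat any number of times,
    # and the words must appear in the given order.
    return re.compile(''.join('(?:%s)*' % re.escape(w) for w in words) + '$')

p = pattern_regex(['ab', 'c'])
assert p.match('ababcc')   # (ab)^2 followed by c^2
assert p.match('')         # zero repetitions of every word is allowed
assert not p.match('cab')  # out of order: c may not precede ab
```

Pattern-based verification asks whether any schedule in this restricted (regular) set reaches the target location, which is what makes the problem decidable.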

Software Seminar Series (S3) - Fall 2010