Álvaro Fernández, PhD Student, BABEL, UPM
Differential privacy is a notion of confidentiality that protects the privacy of individuals while allowing useful computations on their private data. Deriving differential privacy guarantees for real programs is a difficult and error-prone task that calls for principled approaches and tool support. Approaches based on linear types and static analysis have recently emerged; however, an increasing number of programs achieve privacy using techniques that cannot be analyzed by these approaches. Examples include programs that aim for weaker, approximate differential privacy guarantees, programs that use the Exponential mechanism, and randomized programs that achieve differential privacy without using any standard mechanism. Providing support for reasoning about the privacy of such programs has been an open problem.
In this talk we will present CertiPriv, a machine-checked framework that relies on a (quantitative) probabilistic relational Hoare logic to reason about differential privacy.
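For readers who do not already have the definitions at hand, a brief informal recap (not part of the abstract): a randomized mechanism $M$ is $\epsilon$-differentially private if, for every pair of adjacent databases $D, D'$ (differing in a single individual's record) and every set $S$ of outputs,

  $\Pr[M(D) \in S] \le e^{\epsilon} \cdot \Pr[M(D') \in S]$,

while the weaker, approximate guarantee mentioned above, $(\epsilon, \delta)$-differential privacy, relaxes this to

  $\Pr[M(D) \in S] \le e^{\epsilon} \cdot \Pr[M(D') \in S] + \delta$.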
Federico Olmedo, PhD Student, IMDEA Software Institute
In this talk I will present two algorithmic approaches to static partial order reduction for Markov Decision Processes. These techniques can be used to alleviate the state space explosion problem associated with model checking of probabilistic systems. They use only information obtained by a static analysis of the programs' behaviour and can therefore be combined with symbolic model checking, which has outperformed explicit model checking with respect to the size of the systems it can handle. The talk is aimed at anyone interested in formal verification. Basic notions about model checking and partial order reduction will be provided, so no prior knowledge is required.
These algorithms are joint work with Frank Ciesinski and Christel Baier, from Technische Universität Dresden.
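As background, an informal recap (not part of the abstract): a Markov decision process can be given as a tuple $M = (S, Act, \mathbf{P}, s_0)$ with a set of states $S$, a set of actions $Act$, an initial state $s_0 \in S$, and a transition probability function $\mathbf{P} : S \times Act \times S \to [0,1]$ satisfying $\sum_{t \in S} \mathbf{P}(s, a, t) \in \{0, 1\}$ for every state $s$ and action $a$. Partial order reduction then builds a smaller MDP that, at each state, explores only a subset of the enabled actions, chosen so that the properties under verification are preserved.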
José Miguel Rojas Siles, PhD Student, The CLIP Laboratory, UPM
Test Data Generation (TDG) aims at automatically obtaining test inputs that can be used, by means of software testing tools, to validate the functional behaviour of a program. In this talk, we will introduce resource-aware TDG, whose purpose is to generate test cases (from which the test inputs are obtained) with associated resource consumptions. The framework is parametric with respect to the notion of resource (it can measure memory, execution steps, etc.) and allows using software testing to detect bugs related to non-functional aspects of the program. As a further step, we introduce resource-driven TDG, which uses resource consumption information to guide the test data generation process. We show that, given a resource policy, TDG can be guided to generate test cases that adhere to the policy and to avoid generating test cases that violate it.
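A minimal sketch of what these notions could look like in code; the names, the record and the cost model below are invented for illustration and are not taken from the framework:

import java.util.List;
import java.util.Map;

// Illustrative only: a test case paired with the resource consumption
// observed (or inferred) for the method under test on those inputs.
record TestCase(List<Object> inputs, Map<String, Long> cost) {}

// A resource policy decides whether a given consumption is acceptable,
// e.g. "no more than maxBytes of allocated memory".
interface ResourcePolicy {
    boolean admits(Map<String, Long> cost);
}

class MemoryBound implements ResourcePolicy {
    private final long maxBytes;
    MemoryBound(long maxBytes) { this.maxBytes = maxBytes; }
    @Override
    public boolean admits(Map<String, Long> cost) {
        return cost.getOrDefault("memory", 0L) <= maxBytes;
    }
}

class ResourceDrivenFilter {
    // Resource-aware TDG keeps the cost alongside each generated case;
    // one resource-driven use of that information is to report only the
    // cases that violate the policy, i.e. those exposing a non-functional bug.
    static List<TestCase> violations(List<TestCase> generated, ResourcePolicy policy) {
        return generated.stream().filter(tc -> !policy.admits(tc.cost())).toList();
    }
}

In the framework itself the generation process is guided by the policy (pruning paths that cannot satisfy or cannot violate it) rather than filtering after the fact; the sketch only illustrates the interfaces involved.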
Alexander Malkis, Post-doctoral Researcher, IMDEA Software Institute
We describe semi-automatic verification of the software barrier synchronization primitive. We improve the state of the art in automatically proving the correctness of software barrier algorithms. On the one hand, we show the best results achievable when proving correctness with off-the-shelf automatic verification tools, improving the capabilities of one of these tools along the way. On the other hand, for each algorithm we show a slightly more complicated version that still lies beyond the current automatic verification frontier. We have automatically verified a central barrier, a static tree barrier, a combining tree barrier, a dissemination barrier, a tournament barrier, a C implementation of a barrier, and a barrier together with its client.
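To fix ideas, here is a textbook sense-reversing central barrier in Java; this is an illustrative sketch of the kind of algorithm mentioned above, not the code that was verified:

import java.util.concurrent.atomic.AtomicInteger;

// Sense-reversing central barrier: each thread decrements a shared counter;
// the last one to arrive resets the counter and flips the global "sense"
// flag, releasing the threads that are spinning on it.
class CentralBarrier {
    private final int parties;
    private final AtomicInteger count;
    private volatile boolean sense = false;
    // Each thread keeps a local sense, initially opposite to the global one.
    private final ThreadLocal<Boolean> localSense =
        ThreadLocal.withInitial(() -> true);

    CentralBarrier(int parties) {
        this.parties = parties;
        this.count = new AtomicInteger(parties);
    }

    void await() {
        boolean mySense = localSense.get();
        if (count.decrementAndGet() == 0) {
            // Last thread to arrive: reset the counter and release everyone.
            count.set(parties);
            sense = mySense;
        } else {
            // Spin until the global sense catches up with this thread's sense.
            while (sense != mySense) { Thread.onSpinWait(); }
        }
        // Flip the local sense for the next round of the barrier.
        localSense.set(!mySense);
    }
}

Proving that no thread leaves await() before all parties have arrived, for any number of threads and rounds, is exactly the kind of obligation addressed in the talk.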
German Delbianco, PhD Student, IMDEA Software Institute
Hoare Type Theory (HTT) is a powerful tool for structuring the verification of heap-manipulating programs: it complements higher-order dependently-typed systems like Coq with features for the specification and verification of low-level stateful operations in the style of Hoare logic. This talk presents ongoing work triggered by the following question: what if the program itself is not so structured? Hoare-style reasoning about "jumpsy" programs has long been known to be intricate, to say the least. Our approach to tackling this matter is to extend the HTT language with a control operator, callcc, similar to the one you would find in your favourite functional programming language. We will discuss our current implementation, some results, several pitfalls we have encountered, and a couple of ideas for avoiding falling off the cliff.
Julio Mariño, Researcher, BABEL, UPM
Java monitors as implemented in the java.util.concurrent.locks package provide no-priority nonblocking monitors. That is, threads signalled after blocking on a condition queue do not proceed immediately, but have to wait until both the signalling thread and any other threads that may have requested the lock release it. This can be a source of errors (if threads that acquire the lock in between leave the monitor in a state incompatible with the signalled thread's re-entry) or of inefficiency (if repeated evaluation of preconditions is used to ensure safe re-entry). A concise implementation of priority nonblocking monitors in Java is presented. Curiously, our monitors are implemented on top of the standard no-priority implementation. In order to verify the correctness of our solution, a formal transition model (including a formalisation of Java locks and conditions) has been defined and checked using Uppaal. This formal model can be reused for model checking concurrent Java programs that make use of the standard locks library (without reentrancy). The talk will also present ongoing work on a full correctness proof using deductive methods.
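As a point of reference, the guarded-loop idiom that the no-priority ("signal-and-continue") semantics of java.util.concurrent.locks forces on clients is sketched below; this is a standard textbook example, not the implementation presented in the talk. The awakened thread must re-check its precondition precisely because other threads may acquire the lock, and invalidate the precondition, between the signal and the re-entry.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// One-slot buffer written against the standard (no-priority) locks library.
class OneSlotBuffer<T> {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private T slot = null;

    void put(T item) throws InterruptedException {
        lock.lock();
        try {
            // The loop (rather than a plain 'if') is mandatory: after being
            // signalled, this thread competes for the lock with any other
            // thread, which may refill the slot before we re-enter.
            while (slot != null) { notFull.await(); }
            slot = item;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    T take() throws InterruptedException {
        lock.lock();
        try {
            while (slot == null) { notEmpty.await(); }
            T item = slot;
            slot = null;
            notFull.signal();
            return item;
        } finally {
            lock.unlock();
        }
    }
}

With priority monitors the signalled thread re-enters as soon as the signalling thread releases the lock, so this re-check, and the repeated evaluation of preconditions it entails, is not needed; that is the inefficiency alluded to above.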
This is joint work with Angel Herranz.
Since the speaker is aware that the topic might be too specific and a little abstruse, that there may have been some overexposure to concurrency in recent presentations, and that the original spirit of the theory lunch has been hardly recognizable for some time now, an alternate 15-minute crash course on termination, abstract interpretation and process calculi could replace the talk above.
Mark Marron, Researcher, IMDEA Software Institute
I will be giving an informal presentation of results from a recently concluded study of the heap structures that appear in "real-world" programs. This work, done in conjunction with collaborators at UC Davis and Microsoft Research, has substantial implications in a number of areas of interest to researchers at IMDEA. In particular, the central message appears to be that the heap is, in practice, a relatively simple structure in which the vast majority of the sharing (aliasing) and shapes that are present can be described in relatively simple ways, closely related to standard programming idioms. This has interesting implications for type and annotation systems, the design of static analysis techniques, and work on software quality metrics.
As this work was motivated by a desire to view the heap from a naturalistic viewpoint (where we have an artifact that we wish to measure and understand) rather than an analytic viewpoint (where the heap is defined by a known formalism), a substantial portion of the talk will be devoted to presenting the resulting measurements. I will, of course, present our interpretation of what these measurements mean with respect to understanding the heap in practice, and our thoughts on the potential implications. However, as the goal of this work was to produce quantitative measurements that can be used to form and refute hypotheses about the heap in practice, I also hope to solicit insights and thoughts from the audience.
Alexey Gotsman, Assistant Research Professor, IMDEA Software Institute
Specifications of concurrent libraries are commonly given using the well-known notion of linearizability. However, to date linearizability has been little more than a box to be ticked in a paper presenting a new concurrent algorithm, lest the paper be rejected. The notion has not even been defined for the realistic settings in which concurrent algorithms are usually used! We show that linearizability is more important than that.
This is joint work with Hongseok Yang (University of Oxford, UK).
Aleksandar Nanevski, Assistant Research Professor, IMDEA Software Institute
Most interactive theorem provers provide support for some form of user-customizable proof automation. In a number of popular systems, such as Coq and Isabelle, this automation is achieved primarily through tactics, which are programmed in a separate language from that of the prover's base logic. While tactics are clearly useful in practice, they can be difficult to maintain and compose because, unlike lemmas, their behavior cannot be specified within the expressive type system of the prover itself.
In this talk, I will present a novel approach to proof automation in Coq that allows the user to specify the behavior of custom automated routines in terms of Coq's own type system. The approach involves a sophisticated application of Coq's canonical structures, which generalize Haskell type classes and facilitate a flexible style of dependently-typed logic programming. Specifically, just as Haskell type classes are used to infer the canonical implementation of an overloaded term at a given type, canonical structures can be used to infer the canonical proof of an overloaded lemma for a given instantiation of its parameters. I will also present a few design patterns for canonical structure programming that enable one to carefully and predictably coax Coq's type inference engine into triggering the execution of user-supplied algorithms during unification.