Goran Doychev, PhD Student, IMDEA Software Institute
Internet traffic is exposed to potential eavesdroppers. Standard encryption mechanisms do not provide sufficient protection: Features such as packet sizes and numbers remain visible, opening the door to so-called side-channel attacks against web traffic. In this talk, we present a novel framework for deriving formal security guarantees against traffic side-channels. We present a model which captures important characteristics of web traffic, and we define measures of security based on quantitative information flow. To enable the evaluation of real-life web applications, we propose algorithms for the efficient derivation of security guarantees, based on properties of Markov chains. We demonstrate the utility of our techniques in two case studies.
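As a hedged illustration of the kind of measure involved (not the framework from the talk), the following Python sketch computes the min-entropy leakage of a channel matrix, a standard quantitative-information-flow measure; the toy "page vs. response size" channels are hypothetical:

```python
import math

def min_entropy_leakage(prior, channel):
    """Min-entropy leakage (in bits) of a channel.

    Rows of `channel` are secrets, columns are observations:
    channel[s][o] = P(observation o | secret s)."""
    # Prior vulnerability: probability of guessing the secret in one try.
    v_prior = max(prior)
    # Posterior vulnerability: expected guessing success after observing.
    n_obs = len(channel[0])
    v_post = sum(max(prior[s] * channel[s][o] for s in range(len(prior)))
                 for o in range(n_obs))
    return math.log2(v_post / v_prior)

# Two secret pages, perfectly distinguished by their response sizes:
leaky = [[1.0, 0.0],
         [0.0, 1.0]]
# Padded responses: sizes identical regardless of the page requested:
padded = [[0.5, 0.5],
          [0.5, 0.5]]
uniform = [0.5, 0.5]
print(min_entropy_leakage(uniform, leaky))   # 1.0 bit leaked
print(min_entropy_leakage(uniform, padded))  # 0.0 bits leaked
```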
This is joint work with Boris Köpf and Michael Backes.
Juan Manuel Crespo, Post-doctoral Researcher, IMDEA Software Institute
Secure multi-execution (SME) is a dynamic technique to ensure secure information flow. In a nutshell, SME enforces security by running one execution of the program per security level, and by reinterpreting input/output operations w.r.t. their associated security level. SME is sound, in the sense that the execution of a program under SME is non-interfering, and precise, in the sense that for programs that are non-interfering in the usual sense, the semantics of a program under SME coincides with its standard semantics. A further virtue of SME is that its core idea is language-independent; it can be applied to a broad range of languages. A downside of SME is that existing implementation techniques require modifications to the runtime environment, e.g. the browser for Web applications. In this work, we develop an alternative approach where the effect of SME is achieved through program transformation, without modifications to the runtime, thus supporting server-side deployment on the web. We show on an exemplary language with input/output and dynamic code evaluation (modeled after JavaScript's eval) that our transformation is sound and precise. The crux of the proof is a simulation between the execution of the transformed program and the SME execution of the original program. This proof has been machine-checked using the Agda proof assistant. We also report on prototype implementations for a small fragment of Python and a substantial subset of JavaScript. In this talk I will try to explain the intuition behind SME and hopefully convince you that its runtime behaviour can be mimicked by statically transforming programs.
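To convey the core idea, here is a minimal Python sketch of SME for two security levels (an illustration only, not the transformation from the paper); the `leak` program and its channel names are invented for the example:

```python
def sme(program, inputs, default=0):
    """Toy secure multi-execution for two levels: run the program once per
    security level; the 'low' run sees a default value in place of high
    inputs, and each run's outputs are kept only at its own level."""
    outputs = {'low': [], 'high': []}
    for level in ('low', 'high'):
        def read(chan_level):
            # Reinterpret inputs: the low run must not see high inputs.
            if chan_level == 'high' and level == 'low':
                return default
            return inputs[chan_level]
        def write(chan_level, value):
            # Only the run at the channel's own level actually outputs on it.
            if chan_level == level:
                outputs[level].append(value)
        program(read, write)
    return outputs

# An interfering program: it copies the high input to the low channel.
def leak(read, write):
    secret = read('high')
    write('low', secret)    # under SME, the low run only sees the default
    write('high', secret)

print(sme(leak, {'low': 1, 'high': 42}))
# → {'low': [0], 'high': [42]}: the secret never reaches the low channel
```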
Antonio Nappa, PhD Student, IMDEA Software Institute
Drive-by downloads are the preferred distribution vector for many malware families. In the drive-by ecosystem many exploit servers run the same exploit kit, and it is a challenge to understand whether an exploit server is part of a larger operation. In this paper we propose a technique to identify exploit servers managed by the same organization. We build an infrastructure to collect over time how exploit servers are configured and what malware they distribute, grouping servers with similar configurations into operations. Our operational analysis reveals that although individual exploit servers have a median lifetime of 19 hours, long-lived operations exist that operate for several months. To sustain long-lived operations, miscreants are turning to the cloud, with 60% of the exploit servers hosted by specialized cloud hosting services. We also observe operations that distribute multiple malware families, and that pay-per-install affiliate programs are managing exploit servers for their affiliates to convert traffic into installations. To understand how difficult it is to take down exploit servers, we analyze the abuse reporting process and issue abuse reports for 19 long-lived servers. We describe the interaction with ISPs and hosting providers and monitor the results of the reports. We find that 61% of the reports are not even acknowledged. On average, an exploit server remains alive for 4.3 days after a report.
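As a toy illustration of grouping servers by configuration similarity (not the paper's actual features or clustering algorithm), one could greedily merge servers whose hypothetical configuration feature sets overlap above a Jaccard-similarity threshold:

```python
def jaccard(a, b):
    """Jaccard similarity of two feature sets."""
    return len(a & b) / len(a | b)

def group_operations(servers, threshold=0.5):
    """Greedy single-linkage grouping: a server joins the first group
    containing a member whose feature set is similar enough; otherwise
    it starts a new group (a toy stand-in for operation clustering)."""
    groups = []
    for name, feats in servers.items():
        for g in groups:
            if any(jaccard(feats, servers[m]) >= threshold for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

# Hypothetical per-server configuration features (kit, family, landing URL):
servers = {
    's1': {'kit=BlackHole-2.0', 'family=zbot', 'landing=/main.php'},
    's2': {'kit=BlackHole-2.0', 'family=zbot', 'landing=/index.php'},
    's3': {'kit=Phoenix-3.1', 'family=cridex', 'landing=/go.php'},
}
print(group_operations(servers))  # → [['s1', 's2'], ['s3']]
```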
Susana Muñoz, Researcher, Technical University of Madrid (UPM), Spain
The RFuzzy framework is a Prolog-based tool for representing and reasoning with fuzzy information. The advantages of our framework over previous tools along this line of research are its easy, user-friendly syntax and its expressiveness through the availability of default values, types, overloaded operators, quantifiers, similarity, etc.
In this work we describe the formal syntax, the operational semantics and the declarative semantics of RFuzzy (based on a lattice). We provide a free, publicly available implementation (it can be downloaded from http://babel.ls.fi.upm.es/software/rfuzzy/). Besides implementation details, we also discuss some actual applications using RFuzzy.
The goal of the talk is to give an idea of the simple RFuzzy syntax and to encourage the audience to use it in their research. Previous collaborations have been very successful (in both results and publications), and future collaborations are welcome.
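RFuzzy itself is Prolog-based, but the underlying idea can be hinted at in a few lines of Python: truth values are degrees in [0, 1], connectives are interpreted over that lattice (here with the Gödel t-norm min/max), and predicates may carry default values. All names below are invented for illustration:

```python
# Fuzzy connectives over the lattice [0, 1] (Gödel t-norm / t-conorm):
def f_and(*degrees): return min(degrees)
def f_or(*degrees):  return max(degrees)

# A fuzzy predicate with a default value, in the spirit of RFuzzy:
tall = {'alice': 0.9, 'bob': 0.4}

def degree(pred, x, default=0.5):
    # Individuals with no explicit degree fall back to the default.
    return pred.get(x, default)

print(f_and(degree(tall, 'alice'), degree(tall, 'bob')))  # 0.4
print(degree(tall, 'carol'))                              # 0.5 (default)
```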
Raúl Alborodo, Researcher, BABEL, UPM
TACO is a tool for formal verification of programs that helps developers find bugs in early stages. TACO translates annotated Java sources into DynAlloy in order to verify the generated specification using SAT solving as the underlying formal method.
The aim of this presentation is to introduce TACO and two extensions developed as part of my MSc thesis. The first one, called "Back2Java", automatically generates a Java witness of a detected bug. The second one implements a modular analysis on top of the DynAlloy code generator.
These extensions are intended to make the tool easier to use by hiding the underlying SAT solving from the user and also to contribute to the scalability of relational modeling, as the modular analysis can alleviate state explosion.
This is joint work with Nazareno Aguirre, Nicolás Ricci and Juan Galeotti.
François Dupressoir, Post-doctoral Researcher, IMDEA Software Institute
Security protocols and APIs are difficult to specify and implement. A reference or prototype implementation written in C is often the most formal specification at hand. In this paper, we show how general-purpose C verifiers can be used to prove computational security properties, including---for the first time---computational indistinguishability, of protocols and APIs written in C. To do so, we rely on the VCC verifier to prove that the C program has the same observable input-output behaviour as a functional reference implementation written in F#. We then use the F7 type-checker to prove perfect security properties of the reference code assuming ideal cryptographic primitives. Finally, we reduce the security of a probabilistic polynomial-time program that uses concrete cryptographic primitives to the security of the same program using ideal cryptography. We illustrate our method on the implementation of an exemplary key management API, inspired by the next version of the TPM standard.
Maria Garcia de la Banda, Associate Research Professor, Monash University, Australia
Lazy Clause Generation is a powerful approach for reducing search in Constraint Programming. It works by recording the reasons that lead to failure (nogoods) and propagating them using SAT technology. Symmetry breaking is also a powerful approach for reducing search that works by ensuring the execution does not explore symmetric parts of the search space.
In this talk, I will first show how we can extend Lazy Clause Generation to break symmetries, and how the more precise nogoods generated by a lazy clause solver allow our approach to exploit symmetries that cannot be exploited by any previous symmetry breaking method. Then, I will show how the method can easily be modified to exploit almost-symmetries (that is, symmetries that would appear in the problem if a small set of constraints were removed) very effectively. This is significant because almost-symmetries appear in many real-world problems and can be exploited to yield significant speedups.
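The core observation can be sketched in Python under simplifying assumptions: every symmetric image of a recorded nogood is itself a nogood, so a single conflict can prune several symmetric parts of the search space. This is a toy sketch, not the actual solver integration:

```python
def symmetric_nogoods(nogood, symmetries):
    """Given a nogood (a set of (variable, value) assignments that leads
    to failure) and a list of variable symmetries (permutations given as
    dicts), return the nogood together with all its symmetric images:
    each image can be posted to the solver as an extra nogood."""
    images = {frozenset(nogood)}
    for sigma in symmetries:
        images.add(frozenset((sigma.get(v, v), val) for v, val in nogood))
    return images

# Toy problem where swapping variables x0 and x1 is a symmetry:
nogood = {('x0', 1), ('x2', 3)}
swap01 = {'x0': 'x1', 'x1': 'x0'}
for ng in sorted(map(sorted, symmetric_nogoods(nogood, [swap01]))):
    print(ng)
# → [('x0', 1), ('x2', 3)]
# → [('x1', 1), ('x2', 3)]
```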
Juan Manuel Crespo, PhD Student, IMDEA Software Institute
Program synthesis offers an effective means to exhaustively explore the design space of a class of algorithms: it can serve as a basis for understanding existing algorithms as points into a well-mapped space and for showing that they are optimal or, on the contrary, for discovering better alternatives. These benefits make synthesis of cryptographic constructions particularly attractive. However, applying synthesis to cryptography has remained challenging, partly because cryptographic constructions are probabilistic algorithms and because their security is formulated as a probabilistic indistinguishability property.
We have developed an automated tool which synthesises provably secure public-key encryption schemes based on trapdoor permutations and hash functions. The tool comprises three independent components: 1) a generation algorithm that synthesises candidate schemes, using simple yet effective symbolic filters that ensure some baseline properties of interest; 2) a checker which verifies that a candidate is IND-CPA and generates a certificate in the form of a reduction with concrete security bounds (independently verifiable in Easycrypt); and 3) a checker which verifies that a candidate is IND-CCA and computes concrete security bounds although not yet a certificate.
The applicability of our techniques is illustrated by generating and analysing the security of over 120,000 schemes. Our tool synthesises (and proves secure) most schemes from the literature, as well as many schemes that have not been studied before. In particular, we analyse 120 variants of OAEP, a padding scheme commonly used to strengthen RSA encryption. Our tool proves the security of most secure OAEP variants and computes security bounds that in many cases match the best bounds known from the literature.
In this talk, I will try to present a high level description of the tool, as well as some details of the first two components.
Diego Esteban Alonso Blas, PhD Student, Universidad Complutense de Madrid, Spain
Static Cost Analysis aims at estimating the amount of resources that an execution of a given program consumes. By resource we mean any quantitative measure such as number of issued instructions, memory space usage, visits to a program point, etc.
The classical approach to cost analysis is based on transforming a given program into a system of cost relations (recurrence relations) and solving them into closed-form upper and lower bounds. It is known that for some programs the classical approach infers bounds that are asymptotically less precise than the actual cost. It was assumed that this imprecision is merely due to the way cost relations are solved into upper and lower bounds. In this talk we show that this assumption is only partially true, and identify the reason why cost relations cannot precisely model the cost of such programs.
To overcome this imprecision, we develop the notion of net-cost functions, which model the cost of terminating executions of a program procedure as a function of its inputs and outputs. We show a method to verify upper and lower bounds on the net cost, using satisfiability modulo Tarski's theory of real closed fields (the real numbers). This method is extended to template-based synthesis of net-cost bounds by using quantifier elimination modulo that same theory.
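The classical pipeline can be illustrated with a toy Python example: a concrete cost defined by a recurrence, a candidate closed-form bound, and a sampled check of the bound. An actual verifier would prove the bound for all inputs, e.g. via SMT over real closed fields as in the talk; the procedure and bound here are invented for illustration:

```python
def cost(n):
    """Concrete cost given by the recurrence C(0) = 0, C(n) = C(n-1) + n,
    e.g. a loop whose body performs n, n-1, ..., 1 steps."""
    return 0 if n == 0 else cost(n - 1) + n

def upper_bound(n):
    # Candidate closed-form bound obtained by solving the recurrence:
    # C(n) = n * (n + 1) / 2 (exact for this toy example).
    return n * (n + 1) // 2

# Sampled check of the bound; a verifier would discharge this for all n.
assert all(cost(n) <= upper_bound(n) for n in range(50))
print(cost(10), upper_bound(10))  # → 55 55
```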
Juan Diego Campo, Researcher, Universidad de la República, Udelar, Uruguay
Virtualization platforms allow multiple operating systems to run on the same hardware. One of their central goals is to provide strong isolation between guest operating systems; unfortunately, they are often vulnerable to practical side-channel attacks. Cache attacks are a common class of side-channel attacks that use the cache as a side channel.
We formalize an idealized model of virtualization that features the cache and the Translation Lookaside Buffer (TLB), and that provides an abstract treatment of cache-based side-channels. We then use the model for reasoning about cache-based attacks and countermeasures, and for proving that isolation between guest operating systems can be enforced by flushing the cache upon context switch.
In addition, we show that virtualized platforms are transparent, i.e. a guest operating system cannot distinguish whether it executes alone or together with other guest operating systems on the platform.
The models and proofs have been machine-checked in the Coq proof assistant.
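A toy Python model (far simpler than the Coq formalization) hints at why flushing works: in this sketch the shared cache is the only channel between guests, so clearing it on context switch removes the attacker's observation. All names are invented for illustration:

```python
class ToyPlatform:
    """Toy model of a cache shared between guest operating systems."""
    def __init__(self, flush_on_switch):
        self.flush_on_switch = flush_on_switch
        self.cache = set()          # set of cached memory lines

    def access(self, addresses):
        # Hits vs. misses are what an attacker can observe via timing.
        hits = [a in self.cache for a in addresses]
        self.cache.update(addresses)
        return hits

    def switch(self):
        # The countermeasure: flush the cache on context switch.
        if self.flush_on_switch:
            self.cache.clear()

def attacker_observation(flush):
    p = ToyPlatform(flush)
    p.access(['secret_line'])         # victim touches a secret-dependent line
    p.switch()                        # context switch to the attacker guest
    return p.access(['secret_line'])  # attacker probes the same line

print(attacker_observation(flush=False))  # [True]: hit reveals victim activity
print(attacker_observation(flush=True))   # [False]: isolation enforced
```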
Peter Stuckey, Professor, The University of Melbourne, Australia
Two competing approaches to handling complex constraints in satisfaction and optimization problems using SAT and LCG/SMT technology are: decompose the complex constraint into a set of clauses; or (theory) propagate the complex constraint using a standalone algorithm and explain the propagation. Each approach has its benefits. The decomposition approach is prone to an explosion in size to represent the problem, while the propagation approach may require exponentially more search since it does not have access to intermediate literals for explanation. In this paper we show how we can obtain the best of both worlds by lazily decomposing a complex constraint propagator using conflicts to direct it. If intermediate literals are not helpful for conflicts then it will act like the propagation approach, but if they are helpful it will act like the decomposition approach. Experimental results show that it is never much worse than the better of the decomposition and propagation approaches, and sometimes better than both.