**Antonio Nappa**, *PhD Student, Instituto IMDEA Software*

Cybercriminals use different types of geographically distributed servers to run their operations, such as C&C servers for managing their malware, exploit servers for distributing the malware, payment servers for monetization, and redirectors for anonymity. Identifying the server infrastructure used by a cybercrime operation is fundamental for defenders, as it enables take-downs that can disrupt the operation and is a critical step towards identifying the criminals behind it.

In this paper, we propose a novel active probing approach for detecting malicious servers and compromised hosts that listen for (and react to) incoming network requests. Our approach sends probes to remote hosts and examines their responses, determining whether the remote hosts are malicious or not. It identifies different malicious server types as well as malware that listens for incoming traffic such as P2P bots. Compared with existing defenses, our active probing approach is fast, cheap, easy to deploy, and achieves Internet scale.

We have implemented our active probing approach in a tool called CyberProbe. We have used CyberProbe to identify 151 malicious servers and 7,881 P2P bots through 24 localized and Internet-wide scans. Of those servers, 75% were unknown to publicly available databases of malicious servers, indicating that CyberProbe can achieve up to 4 times better coverage than existing techniques. Our results reveal an important provider locality property: operations host an average of 3.2 servers on the same hosting provider to amortize the cost of setting up a relationship with the provider.
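The core of such a scan can be sketched as a fingerprint match on probe responses; the payloads and patterns below are purely illustrative, not CyberProbe's actual fingerprints:

```python
import re

# Hypothetical response signatures: a real system would derive these from
# network traces of known malicious server families (these patterns are
# invented for illustration only).
SIGNATURES = {
    "fake-av-csc": re.compile(rb"HTTP/1\.1 200 OK.*X-Backend: av7", re.S),
    "exploit-kit": re.compile(rb"HTTP/1\.1 302 Found.*Location: /load\.php", re.S),
}

def classify(response: bytes):
    """Match a probe response against known signatures; None means unknown/benign."""
    for family, pattern in SIGNATURES.items():
        if pattern.search(response):
            return family
    return None
```

A real scanner would send a crafted probe over a socket to each candidate host and feed the raw response to `classify`; any match flags the host for further inspection.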

**Michael Emmi**, *Researcher, Instituto IMDEA Software*

Observational refinement between software library implementations is key to modular reasoning: knowing that any program behavior using some sophisticated library implementation L is also possible using a simplistic implementation S allows substituting S for L, thus simplifying reasoning about any module that uses L. High-performance data structure implementations and their abstract specifications are canonical examples where such refinement is beneficial.

While automating refinement checking between deterministic sequential implementations is relatively straightforward, by ensuring that each sequence of operations executed by L is also executable by S, checking refinement between a concurrent implementation L and its (usually sequential) specification S presents a severe complication: to which linear order σ should the concurrently-executed operations of L be resolved, in order to check whether σ is also executable by S? Existing automated approaches either require manual effort to fix the linearization point of each operation, thus uniquely determining σ, or suffer an exponential explosion from considering all possible linearizations σ.

In this work we develop a fundamentally different approach to automated refinement checking, called operation counting. Rather than reasoning over sequences of executed operations, we reason over counts of executed and executing library operations; we say implementation L refines S when any valuation to the counters which is reachable with L is also reachable with S. We demonstrate that this notion of refinement is not only equivalent to observational refinement, but leads to effective means for automatic checking.
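A toy rendering of the inclusion check at the heart of operation counting (the two systems and their bounds are invented; the real analysis computes reachable valuations from the library code, and no linearization order is ever guessed):

```python
from itertools import product

# Each implementation is abstracted to the set of counter valuations
# (ops_executing, ops_executed) it can reach; refinement then becomes
# set inclusion between these valuation sets.

def reachable_counts(max_concurrent, max_total):
    """All (executing, executed) valuations under the given toy bounds."""
    return {
        (executing, executed)
        for executing, executed in product(range(max_total + 1), repeat=2)
        if executing <= max_concurrent and executing + executed <= max_total
    }

# L: an implementation in which 2 operations may overlap;
# S: an atomic specification running at most one operation at a time.
L = reachable_counts(max_concurrent=2, max_total=3)
S = reachable_counts(max_concurrent=1, max_total=3)

refines = L <= S  # False: valuation (2, 0) is reachable with L but not with S
```

The point of the sketch is the shape of the check: a subset test over counter valuations replaces any enumeration of operation sequences.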

**Dario Fiore**, *Assistant Research Professor, Instituto IMDEA Software*

We address the problem in which a client stores a large amount of data with an untrusted server in such a way that, at any moment, the client can ask the server to compute a function over some portion of its outsourced data. In this scenario, the client must be able to efficiently verify the correctness of the result despite no longer knowing the inputs of the delegated computation; it must be able to keep adding elements to its remote storage; and it must not have to fix in advance (i.e., at data outsourcing time) the functions that it will delegate. Even more ambitiously, the client should be able to verify in time independent of the input size -- a very appealing property for computations over huge amounts of data.

In this talk I will present novel cryptographic techniques that solve the above problem for the class of computations which can be expressed by arithmetic circuits of bounded degree. In particular I will discuss an efficient solution for the case of quadratic polynomials over a large number of variables. This class covers a wide range of significant arithmetic computations -- notably, many important statistics. To confirm the efficiency of our solution, we show encouraging performance results, e.g., correctness proofs have size below 1 kB and are verifiable by clients in less than 10 milliseconds.
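To illustrate the computation class rather than the cryptographic machinery: sample variance is a degree-2 polynomial in the outsourced values, so it falls squarely within the scheme's scope:

```python
def variance(xs):
    """Sample variance written as a quadratic polynomial in the inputs:
    Var = sum(x_i^2)/n - (sum(x_i)/n)^2, degree 2 in each x_i."""
    n = len(xs)
    s1 = sum(xs)                  # the linear monomials
    s2 = sum(x * x for x in xs)   # the quadratic monomials
    return s2 / n - (s1 / n) ** 2
```

In the outsourced setting, the server would evaluate such a polynomial over the stored data and return the result together with a short correctness proof; here we only show why common statistics fit the quadratic class.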

**Goran Doychev**, *PhD Student, Instituto IMDEA Software*

Side channels can often be eliminated easily, yet rarely are, because the cost of doing so is considerable. In practice, users are left with the task of finding a good trade-off between security and cost. In this talk, I present ongoing work on resolving this trade-off, using economic reasoning to aid the decision-making process and quantitative information-flow analysis to obtain security guarantees for concrete implementations. We model the problem as a game between a user and an adversary, whose solution gives an optimal protection strategy for the user. Moreover, we improve previously known bounds on the probability of successful attacks, which results in tangible economic savings for users seeking formal protection for their systems. In a practical study, we demonstrate how a trade-off between security and cost can be obtained for countermeasures against timing attacks in the OpenSSL RSA implementation.
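A minimal sketch of the economic model, with invented costs and success probabilities standing in for the quantities the analysis would actually derive:

```python
# Toy security/cost trade-off: the user picks a countermeasure and pays
# its deployment cost plus the expected loss from a successful attack;
# the optimal strategy minimizes this total expected cost.
# (All names and numbers below are hypothetical.)

COUNTERMEASURES = {
    # name: (deployment_cost, attack_success_probability)
    "none":          (0.0, 0.9),
    "bucketing":     (5.0, 0.1),
    "constant_time": (20.0, 0.0),
}
LOSS_IF_BREACHED = 100.0

def expected_cost(cost, p_success):
    return cost + p_success * LOSS_IF_BREACHED

best = min(COUNTERMEASURES, key=lambda c: expected_cost(*COUNTERMEASURES[c]))
# "bucketing" wins here: 5 + 0.1 * 100 = 15, versus 90 for "none"
# and 20 for "constant_time"
```

The quantitative information-flow analysis enters by supplying sound upper bounds on the attack success probabilities, which is what makes the user's side of this game well-founded.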

**Federico Olmedo**, *PhD Student, Instituto IMDEA Software*

The "verified security" methodology is an emerging approach to build high assurance proofs about security properties of computer systems. Computer systems are modeled as probabilistic programs and one relies on rigorous program semantics techniques to prove that they comply with a given security goal. In particular, it advocates the use of interactive or automated theorem provers to build fully formal machine-checked versions of these security proofs.

The verified security methodology has proved successful in modeling and reasoning about several standard security notions in the area of cryptography. However, it has fallen short of covering an important class of approximate, quantitative security notions. This class comprises prominent notions such as indifferentiability, which enables securely replacing an idealized component of a system with a concrete implementation, and differential privacy, a notion of privacy-preserving data mining that has received a great deal of attention in the last few years.

In this talk I will present the advances made in my PhD thesis towards incorporating the above-mentioned class of security notions into the scope of verified security. In particular, I will present a quantitative version of a probabilistic relational Hoare logic and demonstrate its utility for this purpose.

**Ilya Sergey**, *Post-doctoral Researcher, Instituto IMDEA Software*

Since the mid '80s, compiler writers for functional languages (especially lazy ones) have been writing papers about identifying and exploiting thunks and lambdas that are used only once. However, it has proved difficult to achieve both power and simplicity in practice. In my talk, I will describe a new, modular analysis for a higher-order language, which is both simple and effective, and present measurements of its use in a full-scale, state-of-the-art optimising compiler. The analysis finds many single-entry thunks and one-shot lambdas and enables a number of program optimisations.

**Miriam Garcia**, *PhD Student, Instituto IMDEA Software*

I will present a general framework for quantitative predicate abstraction for the stability analysis of hybrid systems. Predicate abstraction constructs a finite abstract system from a concrete system, such that the former simulates the latter. Simulation relations preserve several discrete-time properties, but preserving stability requires additional continuity constraints. I will propose a quantitative version of predicate abstraction and provide a formal connection between the abstract system and the concrete hybrid system by means of continuous simulations. Finally, I will instantiate the general framework for the class of piecewise linear dynamical systems and illustrate it on an example.

**Benedikt Schmidt**, *Post-doctoral Researcher, Instituto IMDEA Software*

Computer-aided verification provides effective means of analyzing the security of cryptographic primitives. However, it has remained a challenge to achieve fully automated analyses yielding guarantees that hold against computational (rather than symbolic) attacks. This paper meets this challenge for public-key encryption schemes built from trapdoor permutations and hash functions. Using a novel combination of techniques from computational and symbolic cryptography, we present proof systems for analyzing the chosen-plaintext and chosen-ciphertext security of such schemes in the random oracle model. Building on these proof systems, we develop a toolset that bundles together fully automated proof and attack finding algorithms. We use this toolset to build a comprehensive database of encryption schemes that records attacks against insecure schemes, and proofs with concrete bounds for secure ones.

**Francois Dupressoir**, *Post-doctoral Researcher, Instituto IMDEA Software*

We present a computer-aided framework for proving concrete security bounds for cryptographic machine code implementations. The front-end of the framework is an interactive verification tool that extends the EasyCrypt framework to reason about relational properties of C-like programs extended with idealised probabilistic operations, in the style of code-based security proofs. The framework also incorporates an extension of the CompCert certified compiler with support for trusted libraries that provide complex arithmetic calculations or instantiate idealised components such as sampling operations. This certified compiler allows us to carry the security guarantees established at the source level down to executable code, and is also instrumented to detect when compilation may interfere with side-channel countermeasures deployed in the source code.

We demonstrate the applicability of the framework with the RSA-OAEP encryption scheme, as standardized in PKCS#1 v2.1. The outcome is a rigorous analysis of an adversary's advantage in breaking the security of assembly implementations of the algorithms specified by the standard. The example also provides two contributions of independent interest: it is the first application of computer-aided cryptographic tools to real-world security, and the first application of CompCert to cryptographic software.

**Francesco Alberti**, *PhD Student, Università della Svizzera Italiana, Lugano, Switzerland*

Abstraction (in its various forms) is a powerful, established technique in model checking; still, where unbounded data structures are concerned, it cannot always cope with divergence phenomena in a satisfactory way. Acceleration is an approach widely used to avoid divergence, but it has been applied mostly to integer programs. In this talk we will address the problem of accelerating transition relations for unbounded arrays. We will show how to compute accelerations in this domain; we will then show how to adapt the so-called "monotonic abstraction" technique to efficiently handle the complex formulae with nested quantifiers generated by the acceleration preprocessing. Notably, our technique can easily be plugged into abstraction/refinement loops and strongly contributes to avoiding divergence, as shown by experiments conducted with the MCMT model checker.
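For intuition, here is acceleration in its classical integer setting (the talk's contribution is its extension to arrays); the transition and constants are illustrative:

```python
# Toy acceleration of the integer transition x' = x + c: instead of
# letting an abstraction/refinement loop unroll the transition step by
# step (and possibly diverge), we replace it by its closed-form k-fold
# composition x' = x + k * c.

def post_unrolled(x, c, k):
    """Effect of taking the transition k times, one step at a time."""
    for _ in range(k):
        x = x + c
    return x

def post_accelerated(x, c, k):
    """Accelerated (closed-form) effect of the same k steps."""
    return x + k * c
```

For arrays, the analogous closed forms involve quantification over indices, which is what makes the interaction with monotonic abstraction necessary.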

This is joint work with S. Ghilardi and N. Sharygina.

**Umer Liqat**, *PhD Student, Instituto IMDEA Software*

Energy consumption analysis of embedded programs requires the analysis of low-level program representations. This is challenging because the gap between the high-level program structure and the low-level energy models needs to be bridged. Here, we describe techniques for recreating the structure of low-level programs and transforming these into Horn clauses in order to make use of the CiaoPP resource analysis framework.

Our analysis framework, which makes use of an energy model we produce for the underlying hardware, characterizes the energy consumption of the program and returns energy formulae parametrised by the size of the input data. We have performed an initial experimental assessment and obtained encouraging results when comparing the statically inferred formulae to direct energy measurements from the hardware running a set of benchmarks. Static energy estimation has applications in program optimization and enables greater energy awareness in software development.
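A toy version of the size-parametric output, with hypothetical per-iteration costs standing in for a real hardware energy model:

```python
# A loop over n elements costs a fixed overhead plus a per-iteration
# cost; the analysis returns the closed form E(n) = E_BASE + n * E_ITER.
E_BASE, E_ITER = 2.0, 0.5  # hypothetical energy units

def energy_formula(n):
    """Statically inferred closed-form estimate, parametric in input size n."""
    return E_BASE + n * E_ITER

def measured_energy(n):
    """'Measurement' stand-in: accumulate the cost of each iteration."""
    total = E_BASE
    for _ in range(n):
        total += E_ITER
    return total
```

The experimental assessment in the abstract amounts to comparing the two functions above, except that the second is a physical measurement of the hardware rather than a simulation.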

**Boris Köpf**, *Assistant Research Professor, Instituto IMDEA Software*

Side-channel attacks recover secret inputs to programs from physical characteristics of computations, such as execution time or power consumption. CPU caches are a particularly rich source of side channels because their behavior heavily impacts execution time and can be monitored in various ways.

CacheAudit is a novel platform that enables the automatic, static analysis of such cache side channels; it takes as input a program binary and a cache configuration, and it derives formal, quantitative security guarantees for a comprehensive set of side-channel adversaries, namely those based on observing cache states, traces of hits and misses, and execution times.
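As a toy version of such a guarantee (the sizes are illustrative, and the bound is computed here by enumeration rather than by static analysis): a secret-dependent table lookup leaks at most the log of the number of distinct cache observations it can produce.

```python
from math import log2

# A secret-dependent lookup touches one cache line; an adversary
# observing the final cache state can distinguish at most as many
# secrets as there are distinct lines touched, i.e. it learns at most
# log2(#distinct observations) bits about the secret.

LINE_SIZE = 64    # bytes per cache line
ENTRY_SIZE = 4    # bytes per table entry

def leaked_bits(num_secrets):
    observations = {(secret * ENTRY_SIZE) // LINE_SIZE
                    for secret in range(num_secrets)}
    return log2(len(observations))

# 256 secrets at 16 entries per line touch 16 distinct lines: 4 bits leaked
```

CacheAudit derives such bounds statically from the binary and a cache configuration, and for richer adversaries (traces of hits and misses, execution times) than the final-state observer sketched here.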

In this talk I will present the theoretical foundations and the architecture of CacheAudit, and the results we obtain when analyzing library implementations of symmetric cryptosystems such as AES or Salsa. I will conclude with an outlook on how CacheAudit can be used for engineering certified proofs of security of leakage-resilient cryptosystems on platforms with concurrency and caches.