Software Seminar Series (S3) - Fall 2015

Damir Valput

Tuesday, December 15, 2015

Damir Valput, PhD Student, IMDEA Software Institute

Oscillation Bounded Behaviors for Pushdown Automata

Abstract:

I will present a measure, called oscillation, for behaviors of pushdown automata, which are finite-state automata with auxiliary stack storage. The key idea is to map behaviors onto well-parenthesized words. I then partition the class of well-parenthesized words according to the nesting structure of their well-parenthesized subwords. Since pushdown automata and context-free grammars both recognize context-free languages, we can compare oscillation with known measures for parse trees of a context-free grammar. I establish that the notion of dimension for parse trees and the oscillation of runs are linearly related.
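
As a toy illustration of the kind of combinatorial measure involved (a sketch only, not the talk's definition of oscillation), the following Python snippet computes the nesting depth of a well-parenthesized word, reading pushes as opening and pops as closing parentheses:

    # Toy sketch (not from the talk): nesting depth of a well-parenthesized
    # word, one simple measure of the kind the talk relates to runs of a
    # pushdown automaton. Pushes are written '(' and pops ')'.
    def nesting_depth(word: str) -> int:
        depth = best = 0
        for symbol in word:
            if symbol == '(':
                depth += 1
                best = max(best, depth)
            elif symbol == ')':
                depth -= 1
                if depth < 0:
                    raise ValueError("not well parenthesized")
        if depth != 0:
            raise ValueError("not well parenthesized")
        return best

    print(nesting_depth("(()(()))"))  # 3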


Germán Delbianco

Friday, December 11, 2015

Germán Delbianco, PhD Student, IMDEA Software Institute

Rewriting History: A Simple Technique for Establishing Linearizability a Posteriori

Abstract:

In this talk, we will present an ongoing effort towards employing a concurrent separation-style logic (FCSL) for reasoning about fine-grained concurrent data structures, whose verification traditionally requires a dedicated meta-theory of speculations or, equivalently, prophecy variables. We introduce re-sortable histories as a novel abstraction that can be implemented in FCSL off the shelf, without changes to the underlying logical framework, and that makes it possible to capture the notion of speculative linearization points in a mechanized concurrency verification framework.

We will illustrate our approach by presenting the first formal specification and proof of Jayanti's wait-free concurrent snapshot construction. Furthermore, we will comment on a few interesting details about its mechanization in the Coq proof assistant and discuss pitfalls and the challenges ahead.
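
To make concrete what choosing linearization points amounts to, here is a brute-force linearizability check for a single integer register in Python (a toy sketch, unrelated to the FCSL/Coq development): it searches for a total order of the operations that respects real-time precedence and is sequentially legal.

    # Toy sketch: brute-force linearizability check for one integer register.
    from itertools import permutations

    # An operation is a dict with kind ('read'/'write'), value, and its
    # real-time interval [start, end].
    def respects_real_time(order):
        # op a may precede op b only if b did not finish before a started
        return all(not (b["end"] < a["start"])
                   for i, a in enumerate(order) for b in order[i + 1:])

    def sequentially_legal(order, init=0):
        current = init
        for op in order:
            if op["kind"] == "write":
                current = op["value"]
            elif op["value"] != current:  # a read must return the latest write
                return False
        return True

    def linearizable(history):
        return any(respects_real_time(o) and sequentially_legal(o)
                   for o in permutations(history))

    history = [
        {"kind": "write", "value": 1, "start": 0, "end": 4},
        {"kind": "read",  "value": 1, "start": 1, "end": 2},  # overlaps the write
    ]
    print(linearizable(history))  # True: linearize the write before the read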


Raúl Alborodo

Thursday, December 3, 2015

Raúl Alborodo, PhD Student, IMDEA Software Institute

A Model-driven Methodology for Generating and Verifying Monitor-based and CSP-based Java Code

Abstract:

Model-driven methodologies can help developers create more reliable software. In previous work, we have advocated a model-driven approach for the analysis and design of concurrent, safety-critical systems. However, to take full advantage of those techniques, they must be supported by code generation schemes for concrete programming languages. Ideally, this translation should be traceable, automated, and should support verification of the generated code. In this talk we consider the problem of generating concurrent Java code from high-level interaction models. Our proposal includes an extension of JML that allows shared resources to be specified as Java interfaces, and several translation patterns for (partial) generation of Java code. The template thus obtained is JML-annotated Java, using either the JCSP library or the Priority Monitors library, with proof obligations that help with both traceability and verification. Finally, we present an experiment in code verification using the KeY tool.
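
For illustration only, the following Python sketch shows the monitor style of shared resource that the generated code targets, i.e. operations protected by a single lock and guarded by conditions. The methodology itself produces JML-annotated Java over JCSP or Priority Monitors; the bounded buffer below is a hypothetical example resource, not generated code.

    # Toy monitor-style shared resource (Python stand-in for the generated Java).
    import threading

    class BoundedBuffer:
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.items = []
            self.lock = threading.Condition()

        def put(self, item):
            with self.lock:
                while len(self.items) >= self.capacity:  # conditional precondition
                    self.lock.wait()
                self.items.append(item)
                self.lock.notify_all()

        def take(self):
            with self.lock:
                while not self.items:                    # wait until non-empty
                    self.lock.wait()
                item = self.items.pop(0)
                self.lock.notify_all()
                return item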


Carmela Troncoso

Friday, November 27, 2015

Carmela Troncoso, Researcher, IMDEA Software Institute

Do dummies pay off? Limits of dummy traffic protection in anonymous communication systems

Abstract:

Anonymous communication systems ensure that correspondence between senders and receivers cannot be inferred with certainty. However, when patterns are persistent, observations from anonymous communication systems enable the reconstruction of user behavioral profiles. Protection against profiling can be enhanced by adding dummy messages, generated by users or by the anonymity provider, to the communication. In this work we study the limits of the protection provided by this countermeasure. We propose an analysis methodology, based on solving a least squares problem, that makes it possible to characterize the adversary's profiling error with respect to the user behavior, the anonymity provider behavior, and the dummy strategy. Focusing on the particular case of a timed pool mix, we show how, given a privacy target, this performance analysis can be used to design optimal dummy strategies that protect the objective.
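
A minimal numerical sketch of such a least-squares adversary is shown below (the model, numbers, and dummy strategy here are illustrative assumptions, not the paper's): per-round sender activity is regressed against per-round receiver counts, and provider-generated dummies act as noise that degrades the recovered profiles.

    # Minimal sketch of a least-squares profiling adversary against a mix
    # with provider-generated dummy messages.
    import numpy as np

    rng = np.random.default_rng(0)
    n_senders, n_receivers, n_rounds, dummies_per_round = 5, 8, 2000, 10

    # True behavioral profiles: row s is sender s's distribution over receivers.
    profiles = rng.dirichlet(np.ones(n_receivers), size=n_senders)

    X = rng.poisson(3, size=(n_rounds, n_senders))        # messages sent per round
    Y = np.zeros((n_rounds, n_receivers))
    for r in range(n_rounds):
        for s in range(n_senders):
            Y[r] += rng.multinomial(X[r, s], profiles[s])  # real traffic
        Y[r] += rng.multinomial(dummies_per_round,         # dummy traffic
                                np.ones(n_receivers) / n_receivers)

    estimate, *_ = np.linalg.lstsq(X, Y, rcond=None)       # rows ~ estimated profiles
    print("mean absolute profiling error:", np.abs(estimate - profiles).mean())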


Pablo Cañones

Tuesday, November 17, 2015

Pablo Cañones, PhD Student, IMDEA Software Institute

Security of Cache Replacement Policies

Abstract:

Cache memories are important for the performance of computers, since they avoid retrieving data from main memory on every access, and much work has been done over the years to study the performance of different replacement strategies for the data held in caches. However, caches are also an attractive target for side-channel attacks, which in some circumstances lead to serious security problems. The leakage of information through caches is still under study, and there are not many established results yet. This talk covers our ongoing work on this topic, carried out since September, and presents some of the ideas we have explored in these months. We motivate the security dangers of caches and establish a theory for analysing the possible leakage of information: what a realistic cache attacker can do and what a good measure of optimality for probing should be. We present a new algorithm that probes a cache in this optimal way and returns a realistic bound on the information leaked. We apply this theory to the cache state left by a given program and study the security and vulnerability to probing of the three most common replacement policies: FIFO, LRU and PLRU. Finally, we present results from probing an AES-256 implementation under different cache sizes and replacement policies, and give some initial conclusions about the results we expect to find with further study.
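
As a toy illustration of what a probe observes (a sketch under simplified assumptions, not the talk's attacker model or leakage measure), the following Python snippet simulates FIFO and LRU replacement in a single cache set and counts how many attacker lines survive a victim's accesses:

    # Toy sketch: one cache set under FIFO or LRU replacement, and a
    # prime+probe style measurement counting how many attacker lines
    # survive the victim's accesses.
    def access(policy, associativity, lines, block):
        """Access one block, mutating the set contents; return True on a hit."""
        if block in lines:
            if policy == "LRU":
                lines.remove(block)
                lines.append(block)          # a hit refreshes the line's age
            return True                      # under FIFO a hit changes nothing
        if len(lines) == associativity:
            lines.pop(0)                     # evict the oldest line
        lines.append(block)
        return False

    attacker = [f"A{i}" for i in range(4)]
    for policy in ("FIFO", "LRU"):
        for victim_accesses in range(3):
            lines = []
            for b in attacker + [f"V{i}" for i in range(victim_accesses)]:
                access(policy, 4, lines, b)  # prime the set, then the victim runs
            probe_hits = sum(access(policy, 4, lines, b) for b in attacker)
            print(policy, victim_accesses, "victim accesses ->", probe_hits, "probe hits")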


Miguel Ambrona

Tuesday, October 27, 2015

Miguel Ambrona, PhD Student, IMDEA Software Institute

Automated Unbounded Analysis of Cryptographic Constructions in the Generic Group Model

Abstract:

Security in the Generic Group Model is a standard requirement for new hardness assumptions in (bilinear) groups. Recently, the Generic Group Model has also been increasingly used to prove the security of new cryptographic constructions such as algebraic MACs or structure-preserving signature schemes. Since pen-and-paper proofs in the Generic Group Model are usually tedious and hard to write and verify, they might contain errors that go unnoticed.

We address this problem by developing a new method to automatically prove security statements in the Generic Group Model as they occur in actual papers. Existing tools like the one by Barthe et al. (CRYPTO'14) and an extended version by Barthe et al. (PKC'15) have severe limitations that prevent them from achieving this goal: the first tool cannot deal with oracles that take handle variables, as required for example for structure-preserving signature schemes; the second tool only performs proofs with respect to a bounded number of oracle queries, so standard (unbounded) security definitions are out of scope.

To lift these limitations, we develop a completely new approach to prove the security of cryptographic constructions in the Generic Group Model. We start by defining (i) a general language to describe security definitions, (ii) a class of logical formulas that characterize how an adversary can win, and (iii) a translation from security definitions to such formulas. Our master theorem relates the security of the construction to the existence of a solution for the associated logical formula. Next, we define a constraint solving algorithm that proves the security of a construction by proving the absence of solutions.

Finally, we report on the fully automated gga∞ tool that implements the algorithm. We demonstrate the effectiveness and performance of the gga∞ tool on a significant number of examples from the literature.
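
As a minimal illustration of the style of symbolic reasoning behind such proofs (a toy check, not the gga∞ tool or its constraint language), the following Python/sympy sketch asks whether a generic adversary holding handles to g, g^x, g^y can produce a given target group element, by testing whether the target exponent is a linear combination of the exponents it holds:

    # Toy sketch of generic-group-style symbolic reasoning: a generic adversary
    # with handles to g^1, g^x, g^y can only produce g^t when t is a linear
    # combination of 1, x and y.
    import sympy as sp

    x, y = sp.symbols('x y')
    available = [sp.Integer(1), x, y]   # exponents behind the adversary's handles

    def generic_attack_exists(target):
        coeffs = sp.symbols(f'c0:{len(available)}')
        combo = sum(c * p for c, p in zip(coeffs, available))
        # An attack exists iff some constants make target - combo vanish identically.
        equations = sp.Poly(target - combo, x, y).coeffs()
        return bool(sp.solve(equations, coeffs))

    print(generic_attack_exists(x * y))      # False: CDH-style target is out of reach
    print(generic_attack_exists(x + 2 * y))  # True: trivially computable from handles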


Antonio Nappa

Thursday, October 15, 2015

Antonio Nappa, PhD Student, IMDEA Software Institute

Active Detection Techniques for Malicious Infrastructure Detection

Abstract:

Cybercriminals use different types of geographically distributed servers to run their operations, such as C&C servers for managing their malware, exploit servers for distributing it, payment servers for monetization, and redirectors for anonymity. Identifying the server infrastructure used by a cybercrime operation is fundamental for defenders, as it enables take-downs that can disrupt the operation and is a critical step towards identifying the criminals behind it.

In this presentation we will show advances in malicious server infrastructure analysis and detection, addressing two fundamental problems: (1) attribution, that is, identifying which servers belong to which operation, and (2) enumeration, that is, detecting all the servers of a given operation. We analyzed more than 500 exploit servers in the wild over a period of 11 months, collecting over time how exploit servers are configured, which exploits they use, and what malware they distribute, and grouping servers with similar configurations into operations. Our operational analysis reveals that although individual exploit servers have a median lifetime of 16 hours, long-lived operations exist that run for several months. To sustain long-lived operations, miscreants are turning to the cloud, with 60% of the exploit servers hosted by specialized cloud hosting services. We also observe operations that distribute multiple malware families, and pay-per-install affiliate programs that manage exploit servers for their affiliates to convert traffic into installations. Furthermore, we expand our analysis beyond exploit servers: to this end we have developed tools, codenamed CyberProbe and AutoProbe, that can scan the Internet looking for any kind of malicious server. We have used CyberProbe and AutoProbe to identify different malicious operations in the wild. Our tools achieve 4 times better coverage than existing online services that report malicious servers.
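
As a toy illustration of active probing (a sketch only; CyberProbe and AutoProbe work differently and at Internet scale), the Python snippet below sends a benign HTTP request to a candidate host and matches the reply against a hypothetical response fingerprint:

    # Toy sketch of active probing: fetch a reply from a candidate host and
    # match it against response fingerprints. The fingerprint below is
    # hypothetical; real ones are derived from observed malicious-server replies.
    import re
    import socket

    FINGERPRINTS = {
        "example-exploit-kit": re.compile(rb"Server: nginx.*X-Example-Panel", re.S),
    }

    def probe(host, port=80, timeout=3):
        request = ("GET / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % host).encode()
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.sendall(request)
                reply = b""
                try:
                    while chunk := sock.recv(4096):
                        reply += chunk
                except socket.timeout:
                    pass                     # keep whatever arrived before the timeout
        except OSError:
            return None                      # unreachable host
        return [name for name, pattern in FINGERPRINTS.items() if pattern.search(reply)]

    print(probe("example.org"))  # [] unless a fingerprint matches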


Luca Nizzardo

Tuesday, October 6, 2015

Luca Nizzardo, PhD Student, IMDEA Software Institute

Programmable Hash Functions go Private: Constructions and Applications to (Homomorphic) Signatures with Shorter Public Keys

Abstract:

We introduce the notion of asymmetric programmable hash functions (APHFs, for short), which adapts Programmable Hash Functions, introduced by Hofheinz and Kiltz at Crypto 2008, with two main differences. First, an APHF works over bilinear groups, and it is asymmetric in the sense that, while only secretly computable, it admits an isomorphic copy which is publicly computable. Second, in addition to the usual programmability, APHFs may have an alternative property that we call programmable pseudorandomness. In a nutshell, this property states that it is possible to embed a pseudorandom value as part of the function's output, akin to a random oracle. In spite of the apparent limitation of being only secretly computable, APHFs turn out to be surprisingly powerful objects. We show that they can be used to generically implement both regular and linearly-homomorphic signature schemes in a simple and elegant way. More importantly, when instantiating these generic constructions with our concrete realizations of APHFs, we obtain: (1) the first linearly-homomorphic signature (in the standard model) whose public key is sub-linear in both the dataset size and the dimension of the signed vectors; (2) short signatures (in the standard model) whose public key is shorter than those by Hofheinz-Jager-Kiltz from Asiacrypt 2011, and essentially the same as those by Yamada, Hanaoka, and Kunihiro (CT-RSA 2012).


Ratan Lal

Tuesday, September 22, 2015

Ratan Lal, PhD Student, IMDEA Software Institute

Bounded error flowpipe computation of parameterized linear systems

Abstract:

We consider the problem of computing a bounded error approximation of the solution, over a bounded time horizon [0, T], of a parameterized linear system ẋ(t) = Ax(t), where A is constrained by a compact polyhedron Ω. Our method consists of sampling the time domain [0, T] as well as the parameter space Ω and constructing a continuous piecewise bilinear function which interpolates the solution of the parameterized system at these sample points. More precisely, given an eps > 0, we compute a sampling interval δ > 0 such that the piecewise bilinear function obtained from the sample points is within eps of the original trajectory. We present experimental results which suggest that our method is scalable.
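
The following Python sketch illustrates the interpolation idea under simplifying assumptions (a 2x2 damped-oscillator matrix with a single scalar parameter, fixed sample steps, and an empirical rather than guaranteed error estimate); it is not the talk's algorithm or error-bound computation.

    # Numerical sketch: sample time and a one-dimensional parameter set, store
    # exact solutions at the grid nodes via the matrix exponential, and
    # reconstruct trajectories in between by bilinear interpolation in (t, a).
    import numpy as np
    from scipy.linalg import expm

    def A(a):                     # parameterized system matrix, a in Omega = [1, 2]
        return np.array([[0.0, 1.0], [-a, -0.5]])

    x0 = np.array([1.0, 0.0])
    T, ts = 5.0, np.linspace(0.0, 5.0, 26)   # time samples, step delta = 0.2
    params = np.linspace(1.0, 2.0, 11)       # parameter samples over Omega

    # grid[i, j] = exact solution at time ts[i] for parameter params[j]
    grid = np.array([[expm(A(a) * t) @ x0 for a in params] for t in ts])

    def bilinear(t, a):
        i = max(min(np.searchsorted(ts, t) - 1, len(ts) - 2), 0)
        j = max(min(np.searchsorted(params, a) - 1, len(params) - 2), 0)
        u = (t - ts[i]) / (ts[i + 1] - ts[i])
        v = (a - params[j]) / (params[j + 1] - params[j])
        return ((1 - u) * (1 - v) * grid[i, j] + u * (1 - v) * grid[i + 1, j]
                + (1 - u) * v * grid[i, j + 1] + u * v * grid[i + 1, j + 1])

    # estimate the interpolation error at random (t, a) points
    rng = np.random.default_rng(0)
    errs = [np.linalg.norm(bilinear(t, a) - expm(A(a) * t) @ x0)
            for t, a in zip(rng.uniform(0, T, 200), rng.uniform(1, 2, 200))]
    print("max observed error:", max(errs))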

