
Software Seminar Series (S3) - Spring 2014

Miriam Garcia

Tuesday, June 17, 2014

Miriam Garcia, PhD Student, IMDEA Software Institute

Counterexample guided abstraction refinement for stability analysis of hybrid systems

Abstract:

Hybrid systems are dynamical systems exhibiting mixed discrete and continuous behaviours. Such behaviours appear naturally in embedded control systems, where one or more embedded processors executing discrete behaviours interact with a continuous physical world. Stability is a fundamental property in control system design: it assures that small perturbations in the initial state or input to the system result in only small variations in the future behaviour. The classical methods for stability analysis rely on exhibiting a particular kind of function, called a Lyapunov function. There exist automated methods for computing Lyapunov functions, which choose a template with certain parameters as a candidate Lyapunov function and use constraint solvers to search for the parameters. However, the main difficulty with these template-based methods is that choosing the right template requires user ingenuity. Further, a failed search for parameters does not provide any insight into the potential reasons for instability, nor does it guide the choice of subsequent templates.
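
To make the template idea concrete, here is a minimal sketch (my own illustration, not the talk's method or the AVERIST tool) for the linear special case: for dynamics dx/dt = A x, the quadratic template V(x) = x^T P x is a Lyapunov function if P > 0 and A^T P + P A < 0, a parameter search that a semidefinite solver such as cvxpy can carry out.

```python
import cvxpy as cp
import numpy as np

# Hypothetical stable linear dynamics dx/dt = A x (matrix chosen for illustration).
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])
n = A.shape[0]

# Template: V(x) = x^T P x with unknown symmetric parameter matrix P.
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6

# Lyapunov conditions as linear matrix inequalities:
#   P > 0              (V is positive definite)
#   A^T P + P A < 0    (V decreases along all trajectories)
constraints = [P - eps * np.eye(n) >> 0,
               A.T @ P + P @ A + eps * np.eye(n) << 0]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # "optimal" means a certificate was found
print(P.value)       # the Lyapunov certificate
```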

We propose an alternative approach for stability analysis of hybrid systems, namely, a counterexample guided abstraction refinement (CEGAR) framework. Here, an abstraction of the original (concrete) system is constructed and model-checked, which either results in a stability proof or an abstract counterexample indicating a potential reason for instability. In the latter case, the abstract counterexample is analysed to either infer a bug in the concrete system or construct a finer abstraction. This approach addresses the shortcomings of the template-based search through counterexample-guided refinement.
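
The overall loop can be summarised in a short sketch (a generic CEGAR skeleton under my own naming, not the AVERIST implementation; model_check, validate and refine are hypothetical callables):

```python
from typing import Any, Callable, Optional, Tuple

def cegar(abstraction: Any,
          model_check: Callable[[Any], Optional[Any]],
          validate: Callable[[Any], bool],
          refine: Callable[[Any, Any], Any]) -> Tuple[str, Optional[Any]]:
    """Generic CEGAR loop: model-check the abstraction; if an abstract
    counterexample appears, either confirm it on the concrete system or
    use it to build a finer abstraction and retry."""
    while True:
        cex = model_check(abstraction)          # None = stability proven
        if cex is None:
            return "stable", None
        if validate(cex):                       # counterexample is real
            return "potentially unstable", cex
        abstraction = refine(abstraction, cex)  # spurious: refine
```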

Our CEGAR algorithm has been implemented in the tool AVERIST, and we report experimental evaluations on some examples demonstrating the feasibility of the approach.


Julian Samborski-Forlese

Tuesday, June 10, 2014

Julian Samborski-Forlese, PhD Student, IMDEA Software Institute

Bounded Model Checking of Regular Linear Temporal Logic

Abstract:

Bounded model checking (BMC) is an effective algorithmic method for verifying finite-state systems against temporal specifications expressed in some linear temporal logic, typically LTL. The basic idea of BMC is to encode the existence of a counterexample trace of bounded length as a satisfiability formula, and to proceed by incrementally increasing the bound if no counterexample is found.
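
As a toy illustration of the encoding (my own example, using a one-bit toggle system and a simple reachability property rather than RLTL), the unrolling up to a bound K can be phrased directly as a SAT query, e.g. with the z3 solver:

```python
from z3 import Bool, Not, Or, Solver, sat

K = 5  # unrolling bound

# One boolean state variable per step 0..K.
x = [Bool(f"x_{i}") for i in range(K + 1)]

init = Not(x[0])                                # initially x is false
trans = [x[i + 1] != x[i] for i in range(K)]    # transition: x toggles
bad = Or([x[i] for i in range(K + 1)])          # violation: x ever holds

s = Solver()
s.add(init, *trans, bad)
if s.check() == sat:        # sat -> a counterexample of length <= K exists
    print(s.model())
```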

In spite of its wide acceptance, LTL has limited expressivity: some important ω-regular properties cannot be expressed in it. For example, the property that p holds at every even position of a trace is ω-regular but not LTL-definable. RLTL is a temporal logic that overcomes this limitation by fusing LTL with regular expressions, extending the expressive power of LTL to all ω-regular languages.

In this talk, I will present the first study on bounded model checking of RLTL specifications, providing two semantic translations from RLTL into SAT: one based on non-deterministic Büchi automata, and a more efficient one based on alternating hesitant automata, in both cases with a symbolic representation of transitions. Both encodings require a number of variables linear in the number of states and generate SAT formulas of linear size with respect to the unrolling bound. This implies that the SAT formula generated from the alternating automaton is exponentially more succinct than the classical non-deterministic construction.


Nataliia Stulova

Tuesday, May 27, 2014

Nataliia Stulova, PhD Student, IMDEA Software Institute

Assertion-based Debugging of Higher-Order (C)LP Programs

Abstract:

Higher-order constructs extend the expressiveness of first-order (Constraint) Logic Programming ((C)LP) both syntactically and semantically. At the same time, assertions have long been in use in (C)LP systems, helping programmers detect errors and validate programs. However, these assertion-based extensions have not been well integrated with higher-order constructs to date. This talk contributes to filling this gap by extending the assertion-based approach to error detection and program validation to the higher-order context within (C)LP. We propose an extension of properties and assertions as used in (C)LP in order to fully describe arguments that are predicates; the extension makes the full power of the assertion language available when describing higher-order arguments. We provide syntax and semantics for (higher-order) properties and assertions, as well as for programs which contain such assertions, including the notions of error and partial correctness. We also discuss several alternatives for performing run-time checking of such programs.
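
As a loose analogue (in Python, not the (C)LP assertion language, and with names of my own invention), the idea of run-time checking a higher-order argument can be pictured as wrapping a predicate-valued parameter so that a property of its behaviour is checked at each call:

```python
from typing import Callable

def checked_pred(pred: Callable[[int], bool]) -> Callable[[int], bool]:
    """Wrap a predicate-valued argument so each call is checked at run time."""
    def wrapper(x: int) -> bool:
        result = pred(x)
        assert isinstance(result, bool), "predicate must return a boolean"
        return result
    return wrapper

def count_if(pred: Callable[[int], bool], xs: list) -> int:
    pred = checked_pred(pred)   # run-time check on the higher-order argument
    return sum(1 for x in xs if pred(x))

print(count_if(lambda x: x % 2 == 0, [1, 2, 3, 4]))  # -> 2
```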


Ilya Sergey

Tuesday, May 13, 2014

Ilya Sergey, Post-doctoral Researcher, IMDEA Software Institute

Hoare-Style Reasoning about Non-Blocking Concurrent Algorithms

Abstract:

How would one specify the correct behaviour of an optimistic fine-grained concurrent data structure? For more than twenty-five years, Herlihy and Wing's notion of linearizability has served as the gold standard of such a correctness criterion for concurrent objects.

However, even though linearizability works well when checking whether all operations of an object can be seen as *atomic* ones, it does not answer the question of what *stable* specifications should be given to these operations, so that clients can reason about them in the style of Hoare logic in the presence of interference. Nor does it answer which specifications can be thought of as *canonical* ones.

In my talk, I will present ongoing work that makes a tentative attempt to provide canonical Hoare-style specifications to the operations of a series of non-blocking data structures. The key idea of our approach is to instantiate the recently developed framework of Fine-Grained Concurrent Separation Logic with a novel sort of resource: subjective histories. By considering object histories as timestamp-indexed series of "atomic deltas", we provide a simple and principled way to specify and prove *granularity abstraction*: sophisticated non-blocking algorithms are given the same specifications as their simplified coarse-grained counterparts, which makes the former as convenient as the latter for client-side reasoning.
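
A toy illustration of the history idea (my own sketch, not the talk's logic): a counter whose specification is phrased through a history of timestamp-indexed deltas, so that a sophisticated lock-free implementation and this simple coarse-grained one could be given the same history-based spec.

```python
import itertools
import threading

class HistoryCounter:
    """A counter specified through a history of timestamp-indexed deltas:
    its value is the sum of all recorded deltas. A lock-free version built
    on compare-and-swap could be given the same history-based spec."""
    def __init__(self):
        self._lock = threading.Lock()
        self._ts = itertools.count(1)
        self.history = {}                 # timestamp -> delta

    def incr(self):
        with self._lock:                  # coarse-grained for simplicity
            t = next(self._ts)
            self.history[t] = +1          # record the atomic delta
            return t

    def value(self):
        return sum(self.history.values())

c = HistoryCounter()
c.incr(); c.incr()
print(c.value())                          # -> 2
```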


Burcu Kulahcioglu Ozkan

Tuesday, May 6, 2014

Burcu Kulahcioglu Ozkan, PhD Student, Koç University

Exploiting Synchronization in the Analysis of Shared-Memory Asynchronous Programs

Abstract:

As asynchronous programming becomes more mainstream, program analyses capable of automatically uncovering programming errors are increasingly in demand. Since asynchronous program analysis is computationally costly, current approaches sacrifice completeness and focus on limited sets of asynchronous task schedules that are likely to expose programming errors. These approaches are based on parameterized task schedulers, each of which admits schedules that are variations of a default deterministic schedule. By increasing the parameter value, a larger variety of schedules is explored, at a higher cost. The efficacy of these approaches depends largely on the default deterministic scheduler from which the varying schedules are fashioned.
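
To illustrate the flavour of a parameterized scheduler (an invented toy, not the paper's scheduler): admit only orderings that deviate from a default deterministic FIFO schedule in at most k positions, so that larger k explores more schedules at higher cost.

```python
from itertools import permutations

def bounded_schedules(tasks, k):
    """Yield task orderings that deviate from the default deterministic
    (FIFO) schedule in at most k positions."""
    default = list(tasks)
    for perm in permutations(tasks):
        deviations = sum(1 for a, b in zip(perm, default) if a != b)
        if deviations <= k:
            yield perm

# With k=2, the default order and all single transpositions are admitted.
for schedule in bounded_schedules(["t1", "t2", "t3"], k=2):
    print(schedule)
```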

We find that the limited exploration of asynchronous program behaviors can be made more efficient by designing parameterized schedulers which better match the inherent ordering of program events, e.g., arising from waiting for an asynchronous task to complete. We follow a reduction-based "sequentialization" approach to analyzing asynchronous programs, which leverages existing (sequential) program analysis tools by encoding asynchronous program executions, according to a particular scheduler, as the executions of a sequential program. Analysis based on our new scheduler comes at no greater computational cost, and provides strictly greater behavioral coverage than analysis based on existing parameterized schedulers; we validate these claims both conceptually, with complexity and behavioral-inclusion arguments, and empirically, by discovering actual reported bugs faster with smaller parameter values.


Zorana Bankovic

Tuesday, April 29, 2014

Zorana Bankovic, Post-doctoral Researcher, IMDEA Software Institute

Stochastic vs. Deterministic Scheduling and Allocation. Case study: Optimal Energy-aware Scheduling for XMOS Chips

Abstract:

The most common approach to the problem of optimal task scheduling is to use the expected values of the variables on which the function to be optimized depends, e.g., execution time or energy consumption; in this case we refer to the problem as deterministic scheduling. However, the execution time of a task can in reality vary considerably, for a number of reasons: unknown memory access times, operating system effects that cannot be known in advance, etc. For this reason, it is more accurate to treat execution time, as well as the closely related energy consumption, as a random variable with a corresponding probability density and/or cumulative distribution function. We refer to this group of problems as stochastic scheduling problems.

State-of-the-art results on optimal scheduling for makespan optimization show that in certain situations a deterministic scheduler produces results that deviate significantly from the optimal ones, and that better results can be obtained using stochastic scheduling. In our work we have shown that this is also the case for energy consumption optimization. Our objective is to optimize energy consumption through the scheduling and allocation of a set of tasks running on the multiprocessor/multicore, multithreaded, voltage- and frequency-scalable architectures designed by XMOS.

In the work carried out so far we have assumed that the execution times and energy consumption of different tasks are independent. However, in reality these values are usually dependent, so we are currently studying a way to model this dependence based on copula theory. In this talk we will present the designed scheduler, based on a multiobjective evolutionary algorithm, give a short introduction to copula theory, and mention the results obtained so far.
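
A small numerical illustration of why expected values can mislead (distributions and numbers are mine, not from the talk): of two candidate task placements, the one with the lower expected execution time can still be much worse at high percentiles when its distribution is heavy-tailed, which only a stochastic treatment reveals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Candidate A: light-tailed, mean 10. Candidate B: heavy-tailed lognormal,
# mean ~9.8. A deterministic scheduler (comparing expected values) picks B,
# yet B is far worse at the 95th percentile.
time_a = rng.normal(10.0, 0.5, n)
time_b = rng.lognormal(mean=2.1, sigma=0.6, size=n)

print("means:      ", time_a.mean(), time_b.mean())
print("95th pctile:", np.percentile(time_a, 95), np.percentile(time_b, 95))
```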


Andrea Cerone

Tuesday, April 8, 2014

Andrea Cerone, Post-doctoral Researcher, IMDEA Software Institute

Parametrised Linearisability

Abstract:

Many concurrent libraries are parameterised, meaning that they implement generic algorithms that take another library as a parameter. In such cases, the standard way of stating the correctness of concurrent libraries via linearisability is inapplicable. We generalise linearisability to parameterised libraries and investigate subtle trade-offs between the assumptions that such libraries can make about their environment and the conditions that linearisability has to impose on different types of interactions with it. We prove that the resulting parameterised linearisability is closed under instantiating parameter libraries and composing several non-interacting libraries, and furthermore implies observational refinement. These results allow modularising the reasoning about concurrent programs using parameterised libraries and confirm the appropriateness of the proposed definitions. We illustrate the applicability of our results by proving the correctness of a parameterised library implementing flat combining.
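
As a rough picture of what "parameterised" means here (a Python stand-in of my own, far simpler than flat combining): a generic wrapper whose behaviour, and hence whose correctness as a concurrent object, depends on the library passed to it as a parameter.

```python
import threading

class LockingWrapper:
    """A generic algorithm parameterised by another library (`inner`):
    every operation of the parameter is serialised under one lock."""
    def __init__(self, inner):
        self._inner = inner
        self._lock = threading.Lock()

    def __getattr__(self, name):
        op = getattr(self._inner, name)   # look up the parameter's operation
        def locked(*args, **kwargs):
            with self._lock:              # serialise calls to the parameter
                return op(*args, **kwargs)
        return locked

stack = LockingWrapper([])   # instantiate the parameter with a plain list
stack.append(1)
print(stack.pop())           # -> 1
```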


Marek Zawirski

Tuesday, February 18, 2014

Marek Zawirski, PhD Student, INRIA

From Large-Scale to Extreme-Scale: Extending Causal+ Consistency to Client Nodes

Abstract:

Modern web applications are backed by geo-replicated data stores that offer highly available, low-latency data access by replicating and updating data at a data center close to the client. Some applications require even greater availability and responsiveness, and cache and update data directly on the mobile device or in the web browser. We propose to integrate client-side replication into the geo-replicated store in order to support an arbitrarily large database, and to ensure consistency and availability. These design goals translate into a difficult set of somewhat conflicting requirements. (i) Consistency: to ensure Causal+ Consistency (the strongest consistency model for highly available systems) all the way to the client. (ii) Scalability: to replicate only a small part of the database and metadata at a client, and to keep metadata size bounded and independent of extreme numbers of widely-spread clients. (iii) Fault-tolerance: to remain available despite network, data center and client failures.

We present the design of SwiftCloud, the first distributed data store that satisfies this combination of requirements. A client replicates only the data and metadata of interest to it. The consistency protocol ensures causally consistent access, even when a client switches between data centers, by trading off data freshness for safety. SwiftCloud keeps metadata small thanks to an appropriate separation of concerns: small vector clocks are used for causality, and scalar timestamps for uniqueness. This allows update logs to be pruned safely even when some client replicas are unavailable.
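
A hypothetical sketch of that metadata split (names and structure are mine, not SwiftCloud's actual code): the vector clock has one entry per data center rather than per client, which is what keeps it small, while a scalar timestamp makes each update unique.

```python
from dataclasses import dataclass, field

@dataclass
class Update:
    origin_dc: str
    scalar_ts: int                             # scalar timestamp: uniqueness
    deps: dict = field(default_factory=dict)   # vector clock: causality

def has_seen(replica_vc: dict, deps: dict) -> bool:
    """Causal delivery check: a replica may apply an update only after it
    has seen every update recorded in the update's vector clock `deps`."""
    return all(replica_vc.get(dc, 0) >= n for dc, n in deps.items())

replica_vc = {"DC1": 3, "DC2": 5}
u = Update("DC2", scalar_ts=6, deps={"DC1": 2, "DC2": 5})
print(has_seen(replica_vc, u.deps))   # True -> safe to apply u
```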

