**Cesar Kunz**, *Post-doctoral Researcher, Universidad Politécnica de Madrid (UPM), España*

I will be presenting the paper "Discovering Properties about Arrays in Simple Programs" by Nicolas Halbwachs and Mathias Péron, from Verimag (France).

Array bound checking and array dependency analysis (for parallelization) have been widely studied. However, there are far fewer results about analyzing properties of array contents. In this paper, we propose a way of using abstract interpretation for discovering properties about array contents in some restricted cases: one-dimensional arrays, traversed by simple "for" loops. The basic idea consists in partitioning arrays into symbolic intervals (e.g., [1, i−1], [i, i], [i+1, n]), and in associating with each such interval I and each array A an abstract variable A_I; the new idea is to consider relational abstract properties P(A_I, B_I, ...) about these abstract variables, and to interpret such a property pointwise on the interval I: for all i in I, P(A[i], B[i], ...). The abstract semantics of our simple programs according to these abstract properties has been defined and implemented in a prototype tool. The method is able, for instance, to discover that the result of an insertion sort is a sorted array, or that, in an array traversal guarded by a "sentinel", the index stays within the bounds.
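To make the interval idea concrete, here is a small Python sketch (mine, not the authors'): insertion sort instrumented with the pointwise property over the symbolic interval [0, i−1] that the analysis would discover automatically.

```python
def insertion_sort(a):
    """Insertion sort, instrumented with the pointwise invariant over the
    symbolic interval [0, i-1]: the prefix before position i is sorted."""
    for i in range(1, len(a)):
        # For all k in [0, i-2]: a[k] <= a[k+1]  (pointwise on the interval)
        assert all(a[k] <= a[k + 1] for k in range(i - 1))
        v, j = a[i], i
        while j > 0 and a[j - 1] > v:
            a[j] = a[j - 1]
            j -= 1
        a[j] = v
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```

The assertion is exactly the kind of relational property P(A_I) the method infers statically; here it is merely checked at run time.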

**Pablo Nogueira**, *Post-doctoral Researcher, Universidad Politécnica de Madrid (UPM), España*

For lack of a speaker, I'll try to improvise a short talk about functional programming and the algebra of programming, with examples, from a programming perspective, about equational reasoning, functors, F-algebras, F-coalgebras, monads, etc., for as long as your patience lasts.

**Remy Haemmerle**, *Post-doctoral Researcher, Universidad Politécnica de Madrid (UPM), España*

The standard method for proving confluence of term rewriting systems (TRSs) is based on the study of joinability of critical pairs. This notion of critical pair, obtained by superposing the left-hand sides of rewrite rules, has been adapted to a wide variety of rewriting systems, ranging from conditional TRSs and higher-order rewriting to graph rewriting and constraint handling rules. However, there is so far no general definition of an abstract notion of critical pair from which the concrete definitions could be obtained as particular instances.

In this talk I will first recall some classical abstract properties about confluence. Then I will present an abstract notion of critical pair for arbitrary reduction relations equipped with context operators, and show how this notion applies to TRSs and multiset rewriting.
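As a warm-up, the classical recipe is easy to play out on string rewriting, where superpositions of left-hand sides are just overlapping words. The two rules below are my own toy (non-confluent) example, not from the talk:

```python
from itertools import product

RULES = [("ab", "b"), ("ba", "a")]   # toy, length-decreasing rules

def one_step(s):
    """All strings reachable from s in exactly one rewrite step."""
    out = set()
    for lhs, rhs in RULES:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

def normal_forms(s):
    """All normal forms reachable from s (the rules shorten strings,
    so the recursion terminates)."""
    succ = one_step(s)
    return {s} if not succ else set().union(*(normal_forms(t) for t in succ))

def critical_peaks():
    """Superpose left-hand sides: glue two lhs together wherever a proper
    prefix of one is a suffix of the other."""
    peaks = set()
    for (l1, _), (l2, _) in product(RULES, repeat=2):
        for k in range(1, min(len(l1), len(l2))):
            if l1.endswith(l2[:k]):
                peaks.add(l1 + l2[k:])
    return peaks

def joinable(w):
    """A peak is joinable iff all its one-step reducts share a normal form."""
    return all(normal_forms(t) & normal_forms(u)
               for t, u in product(one_step(w), repeat=2))

for w in sorted(critical_peaks()):
    print(w, "->", sorted(one_step(w)), "joinable:", joinable(w))
```

Both critical peaks ("aba" and "bab") turn out not to be joinable, so the toy system is not confluent; the talk's contribution is an abstract version of exactly this superposition construction.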

**Angel Herranz**, *Assistant Professor, Universidad Politécnica de Madrid (UPM), España*

Writing a language processor from scratch is a daunting task; it requires knowledge of specialised techniques and a considerable amount of time and intellectual effort. Parser generators like Yacc and its heirs generate parsers from BNF grammars restricted by the parsing method. The remaining phases of the processor are hardwired in semantic actions hooked on grammar productions which, together with error-recovery stuff, clutter up the grammar and render it unmanageable and unmaintainable.

I'll talk about Generalised Object Normal Form (GONF) and MTP, a syntax formalism and its related tool, where grammatical productions simultaneously specify parseable concrete syntax (i.e., a language and its parser) and the collection of language-independent data type definitions that represent the AST, which can be materialised without annotations in target languages supporting inheritance or algebraic types and, desirably, parametric polymorphism.
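The flavour of "one production, two readings" can be sketched in Python (this is my illustration, not the actual GONF notation or MTP output): each production both names an AST constructor and dictates the parser.

```python
from dataclasses import dataclass

# One set of productions, read twice:
#   Expr ::= Num(int) | Add "(" Expr "," Expr ")"
# Reading 1: the AST datatype (here, dataclasses standing in for
# algebraic types).  Reading 2: the recursive-descent parser below.

@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: "Num | Add"
    right: "Num | Add"

def parse(tokens):
    """Parser read straight off the productions (tokens is a mutable list)."""
    tok = tokens.pop(0)
    if tok == "Add":
        assert tokens.pop(0) == "("
        left = parse(tokens)
        assert tokens.pop(0) == ","
        right = parse(tokens)
        assert tokens.pop(0) == ")"
        return Add(left, right)
    return Num(int(tok))

print(parse("Add ( 1 , Add ( 2 , 3 ) )".split()))
```

The point of GONF/MTP is that this duplication of effort disappears: the datatype and the parser are both generated from the same production.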

Angel and Pablo.

**Samir Genaim**, *Post-doctoral Researcher, Universidad Politécnica de Madrid (UPM), España*

Classic techniques for proving termination of a given loop require the identification of a measure (ranking function) that maps program states to the elements of a well-founded domain. Termination is guaranteed if this measure is shown to decrease with each iteration of the loop. This is a global termination condition, as it requires the measure to decrease between every two consecutive states. For multi-path loops (e.g., loops with if statements) such ranking functions might be complex and difficult to synthesize (lexicographic orders, multiset orders, etc.). In recent years, systems based on local termination conditions have been emerging. Here, termination is guaranteed (under some conditions) if we find a set of (simple) ranking functions, one for each path. In this talk I will present the local approach to proving termination, illustrate its formal justification, and discuss its complexity.
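A toy Python rendering of the global-vs-local contrast (my example, not from the talk): in the two-path loop below, no single linear function of (x, y) decreases on every iteration, yet each path has an obvious ranking function of its own.

```python
import random

def multipath_loop(x, y, rng=random.Random(42)):
    """A terminating two-path loop.  A global certificate needs the
    lexicographic pair (x, y); locally, path 1 is ranked by x alone and
    path 2 by y alone, and each assert checks its path's local decrease."""
    steps = 0
    while x > 0 and y > 0:
        if rng.random() < 0.5:                  # path 1: local rank f1 = x
            old = x
            x, y = x - 1, rng.randint(0, 100)   # y may jump arbitrarily
            assert x < old
        else:                                   # path 2: local rank f2 = y
            old = y
            y -= 1
            assert y < old
        steps += 1
    return steps

print(multipath_loop(3, 3))
```

The local approach presented in the talk gives conditions under which such per-path functions can be combined into a sound termination proof.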

**German Puebla**, *Associate Professor, Universidad Politécnica de Madrid (UPM), España*

Existing algorithms for on-line partial evaluation (PE) of logic programs, given an initial program and a description of run-time queries, deterministically produce a specialized program using a fixed control strategy that is applied to every call pattern being specialized. In this talk I will present "Poly-Controlled Partial Evaluation (PCPE)", a powerful approach to partial evaluation which allows using different control strategies for different call patterns. Thus, rather than a single specialized program, PCPE can obtain a set of candidate specialized programs. The quality of each of these programs is assessed through the use of a fitness function, which can be resource aware, in the sense that it can take multiple factors into account, such as run-time, memory consumption, and code size of the specialized programs.
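A drastically simplified Python caricature of the idea (all names and the fitness weights are mine): two control strategies specialize the same call pattern, pow(x, 3), and a resource-aware fitness function picks the winner.

```python
def specialize_unfold(n):
    """Control strategy 1: fully unfold pow(x, n) into straight-line code."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    return f"def pow_{n}(x): return {body}"

def specialize_residual(n):
    """Control strategy 2: residualize the generic loop (no real unfolding)."""
    return (f"def pow_{n}(x):\n"
            f"    r = 1\n"
            f"    for _ in range({n}): r *= x\n"
            f"    return r")

def fitness(code, size_weight=1.0, ops_weight=5.0):
    """Resource-aware fitness, smaller is better: code size in characters
    plus a crude runtime estimate (loops cost more than multiplications)."""
    ops = code.count("*") + 10 * code.count("for")
    return size_weight * len(code) + ops_weight * ops

candidates = [specialize_unfold(3), specialize_residual(3)]
print(min(candidates, key=fitness))
```

In real PCPE the candidates are whole specialized logic programs and the strategies are genuine PE control rules, but the selection-by-fitness step works just like this.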

This is joint work with Claudio Ochoa.

**Julio Mariño**, *Assistant Professor, Universidad Politécnica de Madrid (UPM), España*

Originating from the work of De Morgan (150 years ago!), C. S. Peirce and Schröder, and reaching its present equational form with the contributions of Tarski in the 1940s, the calculus of binary relations remained a mathematical curiosity until researchers started to draw connections with various areas of mathematics and computer science.

I will present the calculus and its chronology, with short examples illustrating current application areas, including ongoing research in our group following Jim Lipton's introduction of the subject.

**Alfonso Acosta**, *Post-doctoral Researcher, Universidad Politécnica de Madrid (UPM), España*

Type arithmetic (or type-level computation) is calculation over types.

Haskell lacks type-level lambdas and thus, one could say, it also lacks native support for type arithmetic. However, it is possible to use multi-parameter type classes (originally devised to support multiparameter function overloading) to implement type-level computations.

Some immediate applications of type-level programming in Haskell are:

- Heterogeneous lists and extensible records.
- Emulation of dependent types, e.g. statically-safe vectors, in which the vector bounds can be checked at compile-time.

During the talk we will go through a simplified implementation of type-level Booleans and Naturals.
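In Haskell the computation happens statically, in the type checker; as a rough preview of the recursion pattern, here is a dynamic Python analogue (mine, merely suggestive) that computes addition on Peano naturals represented as *classes* rather than values:

```python
class Z:
    """Type-level zero."""

def S(n):
    """Type-level successor: builds a new class whose 'pred' is the class n."""
    return type(f"S[{n.__name__}]", (), {"pred": n})

def add(m, n):
    """Addition by recursion on the classes, mirroring (roughly) the Haskell
    multi-parameter type class instances
        instance Add Z n n
        instance Add m n r => Add (S m) n (S r)"""
    if m is Z:
        return n
    return S(add(m.pred, n))

def to_int(n):
    """Reflect a type-level natural back into an ordinary int."""
    return 0 if n is Z else 1 + to_int(n.pred)

Two, Three = S(S(Z)), S(S(S(Z)))
print(to_int(add(Two, Three)))  # 5
```

The crucial difference, and the point of the talk, is that in Haskell the equivalent recursion is resolved entirely at compile time, so ill-sized vectors simply fail to type-check.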

**Nik Swoboda**, *Researcher, Universidad Politécnica de Madrid (UPM), España*

In this presentation I will start by giving a brief introduction to the field of Diagrammatic Reasoning. Then I will present a summary of the book "Mathematical Reasoning with Diagrams: From Intuition to Automation" by Mateja Jamnik. From the cover of the book:

Theorems in automated theorem proving are usually proved by formal logical proofs. However, there is a subset of problems that humans can prove by the use of geometric operations on diagrams, so-called diagrammatic proofs. This book investigates and describes how such diagrammatic reasoning about mathematical theorems can be automated.

**Damiano Zanardini**, *Post-doctoral Researcher, Universidad Politécnica de Madrid (UPM), España*

As two Manuels recently said, Wikipedia pages are never as precise as you want. Yet, probably, most Wikipedia accesses actually look for one-line information (e.g., the birth year of someone)! The same goes for slicing: you may prefer conciseness to soundness, if what you are looking for is only some help in debugging... Thin slicing tries to do this by ignoring some information about the program structure: you may not find the bug, but most of the time you go straight to the point.

I will talk about a PLDI 2007 paper (not mine, sigh) I liked.

**Manuel Clavel**, *Associate Research Professor, Instituto IMDEA Software*

In this talk we will report on our experience using the so-called model-driven security approach in an MDA industrial project. In model-driven security, *designers specify system models along with their security requirements and use tools to automatically generate system architectures from the models.* During the talk, we will discuss the languages that we used to model both the functional and the security requirements of the system, and we will describe the transformation function that we developed to build the system's access-control infrastructure from the security-design models.

The talk will conclude with the lessons that we learned about the feasibility and practical industrial relevance of the model-driven security approach, and about the opportunities/challenges that it offers/presents for the application of formal methods in industrial projects.

**Manuel Carro**, *Associate Professor, Universidad Politécnica de Madrid (UPM), España*

Garbage collection (GC) is a form of automatic memory management. The garbage collector, or just collector, attempts to reclaim garbage, or memory used by objects that will never be accessed or mutated again by the application. Garbage collection was invented by John McCarthy around 1959 to solve the problems of manual memory management in his Lisp programming language.

Garbage collection is often portrayed as the opposite of manual memory management, which requires the programmer to specify which objects to deallocate and return to the memory system. However, many systems use a combination of the two approaches, and there are other techniques being studied (such as region inference) to solve the same fundamental problem.
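For the curious, the oldest tracing scheme, mark-and-sweep, fits in a few lines of Python (a minimal sketch of mine, not anyone's production collector):

```python
class Obj:
    """A heap object holding references to other objects."""
    def __init__(self, *refs):
        self.refs = list(refs)
        self.marked = False

def collect(roots, heap):
    """Minimal mark-and-sweep: mark everything reachable from the roots,
    then sweep (drop) the rest.  Returns the surviving objects."""
    stack = list(roots)
    while stack:                           # mark phase
        obj = stack.pop()
        if not obj.marked:
            obj.marked = True
            stack.extend(obj.refs)
    live = [o for o in heap if o.marked]   # sweep phase
    for o in live:
        o.marked = False                   # reset marks for the next cycle
    return live

a, b, c = Obj(), Obj(), Obj()
a.refs = [b]                               # a -> b;  c is unreachable garbage
print(len(collect([a], [a, b, c])))        # 2
```

Real collectors differ mainly in when they run, how they find roots, and how they avoid pausing the program; the reachability idea is the same.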

NOTES FROM THE ORGANIZER:

PS: The speaker has already declared some disagreement with the Wikipedia description of garbage collection, but this is the best that I [Cesar] could cook up in just a few seconds, and - you know - I enjoy being controversial :)

PS2: Considering the topic of the talk, we should leave the room squeaky-clean.

PS3: With a bit of luck we will be enjoying a new whiteboard, courtesy of IMDEA Software.

**Pablo Nogueira**, *Post-doctoral Researcher, Universidad Politécnica de Madrid (UPM), España*

Wadler coined the name Expression Problem for the problem of extending data and functions on that data in a modular and type-safe fashion. I have coined the term "Twisted" Expression Problem for the problem of adding to the Expression Problem (sic) the possibility of typing data at the meta-level (by the compiler's type-checker, not a programmer-defined type-checker program). I'll explain what the previous convoluted and circuitous phrase means. Just want to sound professional here.

Mainstream object-oriented languages unsuccessfully tackle the Expression Problem using inheritance. Bah, mostly rubbish (op. cit. Emilio), so won't go there much.

Type classes are a nice language mechanism for principled overloading. Surprise: they can be used to represent data in a Church-encoding-like fashion. Surprise: they are modularly and safely extensible, and can be used to solve the Twisted Expression Problem. Surprise: the visitor pattern is the functional homecoming for OO programmers, and guess what, there is a connection to the type class solution. Which? You'll have to find out yourself.

WARNING: I will make use of inappropriate and impolite language such as "type", "static type-checking", "safety", "extensibility", and (I know, despicable), functional programming. Apologies for any inconvenience.

PS: To add insult to injury, if time permits I will argue that the best solution would be provided by extensible dependent types, which currently do not exist. Anyone interested?

**Cesar Sanchez**, *Assistant Research Professor, Instituto IMDEA Software*

I will talk about regular linear temporal logic (RLTL), a logic that generalizes linear temporal logic with the ability to use regular expressions arbitrarily as sub-expressions.

The main result about RLTL is that every LTL operator can be defined as a context in regular linear temporal logic. This implies that there is a (linear) translation from LTL to RLTL. Unlike LTL, regular linear temporal logic can define all ω-regular languages, while still keeping the satisfiability problem in PSPACE. Unlike the extended temporal logics ETL_*, RLTL is defined with an algebraic signature. In contrast to the linear-time mu-calculus, RLTL does not depend on fix-points in its syntax.

I will try to follow a very "pedagogic" style, elaborating on the concepts of verification, linear temporal logic, PSPACE complexity, etc., as much as demanded. No previous knowledge of these topics beyond the basic level of theoretical computer science is assumed.

This is joint work with Martin Leucker, from T.U. Munich.