IMDEA Software


Christian Schilling

Monday, February 1, 2021

11:00am Zoom3 - https://zoom.us/j/3911012202 (pass: s3)

Christian Schilling, Interim Professor, University of Konstanz, Germany

Outside the Box: Scalable Formal Methods

Abstract:

I will talk about two fairly different but representative lines of my work. The first line of work is about reachability analysis of linear dynamical systems. That problem is considered solved from a theoretical point of view because arbitrary-precision algorithms using efficient set representations that scale to hundreds of dimensions are available. However, in many cases we are not interested in high precision but instead want a fast analysis result, or we need to deal with systems that have thousands of dimensions, to which even the best set representations do not scale. Fortunately, linear-algebra packages can easily deal with matrices of that order. We present a reachability approach that partially decomposes the system into low-dimensional subsystems in order to perform the set computations in low dimensions, which yields a dramatic increase in scalability. At the same time, our approach still performs the matrix computations in high dimensions, which preserves precision in practice.

The second line of work is about runtime monitoring of neural networks. Since it is often impossible to specify the correct behavior of neural networks, they cannot be statically verified. Furthermore, due to their black-box nature, it is hard to understand when neural networks “go wrong.” We focus on the general problem of detecting when a neural network operates outside the comfort range it was trained and validated for. To address this “anomaly detection” problem, we equip a neural network with a monitor that observes its behavior at runtime. This behavior is then compared to an internal model of “normal” behavior that was obtained during training. As this internal model, we propose an abstraction of the hidden feature layers. Abstractions provide us with both a fast answer at runtime and a one-sided guarantee: if the behavior is flagged as novel, then it was indeed not observed during training.
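
To make the decomposition idea from the first line of work concrete, here is a minimal Python/NumPy sketch. It is not the approach presented in the talk (which uses richer set representations and continuous-time dynamics); it only illustrates the general principle of keeping the reachable set as a product of low-dimensional pieces, here one interval per dimension, while the matrix arithmetic is still carried out in the full dimension. All names (reach_boxes and the toy matrix A) are invented for illustration.

    # Sketch: over-approximate the reachable states of a discrete-time
    # linear system x_{k+1} = A x_k, keeping the set as a Cartesian
    # product of intervals (a box). The matrix-vector products use the
    # full matrix A, but the set itself never leaves the box form, so
    # the set operations stay cheap and low-dimensional.
    import numpy as np

    def reach_boxes(A, center0, radius0, steps):
        """Return (center, radius) pairs over-approximating X_0, ..., X_steps."""
        absA = np.abs(A)
        center, radius = center0.copy(), radius0.copy()
        boxes = [(center, radius)]
        for _ in range(steps):
            # full-dimensional matrix-vector products ...
            center = A @ center
            # ... but the image of a box is again enclosed in a box:
            # |(A d)_i| <= sum_j |A_ij| r_j for all |d_j| <= r_j
            radius = absA @ radius
            boxes.append((center, radius))
        return boxes

    # toy 4-dimensional system with a small box of initial states
    A = np.array([[ 0.9, 0.1, 0.0, 0.0],
                  [-0.1, 0.9, 0.0, 0.0],
                  [ 0.0, 0.0, 0.8, 0.2],
                  [ 0.0, 0.0,-0.2, 0.8]])
    boxes = reach_boxes(A, center0=np.ones(4), radius0=0.1 * np.ones(4), steps=10)
    print(boxes[-1])

In this toy version the per-step cost is dominated by two matrix-vector products, which is what makes the scheme scale with the linear-algebra package rather than with the set representation.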
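The second line of work can likewise be illustrated with a small, hypothetical sketch. The monitor below records a box abstraction over hidden-layer feature vectors seen during training and flags a runtime input as novel if its feature vector falls outside that box; the monitors discussed in the talk are more refined (e.g., per-class abstractions), so this only conveys the one-sided guarantee. The class BoxMonitor and the feature extractor hidden(x) are assumed names, not part of any existing tool.

    # Sketch: abstraction-based runtime monitor for a neural network.
    # The "internal model of normal behavior" is a box enclosing all
    # hidden-layer feature vectors observed during training.
    import numpy as np

    class BoxMonitor:
        def __init__(self):
            self.lo = None
            self.hi = None

        def train(self, features):
            """features: array of shape (n_samples, n_hidden), collected
            at a chosen hidden layer while running the training set."""
            self.lo = features.min(axis=0)
            self.hi = features.max(axis=0)

        def is_novel(self, feature_vector):
            """One-sided guarantee: if this returns True, the feature
            vector lies outside the box and was therefore not observed
            during training. (False does not imply the input is safe.)"""
            return bool(np.any(feature_vector < self.lo) or
                        np.any(feature_vector > self.hi))

    # usage with a hypothetical feature extractor hidden(x):
    # monitor = BoxMonitor()
    # monitor.train(np.vstack([hidden(x) for x in training_inputs]))
    # if monitor.is_novel(hidden(new_input)):
    #     ...  # raise an alarm or fall back to a safe action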