
Hana Chockler, Full Professor, King's College London
In this talk I will look at the application of causality to debugging models and explainability. Specifically, I will talk about actual causality as introduced by Halpern and Pearl, and its quantitative extensions. This theory has proven extremely useful in various areas of computer science, thanks to a good match between the results it produces and our intuition, and it is particularly useful for explaining the outputs of large AI systems. I will argue that explainability can be viewed as a debugging technique and illustrate this approach with a number of examples. I will discuss the differences between the traditional view of explainability as a human-oriented technique and the type of explainability we are proposing, which is essentially a window into the (otherwise black-box) system. The talk is reasonably self-contained and does not assume any prior knowledge of AI/ML.
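To give a flavour of the idea, here is a minimal sketch (mine, not from the talk) of the but-for test at the heart of Halpern–Pearl actual causality, on the classic "forest fire" toy model with two potential causes. The model, variable names, and the simplified contingency search are all illustrative assumptions; the full HP definition imposes additional conditions on the witness.

```python
from itertools import product

def fire(lightning, match):
    # Disjunctive toy model: either source alone suffices to start the fire.
    return lightning or match

# The actual world: both the lightning struck and the match was dropped.
actual = {"lightning": True, "match": True}

def is_actual_cause(var, model, actual):
    # Simplified AC2: flip `var` and search over contingencies that freeze
    # the remaining variables at alternative values (the "witness" in HP
    # terminology); `var` counts as a cause if some contingency makes the
    # flip change the outcome.
    others = [v for v in actual if v != var]
    outcome = model(**actual)
    for values in product([True, False], repeat=len(others)):
        world = dict(zip(others, values))
        world[var] = not actual[var]
        if model(**world) != outcome:
            return True
    return False

print(is_actual_cause("lightning", fire, actual))  # True
print(is_actual_cause("match", fire, actual))      # True
```

Note that a plain but-for test would declare neither event a cause (each alone suffices); the contingency is what lets both be identified as actual causes.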
Time and place:
11:00 302-Mountain View and Zoom3 (https://zoom.us/j/3911012202, password:@s3)
IMDEA Software Institute, Campus Montegancedo
28223-Pozuelo de Alarcón, Madrid, Spain

Eleni Straitouri, PhD Student, Max Planck Institute for Software Systems
The remarkable advances in AI have given rise to a growing interest in AI-assisted decision support in domains ranging from medicine and drug discovery to criminal justice and education. The ultimate goal of AI-assisted decision support is to optimally combine the complementary strengths of humans and AI models to achieve better outcomes than either can achieve on their own; in short, human-AI complementarity. Achieving this goal, however, has proven to be a major challenge, as it typically requires humans to understand when they can rely on the AI model for their decision, which is highly non-trivial.
My research shows that it is possible to circumvent this challenge and achieve human-AI complementarity through an alternative design of decision support systems. The key principle underpinning this design is to adaptively control the level of human agency by using an AI model to narrow down the decisions a human can take to a subset. In this talk, I introduce this design in the context of independent and sequential decision-making tasks, and present provably data-efficient algorithmic methods to identify the level of human agency under which humans maximize their performance in the decision-making task. Under this optimal level of human agency, the proposed design achieves human-AI complementarity in practice, as demonstrated by two large-scale human studies with a total of more than 4,000 participants. I conclude the talk by discussing the challenges and opportunities in human-AI complementarity opened up by AI models that generate natural language, highlighting emerging avenues for future research.
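As a toy illustration (my sketch, not the speaker's algorithm), the subset-based design can be pictured as follows: the AI model scores the candidate decisions, keeps only the top-k, and the human decides within that narrowed subset. The scores and labels below are hypothetical; k plays the role of the level of human agency (k = 1 amounts to full automation, k equal to the number of options to full agency).

```python
def support_subset(scores, k):
    # Keep the k options the model considers most likely; the human then
    # chooses freely within this subset.
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[:k])

# Hypothetical model scores over three candidate labels.
model_scores = {"cat": 0.6, "dog": 0.3, "fox": 0.1}
subset = support_subset(model_scores, k=2)
print(sorted(subset))  # ['cat', 'dog']
```

The interesting question, which the talk addresses with data-efficient methods, is how to choose the level of agency (here, k) so that human performance is maximized.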
Time and place:
10:00 302-Mountain View and Zoom3 (https://zoom.us/j/3911012202, password:@s3)
IMDEA Software Institute, Campus Montegancedo
28223-Pozuelo de Alarcón, Madrid, Spain

Dongwei Xiao, Postdoctoral Researcher, The Hong Kong University of Science and Technology (HKUST)
Privacy-Enhancing Technologies (PETs) are foundational for a future where data can be used without compromising privacy. While the community has largely focused on advancing the cryptographic foundations of PETs, their real-world security is threatened by the very software systems designed to make them accessible, including PET-oriented compilers and frameworks.
The goal of my research is to ensure that the practical systems supporting PETs are dependable. In this talk, I will present my work on developing novel, automated techniques to systematically uncover critical vulnerabilities in the software systems of PETs. I will present two thrusts of my research: (1) automatically discovering severe logic bugs in domain-specific compilers for PETs, and (2) identifying and mitigating new, subtle security risks in PET-enhanced machine learning frameworks. The tools from this research have uncovered dozens of bugs (some with high security impact) in high-stakes PET systems and have been adopted by leading PET industry users. I will conclude by discussing my future research vision towards building provably dependable PET ecosystems.
Time and place:
10:00 302-Mountain View and Zoom3 (https://zoom.us/j/3911012202, password:@s3)
IMDEA Software Institute, Campus Montegancedo
28223-Pozuelo de Alarcón, Madrid, Spain

Lionel Parreaux, Assistant Professor, Hong Kong University of Science and Technology
Pattern matching is a fundamental feature of functional programming languages that is also being adopted by mainstream object-oriented languages, such as Java and Python. In this talk, I will discuss recent improvements and simplifications we proposed for pattern matching syntax. I will also present our ongoing follow-up work: we propose “composable recursive patterns and transformations”, a new way of writing data-oriented code that can be compiled efficiently and integrates well with structural types and subtyping, making writing such code safer, more modular, and more efficient.
Time and place:
11:00 302-Mountain View and Zoom3 (https://zoom.us/j/3911012202, password:@s3)
IMDEA Software Institute, Campus Montegancedo
28223-Pozuelo de Alarcón, Madrid, Spain

Arthur Oliveira, PhD candidate, Yale University
Concurrent and distributed systems are pervasive, yet verifying their correctness remains challenging. A core difficulty is heterogeneity: verification techniques developed for one computational model rarely transfer to others. In this talk, I present compositional linearizability, a framework that reconstructs linearizability through a compositional lens. This perspective yields a general theory from which correctness criteria for specific domains can be systematically derived, as demonstrated in work on crash-aware systems and systems with liveness requirements. The framework leads to novel verification techniques that enable modular reasoning about complex concurrent objects, mechanized in the Rocq proof assistant. These results open promising new paths toward trustworthy distributed systems.
Time and place:
10:00 302-Mountain View and Zoom3 (https://zoom.us/j/3911012202, password:@s3)
IMDEA Software Institute, Campus Montegancedo
28223-Pozuelo de Alarcón, Madrid, Spain