The newest assistant professor of the IMDEA Software Institute, Kaushik Mallik, comes from Kolkata, India, and is passionate about food, travel, and science and technology. He studied electrical engineering for his bachelor's, specialized in control systems during his master's, and went on to pursue a PhD in computer science. His research interests span a diverse set of topics in the areas of formal methods, control systems, and game theory. During his PhD at the Max Planck Institute for Software Systems (MPI-SWS), Germany, he worked on efficient, verified controller design for cyber-physical systems, which won him the 2023 ETAPS Doctoral Dissertation Award. During his postdoctoral research at the Institute of Science and Technology Austria (ISTA), he worked on building monitors that check the decision biases of AI decision-makers at runtime.
Q: Tell us about your work so far.
A: I primarily work on topics related to verified software design for cyber-physical systems (CPS). CPS are systems in which computer software interacts with dynamical components whose behavior follows the laws of physics. Examples of CPS are now abundant, and include vacuum-cleaning robots, self-driving cars, and power-system controllers. A majority of these systems are safety-critical, so it is important that the software components never malfunction. To this end, I develop algorithmic design principles for CPS software that come with formal correctness guarantees. My work over the last seven years has redefined the state of the art in this area: I developed techniques that are faster, more modular, and can solve richer classes of problems than existing approaches.
More recently, I ventured into the problem of ensuring fairness in AI. Here we consider AI decision-makers that make decisions about humans, such as whether to shortlist a job applicant, and we require that the decisions do not discriminate against individuals based on protected attributes such as gender or race. Such discriminatory behaviors are now well documented in AI; they arise from historical biases and stereotypes commonly present in the training datasets of AI models. We developed runtime monitors that observe the decisions made by deployed AI decision-makers and, after each new observation, output how fair or biased the decisions have been up to the current point in time. Our monitors do not require any knowledge about the monitored system, and can therefore be used as independent third-party fairness watchdogs for AI decision-makers.
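To give a flavor of the idea, the following is a minimal, hypothetical sketch (not the actual monitors from this research) of such a runtime monitor: it assumes decisions arrive as (group, accepted) pairs and, after each observation, reports the gap in acceptance rates between groups, one simple quantitative notion of bias.

```python
from collections import defaultdict


class FairnessMonitor:
    """Observes a stream of decisions and reports the acceptance-rate
    gap between groups seen so far (a demographic-parity-style measure).
    Needs no knowledge of the decision-maker's internals."""

    def __init__(self):
        self.total = defaultdict(int)     # decisions seen per group
        self.accepted = defaultdict(int)  # positive decisions per group

    def observe(self, group, accepted):
        """Record one decision and return the current bias estimate."""
        self.total[group] += 1
        if accepted:
            self.accepted[group] += 1
        return self.bias()

    def bias(self):
        """Largest difference in acceptance rates across observed groups."""
        rates = [self.accepted[g] / self.total[g] for g in self.total]
        return max(rates) - min(rates) if rates else 0.0


# Example: the monitor watches four decisions from a deployed system.
monitor = FairnessMonitor()
for group, accepted in [("A", True), ("B", False), ("A", True), ("B", True)]:
    gap = monitor.observe(group, accepted)
```

After these four observations, group A has been accepted twice out of two and group B once out of two, so the reported gap is 0.5. The real monitors handle richer fairness properties and give statistical confidence guarantees, but the third-party, observation-only character is the same.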
Q: What will your group at the IMDEA Software Institute work on in the near future?
A: Current work on formal CPS software design is inherently model-based, meaning its correctness relies on the validity of an assumed mathematical model of the underlying dynamics. In reality, there are almost always mismatches between the CPS dynamics and their assumed models, and CPS software must tolerate modeling errors that may be uncovered only during deployment. There is much work on robustifying CPS software against modeling uncertainties, but the existing techniques are often too rigid and require fixed a priori bounds on modeling errors, which is impractical in many real-world use cases. My group will bridge this gap at both the theoretical and practical levels.
Q: What attracted you to the IMDEA Software Institute?
A: The IMDEA Software Institute has emerged as one of the top computer science research institutes in Europe, attracting some of the best scientists in their respective sub-fields and winning many prestigious grants, honors, and industrial partnerships. Besides, Madrid struck me as a vibrant city with a very high standard of living. Therefore, it was an easy choice.