Thomas Humphries, PhD candidate, University of Waterloo
There is no one-size-fits-all solution to preserving privacy in artificial intelligence and data science. While insights derived from sensitive data can benefit society, privacy is typically at odds with utility, performance, usability, or some combination of these objectives. Furthermore, each system differs in how it defines these objectives and in how they interact with one another. If the cost to any one objective is too high, the system will not be deployed, or worse, it will be deployed with a weakened privacy guarantee, exposing users to potential harm. In this talk, I will present my work on addressing this challenge from multiple angles. First, through strategic algorithm design, my work creates systems with improved trade-offs, enabling their deployment. Second, my work audits private systems to expose the risks that arise when privacy claims are misleading. The talk expands on the design of differentially private evolutionary algorithms, the construction of a novel multi-party computation protocol, and the mismatch between privacy expectations and reality in machine learning.