
Privacy-Preserving Auditing Algorithms

Nina Mishra
Stanford University

Tuesday, March 22, 2005
JEC 3117 - 4:00 p.m. to 5:00 p.m.
Refreshments at 3:30 p.m.

Organizations now maintain large quantities of personal information. Consequently, there is a growing need to find ways to keep this confidential information private. The need for privacy directly competes with the need to use this data for the discovery of patterns. For example, in a dataset containing the HIV status of patients, we would like to keep the HIV status of any particular patient private while still allowing discovery of the total fraction of patients who are HIV+. We describe auditing algorithms that monitor an online stream of queries posed to a dataset and either deny the answer to a query if answering it would breach privacy, or give the true answer if it would not. We uncover a fundamental problem: existing offline auditing algorithms cannot be used to solve the online problem, because denials themselves leak information. We then propose a new model of auditing, called simulatable auditing, in which denials provably leak no information. Finally, we provide new simulatable auditing algorithms.
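To make the idea concrete, here is a toy sketch (not the talk's actual algorithms) of a simulatable auditor for SUM queries over real-valued records. The key property is that the deny/answer decision is computed from the queries alone, never from the data, so a denial cannot leak information about any record. The class name, the sum-query setting, and the span-based breach test are illustrative assumptions for this example.

```python
from fractions import Fraction

def rank(rows):
    # Rank of a matrix via Gaussian elimination over exact rationals.
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    cols = len(m[0]) if m else 0
    for c in range(cols):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

class SimulatableSumAuditor:
    """Audits SUM queries over n real-valued records.

    The deny/answer decision depends only on the queries asked so far,
    never on the data, so denials cannot leak information.
    """
    def __init__(self, n):
        self.n = n
        self.answered = []  # 0/1 indicator vectors of answered queries

    def _breaches(self, qs):
        # For real-valued SUM queries, some record i is fully determined
        # exactly when the basis vector e_i lies in the span of the
        # query indicator vectors -- a condition on the queries alone.
        base = rank(qs)
        for i in range(self.n):
            e = [0] * self.n
            e[i] = 1
            if rank(qs + [e]) == base:
                return True
        return False

    def audit(self, subset):
        q = [1 if i in subset else 0 for i in range(self.n)]
        if self._breaches(self.answered + [q]):
            return "deny"
        self.answered.append(q)
        return "answer"
```

For example, with four records, answering `sum({0,1})` is safe, but a follow-up `sum({1})` would pin down record 0's value and is denied; crucially, the denial is issued without ever consulting the data:

```python
aud = SimulatableSumAuditor(4)
aud.audit({0, 1})  # "answer"
aud.audit({1})     # "deny": together with the first query it isolates record 0
aud.audit({2, 3})  # "answer": disjoint from everything answered so far
```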

Bio sketch: Nina Mishra currently holds a joint appointment as a Senior Research Scientist at HP Labs and as an Acting Faculty member at Stanford University. Her research interests are in the design and analysis of data mining, machine learning, and privacy-preserving algorithms. She served as Program Chair for ICML'03 (the International Conference on Machine Learning) and has served on numerous data mining and machine learning program committees. She also serves on the Editorial Board of the Machine Learning journal. She earned a PhD in Computer Science from the University of Illinois at Urbana-Champaign.

Last updated: March 7, 2005