Maximal adversarial perturbations for obfuscation (Work done at Ganaka Lab)
Adversarial perturbations for privacy against both human perception and model (machine)-based detection
Healthcare machine learning
We employ adversarial perturbations to obfuscate selected attributes in raw data while preserving the rest. Existing adversarial perturbation methods are typically used for data poisoning: the raw data is perturbed minimally so that the machine learning model's performance is adversely impacted while human vision cannot perceive any difference in the poisoned dataset. We instead apply relatively maximal perturbations to the raw data to conditionally damage the model's classification of one attribute while preserving its performance on another attribute. The maximal nature of the perturbation also impairs human perception of the hidden attribute, in addition to degrading model performance. We validate our results qualitatively, by showing the obfuscated dataset, and quantitatively, by showing that models trained on clean data cannot predict the hidden attribute from the perturbed dataset while still predicting the remaining attributes. Such perturbations are necessitated in sectors like healthcare due to privacy, fairness, ethical, and regulatory concerns.
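As an illustrative sketch only (not the exact procedure used in this work), one way to realize such an attribute-selective objective is an iterative signed-gradient perturbation under a deliberately large L-infinity budget: the loss of a classifier for the hidden attribute is maximized while the loss of a classifier for the preserved attribute is kept low. The models `hidden_model` and `keep_model`, the budget `eps`, and the trade-off weight `lam` below are assumptions introduced for illustration.

```python
# Hypothetical sketch, assuming PyTorch and two pretrained classifiers,
# one per attribute. `eps` is intentionally large so the perturbation is
# visible to humans as well as disruptive to models.
import torch
import torch.nn.functional as F

def obfuscating_perturbation(x, y_hidden, y_keep,
                             hidden_model, keep_model,
                             eps=0.3, alpha=0.02, steps=40, lam=1.0):
    """Perturb x inside a large L-inf ball so the hidden attribute becomes
    hard to classify while the preserved attribute stays predictable."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss_hidden = F.cross_entropy(hidden_model(x_adv), y_hidden)  # to maximize
        loss_keep = F.cross_entropy(keep_model(x_adv), y_keep)        # to keep small
        loss = loss_hidden - lam * loss_keep
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()            # ascend on combined loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)       # project into the (large) ball
            x_adv = x_adv.clamp(0.0, 1.0)                  # keep a valid pixel range
    return x_adv.detach()

# Usage with untrained stand-in models (shapes and API only, not real results).
hidden_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 2))
keep_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(8, 3, 32, 32)
y_hidden = torch.randint(0, 2, (8,))
y_keep = torch.randint(0, 10, (8,))
x_obf = obfuscating_perturbation(x, y_hidden, y_keep, hidden_model, keep_model)
```

The weight `lam` controls the trade-off between damaging the hidden attribute and preserving the other attribute; unlike standard poisoning attacks, `eps` is not chosen to be imperceptible.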
Principal Investigators
Team Members