Do Gradient Inversion Attacks Make Federated Learning Unsafe?

Presenter

Shuhei Fujita

Abstract

Federated learning (FL) allows the collaborative training of AI models without the need to share raw data. This capability makes it especially attractive for healthcare applications, where patient and data privacy are of utmost concern. However, recent work on the inversion of deep neural networks from model gradients has raised concerns about the ability of FL to prevent the leakage of training data. In this talk, I will present the work of Hatamizadeh et al., in which the authors show that the attacks presented in the literature are impractical in FL use cases where the clients' training involves updating the Batch Normalization (BN) statistics, and they provide a new baseline attack that works in such scenarios. Furthermore, the work proposes new ways to measure and visualize potential data leakage in FL. This is a step towards establishing reproducible methods of measuring data leakage in FL and could help determine the optimal tradeoff between privacy-preserving techniques, such as differential privacy, and model accuracy, based on quantifiable metrics.
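
To give an intuition for what "inversion from model gradients" means, the following is a minimal sketch of a generic gradient-matching attack in PyTorch, in the spirit of deep-leakage-from-gradients style attacks rather than the specific method presented in the paper. The toy linear model, the single "private" example with a known label, and all shapes and hyperparameters are illustrative assumptions: the attacker optimizes dummy data so that its gradients match the gradients observed from a client update.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy client model and one "private" training example (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])

# Gradients the server would observe from a naive single-step client update.
true_grads = torch.autograd.grad(criterion(model(x_true), y_true),
                                 model.parameters())
true_grads = [g.detach() for g in true_grads]

# Attacker: optimize random dummy data so its gradients match the observed ones
# (label assumed known here, which is a common simplification in such attacks).
x_dummy = torch.rand_like(x_true, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)

for step in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(criterion(model(x_dummy), y_true),
                                      model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    opt.step()

print("final gradient-matching loss:", grad_diff.item())
print("reconstruction MSE:", ((x_dummy.detach() - x_true) ** 2).mean().item())
```

For a simple model like this the reconstruction can be close to exact; the paper's argument is that realistic FL settings, in particular clients that also update BN statistics over many local iterations, make such attacks far less practical than this toy setup suggests.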

Refs