NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at Vancouver Convention Center
Paper ID: 8389
Title: Deep Leakage from Gradients

The paper presents an attack against federated learning and shows that, under certain conditions, the raw training data can be reconstructed from the shared gradients. This is an interesting observation. Federated learning, despite offering no formal privacy guarantees, is gaining popularity among corporations that operate on large amounts of data, and in some deployments it is used under the assumption that it provides privacy. Demonstrating that this assumption is false may therefore have real-world impact. At the same time, the attack presented here may not be practical in settings where gradients are aggregated over large batches. This is a borderline case, but I lean towards acceptance, since the results may influence the design choices of large corporations.
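For context, the attack amounts to gradient matching: the adversary initializes dummy inputs and labels, then optimizes them so that the gradients they induce match the gradients shared by the victim. Below is a minimal sketch of this idea in PyTorch, assuming a toy linear classifier, a single-example gradient, and an illustrative iteration budget; the model, shapes, and hyperparameters are hypothetical and not the paper's exact setup.

import torch
import torch.nn.functional as F

# Toy model (hypothetical; stands in for the victim's network).
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(32 * 32 * 3, 10))

# Gradients the server would observe from one client example.
x_true = torch.randn(1, 3, 32, 32)
y_true = torch.tensor([3])
loss = F.cross_entropy(model(x_true), y_true)
true_grads = [g.detach()
              for g in torch.autograd.grad(loss, list(model.parameters()))]

# Dummy input and soft-label logits, optimized to match those gradients.
x_dummy = torch.randn(1, 3, 32, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    # Cross-entropy with soft labels, so the label is also recoverable.
    dummy_loss = torch.mean(torch.sum(
        -F.softmax(y_dummy, dim=-1)
        * F.log_softmax(model(x_dummy), dim=-1), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, list(model.parameters()),
                                      create_graph=True)
    # Distance between dummy gradients and the observed gradients.
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(100):
    opt.step(closure)
# If reconstruction succeeds, x_dummy now approximates x_true.

The reviewer's caveat about batching corresponds to this sketch operating on a batch of one: when the observed gradient is an average over many examples, the matching objective becomes much harder to invert.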