NeurIPS 2020

Auditing Differentially Private Machine Learning: How Private is Private SGD?


Review 1

Summary and Contributions: The paper studies the privacy guarantees of private stochastic gradient descent based methods. These methods are widely used in privacy-preserving machine learning in practice. One key idea is to clip the gradients computed at every iteration so that they have bounded norm, and then add Gaussian noise to the clipped gradients. When doing this in practice, the choice of privacy parameters becomes very important. It is common to pick epsilon greater than 1 in practice, while in theory epsilon is usually assumed to be smaller than 1. Higher epsilon means less privacy and lower epsilon means higher privacy. However, there is a large gap between the strong formal guarantees promised by DP and the practical implications of the chosen privacy parameter. There has been very little work in this domain, most notably by Jayaraman and Evans, and Carlini et al. The current submission improves their results quantitatively by a factor of 10.
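
For concreteness, a minimal sketch of the clip-then-noise step described above, assuming per-example gradients are available as NumPy arrays (the function and parameter names here are illustrative and not taken from the paper's code):

import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD update: clip each per-example gradient, average, add Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its norm exceeds the clipping threshold.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping norm (the sensitivity) and batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads), size=avg.shape)
    return params - lr * (avg + noise)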

Strengths: The current submission uses a data poisoning attack that is more robust to gradient clipping. The key observation in the paper is that previous works poisoned the data randomly, so the effect of the poisoned points can be nullified if they lie in a direction where the gradient is already large (and hence gets clipped). The paper uses this observation to find the direction in which the variance of the gradient is low and launches the poisoning attack accordingly. In more detail, the feature vector is the singular vector corresponding to the smallest singular value, i.e., the lowest-variance direction. The label is then the class that maximises the gradient norm. The paper then adds k copies of this poisoned point. I like the central idea in the paper and I believe it makes a significant improvement in a direction that has a lot of practical relevance.
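
For concreteness, a rough sketch of how such a clipping-aware poisoning point could be constructed, following my reading of the description above (binary setting; model_proba is an assumed callable returning class probabilities, and none of these names come from the authors' code):

import numpy as np

def clipping_aware_poison(X, model_proba, k, scale=1.0):
    """Construct a poisoning point in the lowest-variance direction of X and replicate it k times."""
    # Right singular vector with the smallest singular value = lowest-variance direction.
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    x_p = scale * Vt[-1]
    # Label the point with the class the model finds least likely, which (for logistic
    # regression) makes the gradient at (x_p, y_p) as large as possible.
    y_p = int(np.argmin(model_proba(x_p[None, :])))
    return np.tile(x_p, (k, 1)), np.full(k, y_p, dtype=int)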

Weaknesses: My biggest concern with this work is that I believe the authors oversell their result. Their variance-based computation of singular vectors (and hence the data poisoning attack) relies heavily on having a good understanding of the variance, which is model dependent. It is easiest for logistic regression, and I suspect that is the reason the paper looked at logistic regression. As the bound before line 225 is not tight for many other learning tasks, I doubt that they would see such a large improvement elsewhere. That is why I feel the paper oversells its result by showing only the setting where a quantifiable improvement can be demonstrated. I suspect that the central idea needs more fleshing out to generalise it to other classes of learning problems. Post rebuttal: I thank the authors for clarifying this misunderstanding on my part. I think they have answered my question to a reasonable degree. My other concern is the audience. This is an attack paper, and I feel a much better venue for it would be USENIX, CCS, or Oakland; it would reach a much wider audience there. Post rebuttal: I still believe that USENIX, CCS, or Oakland would have given the paper a wider audience.

Correctness: Yes. Why do we see a dip in the epsilon estimated by the membership inference attack when the provable epsilon is in some range (depending on the dataset being used)? Post rebuttal: It is very strange that there is so much variance even when running the experiment a few times. My intuition was that as the provable epsilon tends to infinity, ClipBKD should tend to the optimal value. I do not see a reason for clipping when we are dealing with epsilon = \infty. Does epsilon = \infty mean that we do not do any clipping at all? Post rebuttal: The responses are fair enough.

Clarity: Yes, it is well written.

Relation to Prior Work: Yes, it is clearly discussed, except that they miss a result by Carlini et al. (USENIX).

Reproducibility: Yes

Additional Feedback: What does it mean for y_p to be the smallest-probability class on x_p? What is a probability class? Post rebuttal: thanks for the clarification. I have a question for the authors, which I think would help readers get a better understanding: why do we need to add multiple copies of the same poisoned point? Why not estimate the k smallest singular vectors and use different poisoned points corresponding to those vectors? Post rebuttal: thanks for the clarification.


Review 2

Summary and Contributions: This paper studies the privacy of DP-SGD in practice. A quantitative, empirical approach is adopted, and interesting findings and insights are discussed through the evaluation.

Strengths:
+ This paper works on an interesting problem.
+ A new poisoning attack is proposed.
+ An empirical method is proposed for privacy estimation, which could complement analytical DP guarantees.

Weaknesses:
- This paper mainly focuses on the privacy of SGD. It is unclear whether the approach generalizes to other cases.
- The evaluation is only performed on 3 small datasets and small models, so it is unclear whether it is applicable to practical datasets and models. One claim of this paper is to complement analytical estimation, so whether the proposed technique scales in practice is a concern.
- In addition, it is also unclear whether the proposed methods support more complicated DNN architectures such as recurrent neural networks and transformers.

Correctness: The claims and methods are feasible, and properly evaluated.

Clarity: The paper is well written and easy to follow.

Relation to Prior Work: Largely.

Reproducibility: Yes

Additional Feedback:


Review 3

Summary and Contributions: The paper proposes a data poisoning attack for measuring a lower bound on the epsilon of differential privacy. The attack is clipping-aware and thus gives a tighter lower bound than the usual data poisoning attacks. The paper then presents an experimental evaluation of the attack on common datasets, showing roughly a 10x improvement over previous attacks.
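
For concreteness, a sketch of the standard recipe for turning attack success counts into a statistically valid lower bound on epsilon, assuming the usual (eps, delta)-DP hypothesis-testing constraints and Clopper-Pearson confidence bounds (the paper's exact procedure may differ in its details):

import numpy as np
from scipy.stats import beta

def clopper_pearson_upper(successes, trials, alpha=0.05):
    # One-sided Clopper-Pearson upper confidence bound on a binomial proportion.
    if successes >= trials:
        return 1.0
    return beta.ppf(1.0 - alpha, successes + 1, trials - successes)

def epsilon_lower_bound(false_positives, false_negatives, trials, delta=1e-5, alpha=0.05):
    # (eps, delta)-DP implies FNR + e^eps * FPR >= 1 - delta (and symmetrically), so
    # upper-bounding the attack's error rates lower-bounds eps with high confidence.
    fpr_ub = clopper_pearson_upper(false_positives, trials, alpha)
    fnr_ub = clopper_pearson_upper(false_negatives, trials, alpha)
    eps = 0.0
    for num, den in [(1.0 - delta - fnr_ub, fpr_ub), (1.0 - delta - fpr_ub, fnr_ub)]:
        if num > 0 and den > 0:
            eps = max(eps, np.log(num / den))
    return eps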

Strengths: The tightness of the DP epsilon in DP-SGD has long been a question for practitioners and theory researchers. It is very nice to see an attack that provides a better lower bound.

Weaknesses: I don't see obvious weaknesses. Here are some general questions about the paper:
- Is it computationally feasible to try multiclass problems rather than only binary ones? If so, would we expect a better or worse lower bound?
- Since DP is a data-independent definition, do you have any comment on what kinds of datasets might be easier or harder to attack?
- Regarding the effect that decreasing the clipping norm decreases epsilon_LB for random initialization, could that simply be an effect of the design of the attack?

Correctness: The claims and methods seem correct to me.

Clarity: Yes. The paper is well written.

Relation to Prior Work: Yes.

Reproducibility: Yes

Additional Feedback: