NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 7360
Title: On Differentially Private Graph Sparsification and Applications

Reviewer 1

I have to preface my review by saying that I am not an expert on graphs and I am very unfamiliar with the results in this field. I will assess the paper based mostly on the privacy guarantees provided by the authors. The authors consider the problem of privately finding a sparse representation of a graph, that is, finding a graph with O(n) edges whose Laplacian is close to that of the original graph. Their algorithm is based on computing effective resistances privately and then running a non-private algorithm on these resistances to compute the sparsification. I find the motivation for the paper interesting. The proofs seem to be correct and the algorithms are well explained. My only problem is that the algorithm appears to be intractable: it requires solving an SDP, which could potentially be very hard to solve. The fact that the authors provide no experiments or implementation of their algorithm leads me to believe that the algorithm is in fact impractical. One minor clarification: why do we need to sketch the left singular space to obtain the effective resistances? Can't we do that from the first sketch R?
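
For reference, the non-private building block alluded to here is the standard effective-resistance sampling of Spielman and Srivastava; the following is background only, not necessarily the paper's exact private construction. For an edge e = (u, v) with weight w_e,
\[
R_e = (\chi_u - \chi_v)^\top L_G^{+} (\chi_u - \chi_v), \qquad p_e \propto w_e R_e,
\]
where \(L_G^{+}\) is the Moore-Penrose pseudoinverse of the graph Laplacian and \(\chi_u\) is the indicator vector of vertex u; sampled edges are reweighted (e.g. to \(w_e / p_e\)) so that the sparsified Laplacian is an unbiased estimate of \(L_G\).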

Reviewer 2

The paper addresses privacy in graphs. The authors consider edge privacy and provide a mechanism for sparsification of a graph that ensures differential privacy and approximates the spectrum of the original graph. The sparse graph is then used for cut queries. The authors use the measure of effective resistance in the probability for sampling an edge. It is still not clear to me that the sampling does not divulge the importance of an edge. Most of the technical discussion is brief and the proofs are in the supplementary material. I would have hoped that a more intuitive explanation would make up for the proofs being in the supplementary material, but that is not the case here. There is no motivation relating to practical use cases. Under what scenarios are private cut queries important? A discussion of the practical impact of privacy in the cases considered here would be helpful.
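
For background, the edge-level differential privacy referenced here is the standard guarantee: a randomized mechanism M is \((\varepsilon, \delta)\)-edge-DP if for all graphs G, G' on the same vertex set that differ in a single edge and all sets S of outputs,
\[
\Pr[M(G) \in S] \le e^{\varepsilon}\,\Pr[M(G') \in S] + \delta .
\]
Informally, the distribution of the released sparsifier changes only slightly when a single edge is added or removed, which is the formal sense in which the sampling should not divulge the importance of any one edge.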

Reviewer 3

The paper is not motivated well. I agree that protecting privacy is important and that differential privacy is a strong and interesting guarantee, but the paper doesn't do a good job of convincing me that private graph sparsification is a relevant topic. How does this paper relate to actual machine learning applications? The authors claim that their approach provides practical algorithms for computing certain properties with better accuracy than previously possible. That's good, but I don't see any actual application and no evaluation of their approach (except for the abstract list in Table 1). While I appreciate the formal approach the authors took in building up their 41 (!) definitions, theorems and lemmas (to be fair, most of those are repetitions of existing formalism), I don't think that NeurIPS is the right venue for the paper. I also found the description of their main algorithm (Algorithm 1) very cryptic and hard to follow: I'm getting confused by the 6 or 7 different G's. We get G as an input, then we construct \hat G by mixing G with the so far undefined K_n (from the text, I think it's a fully connected graph). We create some L that we only use to compute \tilde \tau_i's, which we use to form a diagonal matrix D (with some probabilities? Why?). We then construct another complete graph H with Gaussian edge weights, combine it with \hat G from earlier to get G'; then we need to solve this SDP-1 formula to get \bar G (explained somewhat in the text in the paragraph about G_int, whatever that is); somewhere in there we have a \tilde G' as well. Finally, we run some algorithm from [47] to get \tilde G. It's very much unclear to me how efficient all of these steps are; in particular, solving SDP-1 over all potential graphs \bar G could be very inefficient.

Update: I have read the authors' response and it sadly did not help me in understanding the paper and the relevance of its contributions. I apologize to the authors if my critically worded review or my lack of familiarity with their specific area of work has offended them -- merely citing the same lines I tried to decipher, however, does not make for a helpful response, even if done so angrily.
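
To make the generic, non-private building block of such pipelines concrete, here is a minimal sketch of effective-resistance sampling; the function names and toy graph are placeholders of my own, and this is not a reconstruction of Algorithm 1 or of SDP-1.

```python
import numpy as np

def laplacian(n, edges, weights):
    """Weighted graph Laplacian from an edge list."""
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

def effective_resistances(L, edges):
    """R_e = (chi_u - chi_v)^T L^+ (chi_u - chi_v), via the pseudoinverse."""
    Lp = np.linalg.pinv(L)
    return np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])

def sparsify(n, edges, weights, q, rng):
    """Non-private sampling: draw q edges with probability proportional to
    w_e * R_e and reweight them so the Laplacian estimate stays unbiased."""
    L = laplacian(n, edges, weights)
    R = effective_resistances(L, edges)
    p = weights * R
    p = p / p.sum()
    idx = rng.choice(len(edges), size=q, p=p)
    new_w = np.zeros(len(edges))
    for i in idx:
        new_w[i] += weights[i] / (q * p[i])
    return [(edges[i], new_w[i]) for i in range(len(edges)) if new_w[i] > 0]

# Toy usage: a 5-cycle plus one chord.
rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
weights = np.ones(len(edges))
print(sparsify(5, edges, weights, q=4, rng=rng))
```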

Reviewer 4

I am not an expert on graph sparsification and I did not check the proofs. I suggest weak accept for the following reasons:

1) The motivation of the problem is unclear. Although it is clear that the original graph sparsification guarantee is unsuitable for the DP model, the relaxed problem is quite weird, especially the spectral approximation condition. Why do they need to use the Laplacian of a complete graph, L_n, here? What is the connection between this relaxation and the original problem?

2) Still on the motivation of the problem: I am not sure whether it is necessary to study the problem under edge DP. I think the authors should give an example of why it is reasonable.

3) My reason for weak accept is that the approach in this paper is quite interesting. Since the main idea is to calculate the effective resistances privately, I think it could find other applications.

--------------------------- After Rebuttal ---------------------------

I changed to weak reject for the following reasons. Definition 2 is quite weird; the authors should clarify why their definition is reasonable. Why use L_n here? I think the authors need to tell a complete story, not just say something like "it can be used for some problems." Since they only consider edge DP, they need to argue that edge DP is reasonable in these problems. So I changed my opinion to weak reject.
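
For what it's worth, one illustrative shape such a relaxation could take (stated only to make the question concrete; this is not claimed to be the paper's Definition 2) is a spectral approximation with an additive slack term proportional to the complete-graph Laplacian,
\[
(1-\eta)\,L_G - \gamma\,L_{K_n} \;\preceq\; L_{\tilde G} \;\preceq\; (1+\eta)\,L_G + \gamma\,L_{K_n},
\]
the intuition being that a purely multiplicative guarantee cannot hold if the output must hide the presence or absence of any single edge, so some additive slack measured against L_{K_n} enters the definition.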