NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, Vancouver Convention Center
Paper ID: 2252
Title: Approximation Ratios of Graph Neural Networks for Combinatorial Problems

Reviewer 1


The article contains ten theorems, which constitute the backbone of the paper. In my opinion, the most important is Theorem 1, which links GNNs and distributed local algorithms; the later approximation results for GNNs build on earlier results obtained for distributed local algorithms. The key technical part of the proof is Lemma 13 in the appendix, which states that every distributed local algorithm can be simulated by a GNN. The novel GNN models proposed in the paper (VV_C-GNNs and their subclass CPNGNNs) can be characterized by the following high-level description: the proposed classes of GNNs "can send different messages to neighboring nodes by using port numbering, and this strengthens model capability".

The subsequent theorems provide a number of lower and upper approximation bounds. Example: Theorem 7 states that the optimal approximation ratio of CPNGNNs for the minimum vertex cover problem is 2. In other words, there exists a CPNGNN that computes a 2-approximation for the minimum vertex cover problem, and for any 1 ≤ α < 2, there exists no CPNGNN that computes an α-approximation for the minimum vertex cover problem.

==============

Update: I appreciate the extra information on the availability of examples provided in your response. In my opinion, the choice of REINFORCE is quite problematic, but the design of this experiment does not influence my assessment of the paper. I maintain my score.
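To make the port-numbering mechanism concrete, here is a minimal sketch of one synchronous round in which a node sends a different message to each neighbor depending on the ports of the connecting edge. The function and variable names (port_numbered_round, adj, ports, message_fn, update_fn) are my own illustration, not the paper's notation.

    # One round of port-numbered message passing (illustrative only).
    # adj[v][i]   : the neighbor attached to port i of node v
    # ports[v][i] : the port number at which v appears on that neighbor's side,
    #               i.e. if adj[v][i] == u and ports[v][i] == j, then adj[u][j] == v
    def port_numbered_round(adj, ports, state, message_fn, update_fn):
        new_state = {}
        for v in adj:
            received = []
            for i, u in enumerate(adj[v]):
                j = ports[v][i]
                # The message that v receives at its port i comes from u's port j and
                # may depend on u's state and on both port numbers -- unlike a
                # GCN/GraphSAGE-style layer, where u broadcasts one embedding to all
                # of its neighbors.
                received.append(message_fn(state[u], j, i))
            # received is ordered by the receiving port, so the update can treat
            # messages arriving at different ports differently.
            new_state[v] = update_fn(state[v], received)
        return new_state

Stacking a constant number of such rounds and reading out a per-node decision mirrors how a constant-round distributed local algorithm operates, which is the connection Lemma 13 formalizes.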

Reviewer 2


The paper proposes a new family of GNNs that are more powerful than existing ones (e.g., GCN, GAT, GraphSAGE). The notion of power is defined as the ability of model instances within a model class to solve combinatorial tasks over graphs. The key idea is to leverage a port numbering of the graph and to take advantage of node coloring features (which seem necessary in order to go beyond trivial approximation ratios). The authors then analyze the approximation ratio of the proposed CPNGNNs on several canonical combinatorial tasks. Overall, the paper is well written and technically sound, and it offers a unified view of several seminal GNN model classes. The work can be viewed as an extension of Xu et al., 2018 [1] (in which the graph isomorphism task was the only focus).

Major concerns:

1. None of the theorems are backed up by empirical results. I believe experiments are crucial to verify the correctness of the theorems and to demonstrate the usefulness of CPNGNNs in practice. The authors would probably benefit from having at least one of the following: (1) CPNGNNs lead to better approximation ratios than existing machine learning solvers (e.g., the methods of Bello et al., 2016 and Vinyals et al., 2015) on the synthetic graph combinatorial tasks analyzed in Section 6.2; (2) CPNGNNs lead to improved/competitive performance on real-world benchmarks compared to GIN, GCN, GraphSAGE, etc.

2. As the authors point out, the approximation ratios given by Theorem 4 and Theorem 8 are trivial. The improved ratio, i.e., (max_degree + 1) / 2, obtained by leveraging additional graph coloring features, also seems disappointingly loose. While the paper proves that no GNN can do better, it would be strengthened if the authors could provide more insight into these unsatisfactory ratios.

3. The definition of "solving" a given combinatorial task seems tricky (L121-122). If my understanding is correct, a GNN model class is considered able to solve a task as long as it contains a single model instance that solves it. However, there does not seem to be a straightforward mechanism for identifying the right model instance within the given class.

Minor questions:

* Would a k-coloring (k > 2) make any difference in the approximation ratio?

[1] Xu, Keyulu, et al. "How powerful are graph neural networks?" ICLR 2019.
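To spell out concern 3 in notation of my own (not the paper's): for a minimization problem, the definition at L121-122 appears to quantify as

    \exists f \in \mathcal{C} \;\; \forall G :\quad f(G) \text{ is feasible and } \; \mathrm{cost}\bigl(f(G)\bigr) \le \alpha \cdot \mathrm{OPT}(G),

i.e., the class \mathcal{C} "solves" the α-approximation as soon as some single instance f works for every input graph, while nothing in this definition indicates how such an f would be identified in practice (e.g., by training).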

Reviewer 3


Originality: The proposed classes are new, but they are minor modifications of existing ones.

Quality: Technically sound.

Clarity: It was easy to follow the argument, but the authors must be clearer in the statements of the theorems. For example, in Theorem 7, can a single choice of parameters achieve the 2-approximation, or do the parameters have to change depending on the input graph? (The two possible readings are spelled out below.)

Significance: As mentioned above, I am not sure the main results are very interesting. Why do we want to identify the best approximation ratio we can obtain with a DNN when we know that it will not be better than those of known approximation algorithms? Also, it is not clear how to learn parameters that achieve the claimed approximation ratio.

Line 8: As "GNN" is not a well-defined term, it does not make much sense to say "no GNN can perform better than ..."
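The two readings of Theorem 7 referred to above, in notation of my own (f_θ denotes a CPNGNN with parameters θ, and OPT(G) the size of a minimum vertex cover of G):

    \exists \theta \;\; \forall G :\; \bigl|f_{\theta}(G)\bigr| \le 2 \cdot \mathrm{OPT}(G)
    \quad \text{(one fixed parameter setting works for every input graph)}

    \forall G \;\; \exists \theta_G :\; \bigl|f_{\theta_G}(G)\bigr| \le 2 \cdot \mathrm{OPT}(G)
    \quad \text{(the parameters may depend on the graph, e.g., on its size or maximum degree)}

The theorem statement should make explicit which of the two is being claimed.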