NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 5207
Title: A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks

Reviewer 1


The paper studies relaxed verifiers and argues that all LP-based verifiers can be modeled within its convex-relaxation framework. It then shows that convex-relaxation-based verifiers (based on either primal or dual relaxations) cannot find tight bounds. This is demonstrated empirically by comparison with a PGD-based heuristic and a MIP-based exact method. The paper is well written and appears to cite related work properly. In terms of methods, it does not seem to propose anything significant; however, it does a nice job of relating many of the known verifiers, which is important because it then illustrates that the bounds they all produce are loose.
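For context on the comparison mentioned above: any concrete perturbation found by PGD upper-bounds the true minimum margin, while an LP relaxation lower-bounds it, so the gap between the two brackets the exact (MIP) value. Below is a minimal sketch of such a PGD-based bound, assuming a PyTorch classifier `model` and an l-infinity threat model; the function and parameter names are illustrative and are not the authors' code.

    import torch

    def pgd_margin_upper_bound(model, x, y_true, y_other, eps, steps=100, lr=0.01):
        # Minimize the logit margin f_{y_true}(x + delta) - f_{y_other}(x + delta)
        # over the l_inf ball ||delta||_inf <= eps via projected gradient descent.
        # The value found upper-bounds the true minimum margin, complementing the
        # lower bound produced by an LP relaxation.
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            logits = model(x + delta)
            margin = (logits[:, y_true] - logits[:, y_other]).sum()
            opt.zero_grad()
            margin.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)                   # project onto the l_inf ball
                delta.copy_((x + delta).clamp(0, 1) - x)  # keep inputs in the valid [0, 1] range
        with torch.no_grad():
            logits = model(x + delta)
            return logits[:, y_true] - logits[:, y_other]  # per-example upper bound on the margin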

Reviewer 2


This paper addresses a timely topic very well and gives us a better understanding of the quality of layer-wise convex relaxations. It is well written, convincing, and covers the topics it discusses in considerable detail. The discussions included in the paper are interesting and provide a lot of context, which makes the paper readable for someone with only a passing interest in the topic.

[Update after author feedback] In my opinion, the authors have sufficiently responded to the reviewers' concerns.

Reviewer 3


The paper targets the problem of robustness verification of neural networks, a very popular and important problem. One of the prominent ways to deal with it is to formulate verification as a nonlinear optimization problem and then relax its constraints to form a linear-program (LP) relaxation. Such relaxations are not guaranteed to return the optimal value, but they can be solved in polynomial time and provide bounds on the optimal solution (a standard instance of such a relaxation is sketched after this review).

The main contributions of the paper are as follows:
1. Proposing a unified framework that generalizes all known layer-wise LP relaxations, and showing how they relate to one another (i.e., which relaxation is tighter).
2. Showing that the performance of all methods within this framework is theoretically limited by the performance of the optimal layer-wise relaxation.
3. Computing the optimal relaxation for various ReLU networks, comparing it to existing relaxations, and showing that it does not significantly outperform them.

The authors conclude that there is a "performance barrier" for layer-wise LP relaxations. In general, the paper is well written, and the results seem significant and essential for future work.

Additional comments:
1. The figures are in low resolution. Please change them to high resolution.
2. Equation 3: it might be clearer to spread it over more rows. Explaining each constraint in this formulation would also help readability.
3. Figure 1: where are the proofs of all the relaxation relationships? Please discuss this or point to the appropriate place in the appendix. I am also missing a discussion of why all these relaxations fall into this framework.

Post-rebuttal: I have read the other reviews and the author response. I still feel this is a good submission that will help future work. The additional explanations the authors provided in their response regarding the relationships between the relaxations are important.
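For concreteness, one standard instance of the layer-wise LP relaxations discussed above is the "triangle" relaxation of a single ReLU unit y = max(0, x) with known pre-activation bounds l <= x <= u, where l < 0 < u. This is illustrative only and may not match the exact formulation in the paper's Equation 3.

    % "Triangle" LP relaxation of a ReLU y = max(0, x),
    % given pre-activation bounds l <= x <= u with l < 0 < u.
    \[
      y \ge 0, \qquad
      y \ge x, \qquad
      y \le \frac{u\,(x - l)}{u - l}.
    \]

Imposing these three linear constraints at every unstable ReLU (and fixing stable ones) turns the verification problem into an LP, which is why it is polynomial-time solvable but only yields a bound rather than the exact optimum.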