NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 8392
Title: Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness

Reviewer 1

Mistakes: References to related work are not included in the paper; there are no references at all.

Originality and Significance of paper: The main contribution of this paper is a theoretical perspective on the usefulness of various regularization methods, as opposed to designing specialized NN architectures, for training machine learning models that are robust to transformations and attacks without compromising natural accuracy on the clean test set.

Pros: The experimental results support the authors' hypothesis and theoretical conclusions. Without references, I cannot be fully confident in a fair judgement of the originality of this work.

Reviewer 2

Originality: The questions posed in the paper are interesting and novel. The solutions and conclusions could have a large impact on the ML community.

Quality: The theoretical results look solid, but I have some doubts about the experimental section: in Table 1, how can the rows be "nat" (natural) and the columns be "rob" (adversarial examples)? Perhaps one refers to the training setting and the other to how the test examples are generated?

Clarity: The paper presents many interesting findings, but they are not easy to follow. For example, the paragraph "Trade-off natural vs. adversarial accuracy" is quite obscure, and it took me some time to figure out what it was trying to convey. It would be nice to make the paragraph titles more consistent and explicit.

Significance: The findings can be influential for practical usage.

Reviewer 3

This paper tackles an interesting type of adversarial example, different from the classic Lp adversarial examples. In the previous literature, group-equivariant networks have not been extensively evaluated under adversarially chosen transformations, only under random ones. This paper provides an interesting angle of spatial invariance-inducing regularization and justifies it both theoretically and empirically. Empirically, the paper shows that regularized methods can achieve ∼20% relative adversarial error reduction compared to previously proposed augmentation-based methods (including adversarial training).

One strength of the paper is the theoretical analysis of spatial adversarial examples, which gives insight into why regularized augmentation is effective. In addition, the result that regularized training is just as effective as specialized architectures is insightful, as is the theorem showing that there is no trade-off in natural accuracy for the transformation-robust minimizer. A sketch of the kind of regularizer under discussion is given below.

One major drawback is that there is very little discussion of the empirical results regarding the effectiveness of regularization. Much of the empirical discussion concerns training runtime, which is not an issue in most cases. Most importantly, the experimental section is hard to follow, and there are no clear takeaways from Table 1 aside from the fact that regularization is better than standard augmentation plus random rotations. A more in-depth analysis would be helpful. It is also unclear why larger datasets such as ImageNet were not used in addition to SVHN and CIFAR-10; these are two highly specialized datasets, so the results may be biased.
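For readers unfamiliar with worst-case-transformation regularization, the following is a minimal PyTorch-style sketch of the general idea: find the spatial transformation (here, a rotation from a small grid) that most hurts the model, then add a consistency penalty between the clean and worst-case predictions. The angle grid, the KL-based penalty, and the weight `lam` are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def worst_case_rotation_loss(model, x, y, angles=(-30, -15, 0, 15, 30), lam=1.0):
    """Sketch of an invariance-inducing regularizer (illustrative, not the
    authors' exact method): cross-entropy on clean inputs plus a penalty on
    the divergence between clean and worst-case-rotated predictions."""
    logits_clean = model(x)
    ce_clean = F.cross_entropy(logits_clean, y)

    # Grid search over rotations for the one that maximizes the loss (worst case).
    worst_loss, worst_logits = None, None
    for angle in angles:
        logits_rot = model(TF.rotate(x, angle))
        loss_rot = F.cross_entropy(logits_rot, y)
        if worst_loss is None or loss_rot > worst_loss:
            worst_loss, worst_logits = loss_rot, logits_rot

    # Invariance penalty: KL divergence between worst-case and clean predictions.
    kl = F.kl_div(F.log_softmax(worst_logits, dim=1),
                  F.softmax(logits_clean, dim=1),
                  reduction="batchmean")
    return ce_clean + lam * kl
```

The point of such a sketch is that robustness comes from an extra loss term during training rather than from a specialized equivariant architecture, which is the comparison the paper's theory and Table 1 are concerned with.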