NeurIPS 2019
Sun, Dec 8th through Sat, Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 1057
Title: Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
After the rebuttal, all the reviewers agree that the paper is interesting and should be accepted. However, each of them has recommendations for how to improve the paper, and the authors are encouraged to make these changes. These include, among others: open-sourcing the code and the models, correcting the results for the black-box attacks, running experiments with additional gradient-free attacks, correcting the multiple-restarts settings, and adding further ablation studies.

Furthermore, during the discussions there was agreement that Theorem 1 is meaningless and should be removed: since the concept "coupled" is not defined formally, one can pick any target and choose the penalty R = target − average loss. Also, any uncoupled regularizer can be made coupled by adding an arbitrarily small coupled term, which would not affect the behavior of the method. Therefore, the authors are encouraged to remove the current (almost) formal theorem statement and instead explain, perhaps informally, how the coupling happens.
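The reviewers' counterargument can be sketched in symbols (the notation below is ours, for illustration only, and does not appear in the paper under review):

```latex
% Let \bar{L}(\theta) be the average training loss and t an arbitrary target value.
% First objection: for any chosen target t, the penalty
\[
  R(\theta) = t - \bar{L}(\theta)
\]
% is "coupled" to the loss by construction, so without a formal definition of
% coupling, the property imposes no real constraint.
% Second objection: any uncoupled regularizer R_u can be made coupled by adding
% an arbitrarily small coupled term,
\[
  R_\epsilon(\theta) = R_u(\theta) + \epsilon \bigl( t - \bar{L}(\theta) \bigr),
  \qquad \epsilon \to 0^{+},
\]
% which is formally coupled yet leaves the behavior of the method unchanged.
```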