NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 6368
Title: Theoretical evidence for adversarial robustness through randomization


The paper makes a contribution to the theoretical analysis of randomization as a defence against model evasion attacks. All reviewers view the submission weakly positively, but note a number of potential improvements. Despite these minor weaknesses, the contribution appears useful enough to warrant acceptance. For the final version, I would strongly urge the authors to consider the suggestions given by the reviewers. It is especially important to make absolutely clear the distinction between certified guarantees and the probabilistic guarantees obtained here, and to avoid using terminology that might conflate the two.