NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 8237
Title: Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness


Detecting inputs that fall outside the distribution of training examples, including adversarial inputs, is an important problem; the reviewers and the area chair agree that this paper makes a useful algorithmic contribution towards solving it. The argument that the reverse KL-divergence is the conceptually correct training objective, while the forward KL-divergence used in previous work is not, is significant. Training with the reverse KL-divergence is a simple and compelling idea that practitioners can try easily. For these reasons the paper is accepted so that the community can benefit from it quickly, even though the reviewers have identified ways in which both the writing and the empirical evaluation need improvement. The authors are encouraged to address these points in the final version.
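
As a pointer for practitioners, the following is a minimal sketch of what a reverse-KL training loss for a Dirichlet Prior Network could look like, assuming a PyTorch setup. The function name, the exp() parameterisation of the concentrations, and the smoothed one-hot target (target_scale, smoothing) are illustrative assumptions rather than the authors' exact formulation; only the direction of the KL term, KL(model || target) instead of KL(target || model), reflects the paper's proposal.

    # Illustrative sketch (not the authors' code): reverse-KL loss for a
    # Dirichlet Prior Network on in-distribution data.
    import torch
    from torch.distributions import Dirichlet, kl_divergence

    def reverse_kl_loss(logits, labels, num_classes, target_scale=100.0, smoothing=1.0):
        # Predicted Dirichlet concentrations, parameterised via exp (an assumption).
        alphas = logits.exp()
        # Sharp target Dirichlet concentrated on the true class (illustrative construction).
        one_hot = torch.nn.functional.one_hot(labels, num_classes).float()
        betas = smoothing + target_scale * one_hot
        # Reverse direction: KL(model || target), rather than the forward
        # KL(target || model) used in earlier Prior Network training.
        return kl_divergence(Dirichlet(alphas), Dirichlet(betas)).mean()

For out-of-distribution training inputs the target would instead be a flat Dirichlet (all concentrations equal to one), pushing the network towards a uniform predictive distribution.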