NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 2447
Title: Uncertainty-based Continual Learning with Adaptive Regularization


This paper proposes uncertainty-regularized continual learning (UCL) to address the challenge of catastrophic forgetting in neural networks. Specifically, the method improves over variational continual learning (VCL) by modifying the KL regularizer in the mean-field Gaussian prior/posterior setting. The approach is justified mainly by intuitive explanation rather than by theoretical or mathematical arguments. Experiments are performed on supervised continual learning benchmarks (split and permuted MNIST), where the method clearly outperforms previous baselines (VCL, SI, EWC, HAT). The reviewers include experts in continual learning. Some of them were concerned about the reliance on MNIST benchmarks, but with the additional supervised learning and RL experiments provided in the author feedback, the reviewers reached a consensus to accept this paper. Although UCL builds incrementally on VCL, two continual learning experts on the reviewing panel viewed this modification as a novel contribution. I would suggest the following revisions in the camera-ready version:
1. A clear discussion of the novelty of UCL over VCL.
2. A better justification of the UCL objective, preferably with a detailed derivation.
3. Adding the RL experiments to make the paper stronger.
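
As context for revision item 2, here is a minimal sketch of the standard VCL objective that UCL modifies, under a mean-field Gaussian posterior; the notation ($q_t$, $\mu_{t,i}$, $\sigma_{t,i}$) is mine and not taken from the submission:

$$\mathcal{L}_t(q_t) = \mathbb{E}_{q_t(\theta)}\big[\log p(\mathcal{D}_t \mid \theta)\big] - \mathrm{KL}\big(q_t(\theta)\,\|\,q_{t-1}(\theta)\big), \qquad q_t(\theta) = \prod_i \mathcal{N}\big(\theta_i;\ \mu_{t,i},\ \sigma_{t,i}^2\big).$$

As summarized above, UCL appears to keep this variational framework but replaces the plain KL term with a regularizer whose per-parameter strength adapts to the previously learned uncertainties $\sigma_{t-1,i}$; making the exact form and its derivation explicit is what revision item 2 asks for.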