Meta-Learning Representations for Continual Learning

Part of Advances in Neural Information Processing Systems 32 (NeurIPS 2019)


Authors

Khurram Javed, Martha White

Author Feedback

The reviews raised two major concerns: a lack of benchmarking on a complex dataset, and unclear writing. To address these issues, we (1) rewrote the experiments section with improved terminology to make the paper clearer, and (2) added Mini-ImageNet experiments to show that the proposed method scales to more complex datasets. Previously, we used the term 'pretraining' to refer to both a baseline and the meta-training stage; as the reviewers pointed out, this was confusing. We now use 'meta-training' for the latter, and we have changed 'evaluation' to 'meta-testing'.

Moreover, it was not clear whether the objective we introduced improved over a MAML-like objective that also learns representations. We added MAML-Rep as a baseline; the results show that our method, which minimizes interference in addition to maximizing fast adaptation, performs noticeably better.

We also added the pseudocode of the algorithms to the main paper, as requested by the reviewers, and we contrast our algorithm with MAML to highlight the difference between the two. We believe this makes the current version significantly clearer to anyone who already understands the MAML objective.
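
To make that contrast concrete, below is a minimal sketch (not the authors' released code) of the difference between a MAML-like meta-objective and one that additionally penalizes interference on previously seen data. The linear model, the task sampler, the learning rates, and the names inner_update, x_adapt, and x_old are illustrative assumptions; in particular, the sketch omits the paper's split into a meta-learned representation network and a prediction network updated in the inner loop.

```python
# Minimal sketch: second-order meta-update with an interference term.
# All shapes and hyper-parameters are illustrative assumptions.
import torch
import torch.nn.functional as F


def inner_update(w, b, x, y, lr=0.1):
    """One differentiable inner-loop SGD step on the prediction weights."""
    loss = F.mse_loss(x @ w + b, y)
    grad_w, grad_b = torch.autograd.grad(loss, (w, b), create_graph=True)
    return w - lr * grad_w, b - lr * grad_b


torch.manual_seed(0)
w = torch.zeros(5, 1, requires_grad=True)  # meta-learned prediction weights
b = torch.zeros(1, requires_grad=True)
meta_opt = torch.optim.SGD([w, b], lr=0.01)

for step in range(100):
    # Hypothetical task sampler: one random linear task to adapt to, and a
    # batch from another task standing in for previously seen data.
    x_adapt = torch.randn(8, 5)
    y_adapt = x_adapt @ torch.randn(5, 1)
    x_old = torch.randn(8, 5)
    y_old = x_old @ torch.randn(5, 1)

    w_fast, b_fast = inner_update(w, b, x_adapt, y_adapt)

    # MAML-like outer loss: only rewards fast adaptation on the current task.
    maml_loss = F.mse_loss(x_adapt @ w_fast + b_fast, y_adapt)

    # Interference-aware outer loss: additionally requires that the post-update
    # weights still fit the old data, penalizing updates that overwrite it.
    interference_loss = F.mse_loss(x_old @ w_fast + b_fast, y_old)
    meta_loss = maml_loss + interference_loss

    meta_opt.zero_grad()
    meta_loss.backward()  # second-order gradient flows through inner_update
    meta_opt.step()
```

The only difference between the two outer losses is the extra term evaluated on x_old after the inner update, which is what discourages solutions whose online updates overwrite previously acquired knowledge.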

We have also fixed various minor issues in the writing and included some missing related work (bengio2019meta, nagabandi19, al2017continuous) that we have discovered since our initial submission.

Finally, we thank the reviewers and the meta-reviewer for the feedback, which allowed us to improve the work in several respects.