NeurIPS 2020

Task-Robust Model-Agnostic Meta-Learning


Meta Review

This paper stirred a lot of discussion among the reviewers. The reviewers appreciated the new experiments on MiniImagenet. The primary outstanding reviewer concerns were:

(a) limited motivation for using a max loss, particularly since, if tasks vary in difficulty, the max loss will focus solely on the hardest task rather than taking all tasks into account;

(b) the finding that the proposed method does better on average-case loss than standard MAML despite optimizing worst-case loss;

(c) the discrepancy between the standard meta-learning experimental set-up and the one in the paper and author response (e.g. 8 training tasks and 4 test tasks).

I do not think any of these concerns is grounds for rejecting the paper. However, I strongly encourage the authors to revise the paper for the camera-ready to address them. In particular, to address (a), the authors are encouraged to discuss settings where the max loss may not have the desired effect. To address (b), the authors are encouraged to further analyze this phenomenon and discuss it in the paper. To address (c), the authors are encouraged to include results on the standard set-up in the supplemental material.
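Concern (a) turns on how the worst-case objective weights tasks. The following is a minimal numerical sketch of that point, using toy per-task quadratic losses that are purely illustrative (the names `theta` and `c`, and the loss form, are not from the paper): a subgradient of the max loss is driven entirely by the single hardest task, while the average loss mixes all tasks equally.

```python
import numpy as np

# Toy setup: one scalar meta-parameter theta, per-task losses
# loss_i(theta) = (theta - c_i)^2 with task-specific targets c_i.
# Task 3 (c = 2.0) is far from theta, i.e. much "harder".
c = np.array([0.0, 0.1, 0.2, 2.0])
theta = 0.5

losses = (theta - c) ** 2      # [0.25, 0.16, 0.09, 2.25]
grads = 2 * (theta - c)        # per-task gradients

# Average-loss objective (standard MAML-style): gradient averages
# over all tasks, so every task contributes.
avg_grad = grads.mean()        # -0.15

# Worst-case (max-loss) objective: a subgradient of max_i loss_i
# uses only the argmax task, so the update ignores the other tasks.
hardest = int(losses.argmax()) # task 3
max_grad = grads[hardest]      # -3.0

print(hardest, avg_grad, max_grad)
```

This is exactly the failure mode the reviewers raise: when one task is persistently hardest, the max-loss update direction contains no signal from the remaining tasks.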