NeurIPS 2020

Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters


Meta Review

This paper studies the convergence rate and computational complexity of ANIL (a variant of MAML) in the strongly convex and nonconvex inner-loop loss settings. The paper addresses an important problem (given the increasing interest in MAML-type methods) and empirically backs up its theoretical claims. There were some initial concerns, especially those raised by R4 (no insight into improving existing methods, and a discrepancy between the optimization methods used in the theoretical analysis and those in the empirical verification). However, the authors' response was very helpful, and in the end all reviewers agree that the submission is ready for publication. I strongly recommend that the authors incorporate R1's post-rebuttal comment in the final version of this work, as it could be an important yet easy-to-add component. I am referring to R1's request: "a simple theoretical example, such as a 1-dimensional quadratic objective, could elaborate on the tightness. Alternatively, we could see tightness from a simple empirical example. Nothing of this type was provided by the authors."
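To make R1's suggestion concrete, the kind of 1-dimensional quadratic example requested could look like the following minimal sketch. It is purely illustrative (the task parameters, step sizes, and function names are my own, not from the paper): each task i has loss f_i(w) = 0.5 * a_i * (w - c_i)^2, the inner loop takes one gradient step, and the resulting meta-gradient has a closed form that can be checked numerically.

```python
import random

# Hypothetical 1-D quadratic meta-learning sketch in the spirit of R1's
# request. Task i has loss f_i(w) = 0.5 * a_i * (w - c_i)**2; all
# constants below are illustrative, not taken from the paper.

def inner_adapt(w, a, c, alpha):
    """One inner-loop gradient step: w' = w - alpha * f_i'(w)."""
    return w - alpha * a * (w - c)

def meta_loss(w, tasks, alpha):
    """Average post-adaptation loss over tasks (the MAML-style objective)."""
    total = 0.0
    for a, c in tasks:
        w_adapted = inner_adapt(w, a, c, alpha)
        total += 0.5 * a * (w_adapted - c) ** 2
    return total / len(tasks)

def meta_grad_closed_form(w, tasks, alpha):
    """Since w' - c = (1 - alpha*a)*(w - c), the per-task meta-loss is
    0.5*a*(1 - alpha*a)**2 * (w - c)**2, whose derivative in w is
    a*(1 - alpha*a)**2 * (w - c). Average over tasks."""
    return sum(a * (1 - alpha * a) ** 2 * (w - c)
               for a, c in tasks) / len(tasks)

if __name__ == "__main__":
    random.seed(0)
    tasks = [(random.uniform(0.5, 2.0), random.uniform(-1.0, 1.0))
             for _ in range(5)]
    w, alpha, lr = 0.0, 0.1, 0.5
    for _ in range(200):
        w -= lr * meta_grad_closed_form(w, tasks, alpha)
    # Finite-difference check that the closed form matches the objective.
    eps = 1e-6
    fd = (meta_loss(w + eps, tasks, alpha)
          - meta_loss(w - eps, tasks, alpha)) / (2 * eps)
    print(abs(fd - meta_grad_closed_form(w, tasks, alpha)) < 1e-5)
```

Because every quantity here is available in closed form, one could compare the observed convergence rate of the outer loop against the paper's bounds on exactly this toy problem, which is the sort of tightness evidence R1 asked for.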