NeurIPS 2020

Escaping Saddle-Point Faster under Interpolation-like Conditions


Meta Review

This paper gives faster convergence guarantees for non-convex stochastic optimization under an assumption called the strong growth condition (SGC). The reviewers found the theoretical analysis to be novel and interesting. The paper could be improved by providing further evidence that the SGC is satisfied by objective functions of interest; in particular, it is unclear whether the SGC holds near saddle points and whether it is necessary for the guarantee.
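
For context, the SGC is usually stated as follows in the literature (this standard form is given here for reference, not quoted from the paper): with $F(x) = \mathbb{E}_{\xi}[f(x;\xi)]$ the population objective,

\[
  \mathbb{E}_{\xi}\,\big\|\nabla f(x;\xi)\big\|^{2} \;\le\; \rho\,\big\|\nabla F(x)\big\|^{2}
  \quad \text{for all } x, \ \text{for some } \rho \ge 1.
\]

Under this form, the right-hand side is zero at any stationary point, including a saddle point, so every stochastic gradient must vanish there as well; this is what makes the question of whether the SGC is plausible near saddle points substantive.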