Part of Advances in Neural Information Processing Systems 34 (NeurIPS 2021)
Chenyi Zhang, Tongyang Li
Escaping saddle points is a central research topic in nonconvex optimization. In this paper, we propose a simple gradient-based algorithm such that for a smooth function $f\colon\mathbb{R}^n\to\mathbb{R}$, it outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}(\log n/\epsilon^{1.75})$ iterations. Compared to the previous state-of-the-art algorithms by Jin et al. with $\tilde{O}(\log^4 n/\epsilon^{2})$ or $\tilde{O}(\log^6 n/\epsilon^{1.75})$ iterations, our algorithm is polynomially better in terms of $\log n$ and matches their complexities in terms of $1/\epsilon$. For the stochastic setting, our algorithm outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}(\log^2 n/\epsilon^{4})$ iterations. Technically, our main contribution is the idea of implementing a robust Hessian power method using only gradients, which can find negative curvature near saddle points and achieves a polynomial speedup in $\log n$ compared to perturbed gradient descent methods. Finally, we also perform numerical experiments that support our results.
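The key algorithmic idea named in the abstract, a Hessian power method implemented with gradient queries only, can be illustrated with a short sketch. The version below is a simplified illustration under standard assumptions, not the paper's exact procedure: it approximates Hessian-vector products by finite differences of gradients and runs power iteration on the shifted operator $\ell I - \nabla^2 f(x)$ to extract a negative-curvature direction near a saddle point. The function names, the shift parameter `ell` (an upper bound on the Hessian's eigenvalues), and the finite-difference step `r` are illustrative choices, not quantities fixed by the paper.

```python
import numpy as np

def hessian_vector_product(grad, x, v, r=1e-5):
    """Approximate the Hessian-vector product H(x) @ v with a finite
    difference of two gradient evaluations (no Hessian access needed)."""
    return (grad(x + r * v) - grad(x)) / r

def negative_curvature_direction(grad, x, dim, ell, num_iters=100, seed=0):
    """Estimate the most negative eigendirection of the Hessian at x by
    power iteration on the shifted operator ell * I - H(x), using only
    gradient queries. `ell` should upper-bound the Hessian's eigenvalues."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        hv = hessian_vector_product(grad, x, v)
        v = ell * v - hv              # one power-method step on ell*I - H
        v /= np.linalg.norm(v)
    # Rayleigh quotient v^T H v: a negative value certifies negative curvature.
    curvature = v @ hessian_vector_product(grad, x, v)
    return v, curvature

if __name__ == "__main__":
    # Toy saddle f(x) = x_0^2 - x_1^2 with a strict saddle at the origin.
    grad = lambda x: np.array([2.0 * x[0], -2.0 * x[1]])
    v, curvature = negative_curvature_direction(grad, np.array([0.1, 0.1]),
                                                dim=2, ell=2.0)
    print(v, curvature)   # direction close to +/- e_2, curvature close to -2
```

On this toy saddle the sketch recovers the escape direction along the second coordinate with Rayleigh quotient close to $-2$, the Hessian's most negative eigenvalue; the paper's contribution is a robust version of this idea that achieves the stated $\log n$ dependence.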