Penalized Langevin dynamics with vanishing penalty for smooth and log-concave targets

Part of Advances in Neural Information Processing Systems 33 (NeurIPS 2020)


Authors

Avetik Karagulyan, Arnak Dalalyan

Abstract

We study the problem of sampling from a probability distribution on $\mathbb R^p$ defined via a convex and smooth potential function. We first consider a continuous-time diffusion-type process, termed Penalized Langevin dynamics (PLD), whose drift is the negative gradient of the potential plus a linear penalty that vanishes as time goes to infinity. An upper bound on the Wasserstein-2 distance between the distribution of the PLD at time $t$ and the target is established. This upper bound highlights the influence of the speed of decay of the penalty on the accuracy of the approximation. As a consequence, in the low-temperature limit we infer a new result on the convergence of the penalized gradient flow for the optimization problem.
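For concreteness, the following is a minimal Python sketch of an Euler-Maruyama discretization consistent with the abstract's description of PLD. The SDE form $dX_t = -\big(\nabla f(X_t) + \gamma(t) X_t\big)\,dt + \sqrt{2}\,dW_t$, the penalty schedule $\gamma(t) = c/(1+t)$, the step size, and all function names are illustrative assumptions, not the paper's exact scheme or analysis.

```python
import numpy as np

def penalized_langevin_dynamics(grad_f, x0, n_steps=10_000, h=1e-3, c=1.0, rng=None):
    """Euler-Maruyama discretization of a penalized Langevin diffusion.

    Integrates dX_t = -(grad_f(X_t) + gamma(t) * X_t) dt + sqrt(2) dW_t,
    where gamma(t) = c / (1 + t) is one possible vanishing penalty weight
    (a hypothetical choice; the paper studies the continuous-time process
    and general decay rates).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    t = 0.0
    for _ in range(n_steps):
        gamma = c / (1.0 + t)                 # vanishing linear-penalty weight
        drift = -(grad_f(x) + gamma * x)      # penalized drift term
        x += h * drift + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
        t += h
    return x

# Usage: sample from N(mu, I), i.e. potential f(x) = ||x - mu||^2 / 2,
# whose gradient is x - mu (a smooth, strongly convex test case).
mu = np.array([1.0, -2.0])
samples = np.stack([
    penalized_langevin_dynamics(lambda x: x - mu, x0=np.zeros(2))
    for _ in range(200)
])
print(samples.mean(axis=0))  # should be close to mu
```

Driving $\gamma(t)$ to zero recovers the unpenalized Langevin drift in the long run, while a slowly decaying penalty pulls early iterates toward the origin; the paper's Wasserstein-2 bound quantifies how this decay rate trades off against sampling accuracy.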