Learning the optimal Tikhonov regularizer for inverse problems

Part of Advances in Neural Information Processing Systems 34 (NeurIPS 2021)

Authors

Giovanni S Alberti, Ernesto De Vito, Matti Lassas, Luca Ratti, Matteo Santacesaria

Abstract

In this work, we consider the linear inverse problem y = Ax + ε, where A: X → Y is a known linear operator between the separable Hilbert spaces X and Y, x is a random variable in X, and ε is a zero-mean random process in Y. This setting covers several inverse problems in imaging, including denoising, deblurring, and X-ray tomography. Within the classical framework of regularization, we focus on the case where the regularization functional is not given a priori, but learned from data. Our first result is a characterization of the optimal generalized Tikhonov regularizer with respect to the mean squared error. We find that it is completely independent of the forward operator A and depends only on the mean and covariance of x. Then, we consider the problem of learning the regularizer from a finite training set in two different frameworks: one supervised, based on samples of both x and y, and one unsupervised, based only on samples of x. In both cases, we prove generalization bounds, under some weak assumptions on the distribution of x and ε, including the case of sub-Gaussian variables. Our bounds hold in infinite-dimensional spaces, thereby showing that finer and finer discretizations do not make this learning problem harder. The results are validated through numerical simulations.
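As a concrete illustration (not the authors' code), the following minimal finite-dimensional sketch mimics the unsupervised setting described in the abstract: the mean and covariance of x are estimated from samples of x alone, and are then plugged into a generalized Tikhonov reconstruction. The dimensions, the forward operator A, the noise level, and the variable names (mu_hat, Sigma_hat) are hypothetical choices made for the example; the exact weighting of the data-fidelity and regularization terms in the paper may differ.

```python
# Illustrative sketch of a generalized Tikhonov reconstruction with a
# regularizer built from the empirical mean and covariance of x:
#   x_hat = argmin_x ||A x - y||^2 / sigma^2 + ||Sigma_hat^{-1/2} (x - mu_hat)||^2,
# which has the closed-form affine solution computed below.
import numpy as np

rng = np.random.default_rng(0)
n, m, n_train = 50, 30, 500  # hypothetical discretization sizes

# Hypothetical ground-truth statistics of x and a random forward operator A.
mu_true = rng.normal(size=n)
L = rng.normal(size=(n, n)) / np.sqrt(n)
Sigma_true = L @ L.T + 0.1 * np.eye(n)
A = rng.normal(size=(m, n)) / np.sqrt(n)

# Unsupervised training data: samples of x only, no measurements y needed.
X_train = rng.multivariate_normal(mu_true, Sigma_true, size=n_train)
mu_hat = X_train.mean(axis=0)
Sigma_hat = np.cov(X_train, rowvar=False) + 1e-6 * np.eye(n)  # small ridge for invertibility

# A new noisy measurement y = A x + eps.
x = rng.multivariate_normal(mu_true, Sigma_true)
sigma = 0.05
y = A @ x + sigma * rng.normal(size=m)

# Normal equations of the variational problem:
#   (A^T A / sigma^2 + Sigma_hat^{-1}) x_hat = A^T y / sigma^2 + Sigma_hat^{-1} mu_hat.
P = A.T @ A / sigma**2 + np.linalg.inv(Sigma_hat)
q = A.T @ y / sigma**2 + np.linalg.solve(Sigma_hat, mu_hat)
x_hat = np.linalg.solve(P, q)

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

Note how the learned quantities mu_hat and Sigma_hat enter only through the regularizer, consistent with the paper's result that the optimal generalized Tikhonov regularizer does not depend on the forward operator A.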