NeurIPS 2020

Constant-Expansion Suffices for Compressed Sensing with Generative Priors


Meta Review

In compressed sensing with a random multilayer ReLU neural network as the prior, this paper shows that constant expansivity of the network's weight matrices, as opposed to the "strong" expansivity (i.e., expansion by a logarithmic factor) assumed in existing studies, suffices for the existence of a gradient-descent-based algorithm with a theoretical recovery guarantee (Theorem 1.1). To prove this result, the paper introduces and utilizes the novel notion of pseudo-Lipschitzness (Definition 4.2). It furthermore obtains several generalizations of Theorem 1.1, stated informally in Theorem 1.2.

The three reviewers rated this paper well above the acceptance threshold. They agreed that the proof technique developed here will have wider applicability and that the paper is very clearly written. Two of them noted no significant weaknesses. I am therefore glad to recommend acceptance of this paper.
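For context, the following is a minimal sketch of the recovery problem behind Theorem 1.1, assuming the standard compressed-sensing-with-generative-priors formulation; the notation below is not restated in this review and is supplied only for illustration. Given measurements y = A G(z^*) with a random measurement matrix A and a d-layer random ReLU prior G, recovery proceeds by (approximately) solving

    \min_{z} \; \| A\,G(z) - y \|_2^2, \qquad G(z) = \mathrm{ReLU}(W_d \cdots \mathrm{ReLU}(W_1 z) \cdots),

where constant expansivity means each layer width satisfies n_i \ge C\, n_{i-1} for an absolute constant C, in contrast to the n_i \ge C\, n_{i-1} \log n_{i-1} expansion required by prior analyses.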