NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 7778
Title: Inverting Deep Generative Models, One Layer at a Time

This theoretical paper studies the invertibility of ReLU networks used as generative priors for denoising and compressive sensing. The invertibility of networks with random weights, one layer at a time, is investigated, and interesting stability bounds are provided. Note: the comments made by Reviewer #2 should be incorporated in the camera-ready version.
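The layer-wise inversion idea can be illustrated with a minimal sketch (an illustrative toy under assumed settings, not the paper's exact method): for a single expansive ReLU layer with random Gaussian weights, the coordinates where the output is positive expose exact linear equations in the input, and solving just those typically recovers it.

```python
import numpy as np

# Toy sketch (assumption: expansive layer, i.i.d. Gaussian weights).
# Forward pass: y = ReLU(W x). Where y_i > 0, we know W_i x = y_i exactly,
# so the active rows form a linear system that determines x when there
# are at least n of them (which holds w.h.p. for m sufficiently > n).
rng = np.random.default_rng(0)
n, m = 20, 100                     # input dim, output dim (m > n: expansive)
W = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = np.maximum(W @ x_true, 0.0)    # one ReLU layer

active = y > 0                     # coordinates the ReLU did not clip
# Invert the layer by least squares restricted to the active rows.
x_hat, *_ = np.linalg.lstsq(W[active], y[active], rcond=None)

print(np.allclose(x_hat, x_true))
```

Stacking such layer inversions from the output back to the latent code is the "one layer at a time" strategy the title refers to; the zero coordinates additionally supply inequality constraints (W_i x <= 0) that the paper's analysis can exploit.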