NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 238
Title: Invert to Learn to Invert

Reviewer 1


The paper proposes a new (original) method, which is significant in that it enables more complex inversion problems. The paper is generally well written (clarity), although it is quite dense for a reader not intimately familiar with the subject matter. The authors make an attempt to include information from other papers that a reader may not know, but this does not always succeed. In Eq. (6), it is not clear how this layer can be trivially inverted analytically unless the function G() is itself trivially invertible. For Section 2.3, it is not obvious how a layer consisting of ReLUs, or downsampling in general, is invertible. This section should be expanded considerably.

I have a concern about quality. The paper's results validate the method on the authors' chosen task. However, the paper's underlying tone, set both by the rather general title and by the abstract/conclusion, is that the method is generally applicable. It is not clear how specialized MRI really is, or to what classes of problems the method can be applied. To uphold the claims of generality, the authors should speak to this, or alternatively change the title to be more specific. I personally dislike papers with very general titles that then apply only to a very narrow application area.

In Section 4.4, it is claimed that the machine state occupies over 7% of available GPU RAM, and therefore it is not trainable with current hardware. However, there is no reason one could not page memory between GPU and CPU. Would that really make things infeasibly slow, e.g. if properly interleaved memory copies were used?
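The reviewer's question about Eq. (6) hinges on how a layer can be inverted analytically without inverting G(). A minimal sketch, assuming an additive-coupling form (the channel split and function names are illustrative, not the paper's exact layer): because G acts only on the half of the input that passes through unchanged, the layer has an exact analytic inverse even when G itself (e.g. a ReLU network) is not invertible.

```python
import numpy as np

def coupling_forward(x, G):
    # Split the input in half; only x2 is transformed, and the
    # transformation depends solely on the untouched half x1.
    x1, x2 = np.split(x, 2, axis=0)
    y1 = x1
    y2 = x2 + G(x1)          # additive coupling
    return np.concatenate([y1, y2], axis=0)

def coupling_inverse(y, G):
    # Exact analytic inverse: subtract the same G(y1).
    # G need not be invertible for this to work.
    y1, y2 = np.split(y, 2, axis=0)
    x1 = y1
    x2 = y2 - G(y1)
    return np.concatenate([x1, x2], axis=0)
```

Here G can be a ReLU, which is itself non-invertible, and the layer still inverts exactly; this is the standard coupling-layer argument, sketched here only to make the reviewer's point concrete.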

Reviewer 2


Originality: This paper is original; the invertible recurrent inference machine (i-RIM) is a new structure.

Quality: The proposed i-RIM outperforms all conventional methods. In particular, the i-RIM shows very strong results on the MRI reconstruction task; for instance, it sits on top of the fastMRI challenge leaderboard.

Clarity: The authors need to do a better job of writing and motivating the problem; the paper is not easy to read. The text in Figure 2 is very hard to read; please increase the font size.

Significance: This paper presents a new structure that shows very strong results on MRI reconstruction tasks.

Reviewer 3


This paper introduces a memory-efficient method to train a model for inverse inference. This is done by introducing RIMs and using them for an iterative inverse process: a series of inverse operations from the output all the way back to the input. Simply put, this is basically running the RIM to invert each of the layers in order to bound the memory. The other part of the paper introduces a set of invertible layers (e.g. orthogonal convolutions and residual blocks with spatial downsampling). These parts stand somewhat apart from the proposed method above; while this is certainly a good contribution, the coherence of the paper is blurred by its addition.

My overall impression is that the paper is a direct extension of the RIM paper plus a few useful tools for invertible neural networks, especially for vision. This makes the paper more of a swiss-army knife than a very solid piece of in-depth work. Although I believe this paper would be very useful to the computer vision community, its somewhat blurry focus makes me hesitant to recommend that the paper be accepted as-is.

--------------------------------------------------------------------------------------------------------

I thank the authors for their rebuttal, which addressed some of my concerns (especially regarding coherence). I am happy to recommend this paper for acceptance.
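The memory-bounding idea the review summarizes can be illustrated with a toy sketch (assuming a stack of additive-coupling layers; the shapes, nonlinearity, and function names are illustrative, not the paper's architecture). Because every layer is exactly invertible, intermediate activations can be recomputed from the output during the backward pass rather than stored, so training memory does not grow with network depth.

```python
import numpy as np

def forward(x, shifts):
    # A stack of invertible additive-coupling "layers"; only the
    # current activation is kept, never the whole stack.
    for s in shifts:
        x1, x2 = np.split(x, 2)
        x = np.concatenate([x1, x2 + np.tanh(s * x1)])
    return x

def reconstruct_input(y, shifts):
    # Walk back through the layers, inverting each one exactly;
    # intermediate activations are recovered on the fly.
    for s in reversed(shifts):
        y1, y2 = np.split(y, 2)
        y = np.concatenate([y1, y2 - np.tanh(s * y1)])
    return y
```

In a real training loop this reconstruction would run inside backpropagation, trading extra forward-style computation for constant activation memory; the sketch only demonstrates the exact layer-by-layer inversion that makes the trade possible.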