NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, Vancouver Convention Center
Reviewer 1
I really liked this paper. However, as someone not yet familiar with the attribution method, I found the algorithmic details quite sparse. I would like a description of how the attribution *algorithm* works, and not just the mathematics underpinning it. For example, from Eq. 2, it looks to me like attribution could be done by using regression between r(t) and the W*s convolution, to find G_{cxy}. Is that what the authors did? Was there any regularization to that regression? How were the regularization parameters chosen? These details matter for readers who might want to use this approach in their own work. It would also be nice for the authors to share their code; I'd like to mess around with these methods myself, and I suspect the same will be true of other readers.
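For concreteness, the regression-based attribution the reviewer describes might look roughly like the sketch below. This is purely illustrative: the array shapes, the ridge penalty, and the use of scikit-learn are assumptions made for the example, and whether the authors actually did anything like this (and with what regularization) is exactly what the review asks them to clarify.

```python
# Hypothetical sketch of the regression the reviewer describes for Eq. 2:
# regress the response r(t) onto the per-subunit convolution outputs (W * s)
# to recover attribution coefficients G_{cxy}.  Shapes, the ridge penalty,
# and the random stand-in data are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T, C, X, Y = 500, 8, 6, 6                  # time bins, channels, spatial grid
conv_out = rng.normal(size=(T, C, X, Y))   # stand-in for the W * s convolution
r = rng.normal(size=T)                     # stand-in for the response r(t)

features = conv_out.reshape(T, C * X * Y)
# alpha is the regularization parameter the reviewer asks about; it is fixed
# arbitrarily here, and in practice would be chosen by cross-validation.
reg = Ridge(alpha=1.0).fit(features, r)
G = reg.coef_.reshape(C, X, Y)             # candidate attribution map G_{cxy}
print(G.shape)
```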
Reviewer 2
Originality: The paper applies integrated gradient methods to DNNs used for predicting neural activity in the retina, identifying important subunits from which to build reduced models that lend themselves to interpretation. This is an original contribution with potential in the field. The paper does feel like a follow-up to ref. 2, and Figure 1 seems to be an almost complete reproduction of a figure in ref. 2.

Quality: The analysis presented in the paper is of uniformly high quality, but the paper is so strangely structured that it dampens my enthusiasm significantly, despite the potential of the method. The Introduction is extremely long, at the expense of a very superficial discussion of the individual parts of the results, which contain the really interesting new bits of information. Also, many interesting derivations, explanations, and discussions are in the supplement, for example the justification for why the method works or which information it provides. Likewise, the figure caption of Fig. 2 is almost a whole page long, containing information that should rather be discussed in the main text. As it is, the paper focuses almost exclusively on the discussion of the "new models" but fails to highlight its methodological contributions. In their reply, the authors promised to restructure the paper significantly; I am not certain, however, whether this can still be accomplished within the scope of the NeurIPS review process, or whether it will require such significant revisions that the paper will have to be submitted elsewhere for re-review.

Clarity: Together with the supplement, the methods are clear and most model explanations make sense.

Significance: Potentially large, but the paper is not suited for NeurIPS in its present form.
Reviewer 3
This manuscript aims to attack an interesting problem, namely how one could obtain mechanistic insights from a CNN model fit to neural responses. The writing is generally clear, although it would benefit from toning down some of the statements to more accurately reflect the real contributions. Overall, the manuscript could be an interesting contribution to the field. However, I am skeptical about various claims made in the paper. The main issues I have with this manuscript are three-fold:
1. The results are rather incremental relative to refs. [2] and [9,10].
2. It is unclear to me to what extent the insights claimed in Section 3 are novel, and to what extent they could not be obtained by taking a much simpler approach.
3. It is unclear to what extent the proposed algorithmic approach could apply to other systems, such as high-level visual cortex.

Major

* Questions/comments about the proposed algorithm: The first step is based on the attribution methods developed in refs. [9,10], with a simple extension to incorporate the temporal domain; this step is largely incremental (see the sketch after this review). The second step reduces dimensionality by exploiting stimulus invariance. How could this generalize to more complex stimuli where the stimulus invariance properties are unclear? Step 3 involves constructing a reduced model from "important" subunits. The number of selected subunits is not discussed and seems to be arbitrary. Practically, how should this number be set for a given problem? For example, unit 1 in Fig. 2B,C also seems to be important, but it is not selected.

* Results on OSR: For the OSR results, the study identified three channels: A2, A3, and A6. How do these channels map onto the physiologically defined cell types in the retina? Is there experimental evidence in any form that would support this model prediction? The asymmetry in the physiologically observed nonlinearities of the ON/OFF pathways [Chichilnisky & Kalmar, 2002; Zaghloul et al., 2003] is not accounted for in the model. Could it be that, by taking this asymmetry into account, one would need only two channels, rather than three, to account for the OSR?
Refs:
Chichilnisky, E. J., and Rachel S. Kalmar. "Functional asymmetries in ON and OFF ganglion cells of primate retina." Journal of Neuroscience 22.7 (2002): 2737-2747.
Zaghloul, Kareem A., Kwabena Boahen, and Jonathan B. Demb. "Different circuits for ON and OFF retinal ganglion cells cause different contrast sensitivities." Journal of Neuroscience 23.7 (2003): 2645-2654.

* Interpretation of the results: Regarding the results shown in Sections 3.2-3.4, to what extent do they provide novel insights? Could it be that, given the response characteristics of the ON/OFF channels and the observed phenomena, there is effectively only one mechanism that could account for the observed data? In that case, it seems that one would not need to go through the exercise of reducing a deep network model to obtain such results; one could directly constrain the input and output and then find the transformation that links the two. Relatedly, for the results presented regarding the four applications, how much of this could one already obtain using the standard RF + nonlinearity analysis?

* Applicability/complexity of the approach: Is it possible to apply the proposed approach to analyze deeper neural networks? The manuscript analyzed a 3-layer CNN. Intuitively, the complexity of the analysis might scale exponentially with the number of layers.
It is unclear whether the proposed dimension reduction approach would still be effective in these cases, e.g., a CNN for object recognition. Note that for an object recognition task, the simple invariance assumption used in the paper to achieve dimension reduction might also be violated. Can any claim be made about the complexity of the analysis for deeper networks?

* The special role of A2 & A3: Examining all the results together, it seems that most of the effects are readily accounted for by cell types A2 and A3. I feel that this should be discussed more carefully somewhere in the paper. Also, would it be possible to use a CNN model with 2 or 3 channels, rather than 8, to account for the results?

Other comments:
- I found the title to be a bit misleading, in particular "prediction". The manuscript does not directly address the connection of the phenomena to the function of prediction. The same concern applies to the abstract (lines 14-16).
- Line 34: how is "computational mechanism" defined?
- Line 36: what does "non-trivial response" mean?
- "Burst" is used in various places. However, the CNN model does not contain any spiking mechanism. What does "burst" mean in the context of the CNN model?
- Step 3 involves reconstructing a network with a single hidden layer. Would this be expressive enough to explain a variety of computational mechanisms? Consider, for example, the task of object recognition.
- Lines 62-64: it should be made clear that all the results in Fig. 1 have been previously shown in ref. [2].
- Lines 85-87: these simply rephrase the phenomena. Maybe cut them?
- Line 134: "far beyond the imperfect but dominant" could perhaps be toned down a bit. In principle, nothing is perfect, including RF analysis.

%%%%
After discussions and seeing the authors' rebuttal letter, I still have concerns about the applicability of the proposed method to other problems. However, the authors' response did help clear up a number of issues, so I am increasing my score from 5 to 6.
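To make the first step of the pipeline concrete for readers, below is a schematic sketch of integrated-gradients-style attribution applied to a spatiotemporal stimulus, with channels then ranked by their summed attribution. Everything here (the toy model, its analytic gradient, the zero baseline, the array shapes) is an assumption for illustration; it is not the authors' implementation.

```python
# Schematic (hypothetical) sketch: integrated-gradients attribution over a
# spatiotemporal stimulus, followed by ranking channels by total attribution.
# The toy model F(s) = tanh(<W, mean_t s>) and all shapes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, C, X, Y = 40, 8, 6, 6
W = 0.1 * rng.normal(size=(C, X, Y))       # stand-in readout weights

def model(s):                              # toy scalar "response" F(s)
    return np.tanh(np.tensordot(W, s.mean(axis=0), axes=3))

def grad_model(s):                         # analytic dF/ds for the toy model
    u = np.tensordot(W, s.mean(axis=0), axes=3)
    g = (1.0 - np.tanh(u) ** 2) * W / s.shape[0]
    return np.broadcast_to(g, s.shape)     # shape (T, C, X, Y)

def integrated_gradients(s, baseline=None, steps=50):
    if baseline is None:
        baseline = np.zeros_like(s)        # e.g. a blank/gray stimulus
    alphas = np.linspace(0.0, 1.0, steps)
    avg_grad = np.mean(
        [grad_model(baseline + a * (s - baseline)) for a in alphas], axis=0)
    return (s - baseline) * avg_grad       # attribution per (t, c, x, y)

stim = rng.normal(size=(T, C, X, Y))
attr = integrated_gradients(stim)
# completeness check: attributions approximately sum to F(s) - F(baseline)
print(attr.sum(), model(stim) - model(np.zeros_like(stim)))
channel_importance = np.abs(attr).sum(axis=(0, 2, 3))   # collapse time & space
print(np.argsort(channel_importance)[::-1])             # channels ranked by importance
```

The final line is the kind of channel ranking from which "important" subunits would then be selected; the reviewer's point is that the cutoff for that selection is not specified in the paper.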
Reviewer 4
In this paper, the authors present an approach to extract mechanistic insights from deep CNNs trained to recreate retinal responses to natural scenes. Specifically, the authors use a combination of model attribution methods and dimensionality reduction to uncover cell types in the CNN that explain nonlinear retinal responses to four classes of stimuli. The authors uncover mechanistic accounts of latency coding, motion reversal responses, and motion anticipation in the retina that fit with prior scientific findings and models. They also uncover a new model for omitted stimulus responses that explains retinal responses better than prior models and forms a testable hypothesis.

There are important limitations to the work presented here. The methods depend on artificial stimuli with spatial invariances, and it is unclear that they will extend to more complex stimuli. The authors state that perhaps other stimuli could be reduced using PCA or similar methods, but the paper would be more impactful if the authors demonstrated this or at least discussed possible future directions in more detail. Additionally, the authors mostly recreate known retinal phenomena and mechanisms. They do yield a new testable model of the OSR, but since this has not yet been tested, it is unknown whether their approach yielded new knowledge about the retina. Providing some experimental follow-up on the scientific hypothesis generated by this work would be extremely impactful. I think the paper should acknowledge and address these limitations/caveats more thoroughly; the work felt overstated at times.

Despite this, I think this paper is novel and significant. Moving the relatively new field of fitting neurons with deep networks beyond simply improving predictions to gaining scientific understanding is extremely important, and this paper is a solid start to these efforts. It is encouraging that the deep CNN was trained on natural scenes and not specifically on the four classes of stimuli. The paper is well-written and relatively easy to understand.

Minor comments:
- I disagree slightly with the emphasis that deep networks fit to neural responses must yield computational mechanisms matching intermediate computations in the brain to be useful. This is one particularly good avenue of research, but deep networks predictive of neural responses could also be used to find testable experimental phenomena (like those presented in this paper) through artificial experiments, or to better understand neural computations at a more abstract level.
- The paper is unclear about whether the authors find the same three cell types explaining the responses for each stimulus; this is mentioned in the Figure 6 caption but is not emphasized elsewhere.
- All figures should be larger for clarity.
- Figure 1B-E is not very helpful without more explanation for readers unfamiliar with these concepts.
- The colors in Figure 2E are hard to distinguish; maybe use different colors or a wider range of blue shades?

EDIT: I've read the author response. It was thorough but did not convince me to change my score.