| Paper ID | 2304 |
|---|---|
| Title | Efficient Neural Codes under Metabolic Constraints |
The authors derive analytic solutions for the optimal input-output function (in terms of mutual information) of a neuron given a family of constraints on the metabolic cost of firing and the neural noise. This framework provides a theoretical motivation for having an ON-OFF encoding motif (as with ON/OFF retinal ganglion cells) with non-overlapping response regions, rather than ON-ON or OFF-OFF.
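Schematically, the problem being solved, as I understand it (my own notation, not copied from the paper): mutual information is maximized over monotone tuning curves subject to a response-range constraint and a bound on the expected metabolic cost, under a given noise model:

```latex
% Schematic statement of the optimization problem (reviewer's notation):
% h is the monotone tuning curve, R the noisy response, K(.) the metabolic cost.
\max_{h\ \text{monotone}} \; I(S;R)
\quad \text{s.t.} \quad
0 \le h(s) \le r_{\max}, \qquad
\mathbb{E}\!\left[K(R)\right] \le K, \qquad
R \sim p\!\left(r \mid h(S)\right).
```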
Although the analytic solution is interesting, these theoretical results are perhaps only incremental compared to existing results. I would like to know more about what this model can tell us about real neurons. I doubt that biological neural systems behave with the precise optimality described in this paper, and I am left with the question: how optimal is optimal? For instance, if the estimate of the input distribution is slightly off, how suboptimal is the resulting encoding (see the sketch below)? The encoding model here assumes a fixed input distribution. Can this framework tell us anything about efficient coding for stimuli with time-varying statistics, such as contrast?
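To make the robustness question concrete, here is a minimal numerical sketch of the kind of analysis I have in mind. It is not the authors' framework: it assumes additive Gaussian output noise, a standard-normal input, and a Laughlin-style CDF encoder that equalizes a possibly mismatched (mean-shifted) distribution; all parameter choices are hypothetical.

```python
# Minimal sketch: how much mutual information is lost if the encoder
# "equalizes" the wrong input distribution?  Assumes additive Gaussian
# output noise and a CDF (histogram-equalizing) tuning curve.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
r_max, sigma, n = 1.0, 0.05, 200_000

def mi_nats(mu_assumed):
    """I(S;R) = H(R) - H(R|S) for r = h(s) + Gaussian noise.

    H(R) is estimated with a histogram; H(R|S) is the Gaussian
    noise entropy, 0.5 * log(2*pi*e*sigma^2).
    """
    s = rng.standard_normal(n)                       # true input: N(0, 1)
    r = r_max * norm.cdf(s, loc=mu_assumed) + sigma * rng.standard_normal(n)
    p, edges = np.histogram(r, bins=200, density=True)
    dx = edges[1] - edges[0]
    h_r = -np.sum(p[p > 0] * np.log(p[p > 0])) * dx  # differential entropy of R
    h_r_given_s = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
    return h_r - h_r_given_s

for mu in [0.0, 0.25, 0.5, 1.0]:                     # mismatch of the assumed mean
    print(f"assumed mean shift {mu:.2f}: I ~ {mi_nats(mu):.3f} nats")
```

Running such a sweep would quantify, at least in this toy setting, how gracefully the information rate degrades as the assumed input distribution drifts from the true one.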
2-Confident (read it all; understood it all reasonably well)
The authors extend previous work on optimal neural codes under metabolic constraints by formulating the optimal coding problem as a constrained optimization problem. They derive optimal monotonic tuning functions for single neurons, pairs of neurons, and populations.
The authors derive optimal monotonic tuning functions under metabolic constraints by reformulating the problem as a constrained optimization problem, which apparently can be solved in closed form. Overall, the paper is of very high quality, but it is too dense and covers too much material for an 8-page NIPS paper. Before making more specific comments, I would urge the authors to focus on the single-neuron and pair-of-neurons cases for this paper and leave the population analysis for a later publication or an extended journal version. The population part is only sketched in the paper, and I am not quite sure I understand the results and their implications (there is also no figure for this part, which does not help).

Specific comments:
- Equations 3 and 4: I am not sure I understand how equation 4 is a special case of equation 3.
- Figures 1-3: I find these figures very hard to parse. One reason is that the color code is missing (it is loosely explained in the text, but it would help to make it more explicit).
- In several places, the authors state that Poisson noise is a good approximation of real neurons, and I do not think this is true (among many others: Goris et al. 2014; Gur and Snodderly 2006; Stevenson 2016). I do not think it matters for the theoretical work presented here; I would just not make such strong statements about it. Apart from that, it is nice to see that it makes a difference for some of the analyses whether Gaussian or Poisson noise is assumed.
- The pairs-of-neurons case: I find this section VERY hard to understand. First, it is not obvious to me that the metabolic constraint should be 2K; why are there more resources if there are more neurons? An interesting scenario would be to study the case where one wants to represent a variable with a fixed overall metabolic cost and asks how best to do that (a few neurons with high metabolic expense, or many neurons with low metabolic cost). In any case, the description of the results on page 5 is very difficult to read and comprehend due to the dense notation (e.g. $\sup A^- < \inf A^+$). Unfortunately, Fig. 3 does not help much. What are the dashed lines going to the left? The authors should also compare these results in more depth to previous studies, e.g. Bethge et al. 2002 (their Fig. 4).
- Finally, as many have done before, the authors resort to the large-T, low-noise regime for their analysis. Several studies have shown that approximations based on Fisher information (of the mutual information, in this case) do not hold in the small-T, high-noise regime (e.g. Bethge et al. 2002; Xie 2002; Berens et al. 2012), and the authors may want to comment on how this impacts their results and discuss it in some more detail; the approximation in question is sketched below.
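For reference, the large-T approximation my last point refers to is, up to exact constants and the scaling of $J$ with $T$, the standard Fisher-information approximation of mutual information (Brunel & Nadal, 1998); it is precisely this step that is known to break down in the small-T, high-noise regime:

```latex
% Large-T / low-noise approximation of mutual information via the Fisher
% information J(s) (Brunel & Nadal, 1998); T is the coding time window.
I(S;R) \;\approx\; H(S) \;+\; \frac{1}{2}\int p(s)\,\log\!\frac{T\,J(s)}{2\pi e}\,ds
```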
2-Confident (read it all; understood it all reasonably well)
The paper derives optimal responses (maximizing the mutual information between the stimulus and the response) subject to limitations in (i) the range of the responses, (ii) their monotonic nature, and (iii) the energy budget. Moreover, the kind of noise in the responses (including Gaussian and Poisson noise) is taken into account. The paper is interesting (and I recommend acceptance) since the analytical results are illustrated in a number of sensible cases. Even though I did not check the proofs in the supplementary material, the approach here may be useful to provide stronger theoretical grounds for, and to extend, previous NIPS contributions (e.g. ref. 13 by Karklin & Simoncelli).
While the analysis of the results for the case of a single sensor is quite intuitive, the case of two neurons was harder to follow (for me). In particular, how is the "active region" concept related to the receptive field concept? In the end, both concepts involve weighting the input with some coefficient or slope. Nevertheless, it is not obvious how the considered ON-OFF schemes (in the large-population case) would involve spatial opponency (as in center-surround LGN cells or the ON-OFF filters in ref. 13). I would like a little more intuition there, but I understand it is hard to deal with the 9-page constraint.
2-Confident (read it all; understood it all reasonably well)
This paper analytically derives optimal neural codes given a limited set of constraints: monotonic responses, neural noise, and metabolic cost. Application of this simple framework to neuron pools of various sizes (one neuron, two neurons, and several neurons) produces optimal neural codes that agree with several experimental facts, notably the segregation of neurons belonging to a large neural population into an ON pool and an OFF pool.
On the form: the main paper contains the following minor typos:
- Line 42: "observed"
- Line 53: "or just the mean"
- Line 79: "understood"
- Figure 1 caption: "The process of determining"
- Line 164: "a significant advantage"
- Line 168: "compared to"
- Line 171: "this ... is" or "these ... are"
- Line 188: "may be one of the main drives" or "may be the main drive"
- Line 207: "overlap (see Figure ?)."
- Line 208: "We ask that..." or "We ask how..."
- Line 246: "Including these factors into the framework should allow..."

The meaning of the four colors (red to yellow) in Figures 1 and 2 is not obvious: it should be clarified explicitly, preferably in the figures and/or in the captions.
1-Less confident (might not have understood significant parts)
This work analyzes information transmission in neurons with monotonic tuning curves under resource and noise constraints. Given a small set of assumptions, it demonstrates analytically that splitting into ON and OFF channels maximizes information transmission in a neural population with limited resources.
This is a high-quality submission with potentially high impact on the field. It generalizes and extends previous results [19, 30]. Moreover, it provides a possible theoretical foundation for results obtained with statistical models of natural stimuli [13]. What I especially like about this paper is the generality of the obtained results. The authors do not limit the analysis to a single constraint but consider a number of them at the same time: dynamic range, average activity level, and a family of noise models. The results are independent of the stimulus distribution and make very few assumptions about the shape of the tuning curves (monotonicity). Despite the "density" of the paper, the presentation is very clear. There are a few questions/remarks I have, which I think could be addressed in the text in order to further improve the quality of the paper and the interpretability of the results:

1) Are the proposed coding constraints independent? There seems to be a relationship between the maximum attainable $r$ and the average $K(r)$. If so, can we treat them independently, as the authors do?

2) The results presented in the paper are independent of the stimulus distribution (see the sketch after this list). This is of course a strength; however, a number of previous studies (e.g. [13]; Ratliff et al., PNAS 2010) suggest a strong connection between the statistics of natural stimuli and the presence of "ON-OFF" channels. How should one interpret the present results in light of these previous statistical findings?

3) The above question becomes especially relevant with respect to Section 4, "Optimal coding of large neural population". You show that an optimal solution for a population of neurons is to divide it into two equally large ON and OFF subpopulations. It is known, however, that there is a larger proportion of OFF units in the retina, and this seems to be explained by the excess of dark spots in natural images (Ratliff et al., PNAS 2010), i.e. by the stimulus distribution. How would you explain that discrepancy?

4) Please provide a brief justification in the main text for the assumption of an even number of neurons ($N = 2k$).

5) Line 207 includes the remark "show in figure". While I understand that it is a leftover from the editing process, it would indeed be good to show in a figure how the ON and OFF tuning curves tile the input space (since, as I understand it, this is the solution).

6) I like the reference to opponent-channel codes in the auditory system. There have been published results suggesting that this sort of representation maximizes information transmission given the statistics of natural auditory input [Harper & McAlpine, Nature 2005; Mlynarski, PLoS Comp. Biol. 2015]. I would, however, urge the authors to be careful with this comparison: spatial tuning curves in the auditory brainstem and cortex are non-monotonic (they peak at extremely lateral positions, sometimes outside the so-called "physiological range"), and they strongly overlap at the midline (which you address in the text already).

Minor comments:
- It might be a good idea to add a small color legend to the figures explicitly saying that the yellow/red spectrum corresponds to different constraints.
- Figure 2: I would mention that it is for a uniform variable $u$, not $s$.
- Line 152: the supplementary material mentions that, due to the monotonicity of the tuning curve, one of the regions $A^+$ or $A^-$ has to be empty. It would be good to state that in the main text for clarity.
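Regarding remark 2 and the Figure 2 comment: the way I read the distribution-independence claim (a sketch of the standard argument, not taken verbatim from the paper) is via the change of variables $u = F(s)$, with $F$ the stimulus CDF; in the noiseless, range-limited case the infomax solution is then histogram equalization (Laughlin, 1981):

```latex
% Change of variables u = F(s): any monotone tuning curve over s becomes
% one over a uniform u in [0,1].  In the noiseless, range-limited case
% the infomax solution is histogram equalization (Laughlin, 1981):
h^{*}(s) \;=\; r_{\max}\,F(s)
\quad\Longleftrightarrow\quad
\tilde h^{*}(u) \;=\; r_{\max}\,u, \qquad u = F(s) \sim \mathrm{Unif}[0,1].
```

Under this reading, the stimulus distribution is absorbed into the reparameterization, which is presumably why Figure 2 is drawn over the uniform variable $u$; the tension with the natural-scene statistics results in remarks 2 and 3 then concerns what the change of variables hides.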
2-Confident (read it all; understood it all reasonably well)
This paper provides an analytic solution for optimal neural coding under metabolic constraints. Specifically, the authors show how limited metabolic resources and the finite range of the neural output affect the response function. The optimal response function is found by maximizing the mutual information between the neural response and the input stimuli under the mentioned constraints. The authors solve the problem for three cases: a single neuron, a pair of neurons, and a large neural population. Their solution is consistent with biological findings, especially in the pair-of-neurons case, in which the ON-OFF scheme turns out to be twice as efficient as the ON-ON scheme (see the note below).
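My back-of-envelope reading of the "twice as efficient" claim (a hedged paraphrase, not a derivation from the paper): with disjoint active regions, each neuron is driven on only about half of the stimuli, so at matched information the expected metabolic cost is roughly halved:

```latex
% Hedged paraphrase of the factor of two: disjoint active regions mean
% each neuron is active on ~half the stimuli, so at equal transmitted
% information the expected cost of the pair is roughly halved.
\mathbb{E}\big[K_{\text{ON-OFF}}\big] \;\approx\; \tfrac{1}{2}\,\mathbb{E}\big[K_{\text{ON-ON}}\big]
\qquad \text{at equal } I(S; R_{1}, R_{2}).
```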
As a general question, I would like the authors to discuss the possibility of introducing dynamic phenomena into their framework, such as spike-frequency adaptation. Since this phenomenon can last tens of seconds (see, for example, Pozzorini, Naud, Mensi, and Gerstner, Nature Neuroscience, 2013) and changes the response function through time, it would be interesting to take it into account. Compared to the presented scheme, I would expect a higher metabolic cost in the initial part of the response, which could prevent some neurons from firing, and an adaptation of the neuron's response as a function of information maximization and the metabolic constraint. As a more specific comment, Section 4 refers to a figure (line 207) that does not exist. Please correct this.
1-Less confident (might not have understood significant parts)