NeurIPS 2020

Learning efficient task-dependent representations with synaptic plasticity


Meta Review

This paper proposes a stochastic recurrent neural network that builds up its local information representation through a learning rule based on Boltzmann machines, weighted by a task-dependent objective function, forming a so-called tri-factor learning rule. The results show how the network's representations depend on the task (regression versus classification) in terms of the distribution of tuning curves, population-averaged activities, and dependence on stimulus priors. The paper then considers how noise is redistributed in the neural manifold so that task performance can be maintained.

Reviewers were overall positively predisposed towards this submission. Strengths include the coherent derivation of the proposed learning rule and the thorough analysis of its properties. The main point of contention in the reviews, raised by one reviewer, was whether the proposed learning rule is in fact novel. This reviewer noted that the proposed rule appeared very similar to the rules in Eq (7) of reference 34 and Eq (16) in a paper by Legenstein et al.

The AC recruited a fourth emergency reviewer, who also expressed a positive opinion of the paper. The AC also examined the discussed references to assess the degree of novelty of the proposed learning rule. This is not straightforward, since these rules are discussed in papers that are decades apart and that consider different settings and notation. From the author response, it is clear that the proposed rule is not the same as the rule in the Williams paper. It is also not clear to the AC that the proposed rule is the same as Eq (16) in Legenstein et al, which computes a covariance between the total synaptic input and the reward. Eq (7) in reference 34 appears to compute a scalar variance between the firing rate and the readout. Both equations rely on a low-pass-filtered time average, which the proposed equations do not.
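Schematically, and only as a rough sketch based on the descriptions above (the symbols below are illustrative placeholders, not taken from any of the papers under discussion), the distinction is between an instantaneous tri-factor update and a covariance-style update built on low-pass-filtered averages:

\[
\Delta w_{ij} \;\propto\; M(t)\, x_j(t)\, y_i(t)
\quad \text{(tri-factor: modulator $M$, presynaptic $x_j$, postsynaptic $y_i$)}
\]
\[
\Delta w_{ij} \;\propto\; \bigl(R(t) - \bar{R}(t)\bigr)\bigl(u_i(t) - \bar{u}_i(t)\bigr)
\quad \text{(covariance-style: $\bar{\,\cdot\,}$ denotes a low-pass-filtered average)}
\]

In the first form the third factor multiplies the instantaneous pre- and postsynaptic terms directly; in the second, the update depends on deviations of reward and synaptic drive from their filtered running averages, which is the low-pass-filtering dependence noted above.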
The AC's best judgement is that, while these equations may in some respects be equivalent to the proposed learning rule, any such equivalence is not trivial. Given that the main claimed contribution is the application of this learning rule to the proposed recurrent network, the AC is inclined to say that acceptance is warranted. The AC strongly requests that the authors revise their manuscript to discuss clearly and explicitly in what ways the proposed rule is similar to rules previously considered in the literature, and in what ways it differs. This will strengthen the paper, since other readers will likely share this confusion.