NeurIPS 2020

Characterizing emergent representations in a space of candidate learning rules for deep networks

Meta Review

This paper asks a set of well-motivated questions about how biological brains learn sensory representations through experience, and proposes a continuous two-dimensional space of candidate learning rules, parameterized by the strength of top-down feedback and of Hebbian learning. The authors first show that this space contains five prominent candidate learning algorithms as specific points, including Gradient Descent and Contrastive Hebbian learning. They then analyze the learning dynamics of these rules in a linear network with one hidden layer, trained to learn a hierarchy of concepts following prior work, and identify regions of the space where deep networks exhibit qualitative signatures of biological learning.

The work includes an interesting way to parameterize learning rules and tackles the well-motivated problem of characterizing which learning rule is implemented in the biological brain. However, the model used in the paper is overly simple, and the dataset and task are likewise simplistic and linearly separable; they could be replaced with a task more specific to human learning.