NeurIPS 2020

Attention-Gated Brain Propagation: How the brain can implement reward-based error backpropagation

Meta Review

The reviewers agreed that this paper provides an important contribution to the biological learning literature and that it should be accepted. However, the reviewers were also in agreement that the authors must do the following for the camera-ready version of the paper:

1) Make clearer that this is an extension of AGREL and does not involve any changes to the core AGREL algorithm, but rather a means of gating the attention signals sent back through multiple layers, which helps for deeper networks.

2) Be clear that this is a model for deep RL, but only for one-hot policies.

3) Be clear that this is not a complete solution to the question of deep credit assignment and does not address some outstanding questions, most notably the question of symmetric weights and the timing of feedback vs. feedforward signals.

4) Change the BrainProp name: it is too generic and tells readers very little about the algorithm.