__ Summary and Contributions__: Derives a biologically plausible version of SFA and runs it on standard moving-image datasets, obtaining results similar to previous batch-learning methods.

__ Strengths__: An elegant formulation of the problem and a nice solution.

__ Weaknesses__: The paper could perhaps benefit from stronger ties to potential neural mechanisms: where in the brain one might expect to find this, experimental predictions, etc. This seems to be the main point, after all.
Also, how do you propose that biology computes the nonlinear expansion of \mathbf{x}_t?
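For context, the expansion in question is typically the quadratic one used in classical SFA (all monomials of degree one and two of the input); a minimal sketch, assuming that standard choice (the function name `quadratic_expand` is illustrative, not from the paper):

```python
import numpy as np

def quadratic_expand(x):
    """Classical SFA nonlinear expansion: all degree-1 and degree-2
    monomials of the input vector (illustrative helper)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # all products x_i * x_j with i <= j, appended to the linear terms
    quad = [x[i] * x[j] for i in range(n) for j in range(i, n)]
    return np.concatenate([x, quad])

# an n-dimensional input expands to n + n(n+1)/2 dimensions
z = quadratic_expand(np.array([1.0, 2.0]))  # -> [1., 2., 1., 2., 4.]
```

The reviewer's point stands: it is unclear what biological circuit would compute such product terms, which is worth addressing explicitly.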

__ Correctness__: yes as far as I can tell

__ Clarity__: Yes.

__ Relation to Prior Work__: The development in Sections 2.1 and 2.2 seems to come straight out of Wiskott's papers, but you wouldn't really know that from reading the text. It's cited by number, but I think acknowledging this more directly in the text would be helpful.

__ Reproducibility__: Yes

__ Additional Feedback__: I find the paper interesting, but somehow not super interesting. It has never seemed an issue to me that you could have a biologically plausible version of SFA; Foldiak's original model was formulated that way, after all. So I'm not sure what this development adds in that context. I'm sure there are advantages to this approach, but it would be good to bring them out more.

__ Summary and Contributions__: The paper develops a slow feature analysis (SFA) method that is biologically plausible, in that it operates online and has weight updates that use only information available at the presynaptic and postsynaptic neurons. The method is tested on several datasets, and it is shown that the cost decreases with training and that interesting receptive fields emerge.

__ Strengths__: - I agree with the authors that a biologically plausible implementation of SFA is an important development.
- The development of the method from standard SFA was systematic and rigorous.

__ Weaknesses__: - Some limitations were mentioned in the discussion. Of these, the method's reliance on linear neurons seems the most serious. I appreciate that the authors defined their sense of "biologically plausible", but linear neurons do not sit well with this term.

__ Correctness__: Yes, it seems to be correct.

__ Clarity__: The writing is clear and well organized. The math was clearly and efficiently communicated.

__ Relation to Prior Work__: Yes.

__ Reproducibility__: Yes

__ Additional Feedback__: - I had regrettably not checked many of the equations in the first round, but I have checked most of them now.
- Thank you for addressing my minor concern about the assumption of full rank in Line 108. I look forward to the updated explanation in the final version of the paper.

__ Summary and Contributions__: This paper presents a so-called biologically plausible neural network for slow feature analysis (SFA).
Biological plausibility here means that network learning is online and based on local synaptic learning rules. These online and locality requirements might also lead to low computational overhead. While Foldiak, Wiskott, and many others have explored online local learning for SFA over the last thirty years, this paper attempts to relate SFA to a normative theory through a multidimensional scaling (MDS) objective.

__ Strengths__: While Foldiak, Wiskott, and others have explored local online learning for SFA over the last thirty years, this paper successfully relates it to the normative MDS objective. Similar prior work on biologically plausible implementations has been one-dimensional; this work extends it to multiple dimensions.

__ Weaknesses__: The conceptual and theoretical innovations are limited, which is not surprising given that the problem has been worked on for the last thirty years, most notably by Wiskott's lab. The claim of biological plausibility seems weak, limited to local learning rules and online learning.

__ Correctness__: The basic idea and the logic seem to be sound, though I have not scrutinized all the equations.

__ Clarity__: Reasonably clear and well written.

__ Relation to Prior Work__: This work seems to be a marriage of Wiskott's line of research and Pehlevan and Chklovskii's recent approach.

__ Reproducibility__: Yes

__ Additional Feedback__: I read all the reviews and the rebuttal, and I can now better see the potential contribution relative to existing work. I keep my score: it is an acceptable paper.

__ Summary and Contributions__: While SFA is a classical and widely used unsupervised learning algorithm, a biologically plausible neural algorithm for SFA has still been missing. This paper presents a solution, and the numerical results support the proposed method well. Overall, it is a very useful piece of work.

__ Strengths__: 1. The presented method is very useful to the computational neuroscience community.
2. The paper presents a reformulation of SFA that leads to a biologically plausible SFA algorithm.
3. The numerical results seem solid and support the proposed theory well.

__ Weaknesses__: There are some minor typos and notation issues, as in most NeurIPS drafts. If accepted for publication, the authors should definitely make an effort to polish the paper.

__ Correctness__: I checked most of the derivations and I think they are correct, apart from a few minor typos: in the equation above equation (7), V and V^T are missing, and in the optimization problem above equation (8), a transpose is missing. Things like these should be fixed but do not bother me much.

__ Clarity__: Section 2 is too long; some of the unnecessary review material should be moved to the appendix. Section 3 should be expanded to discuss the major results more thoroughly.

__ Relation to Prior Work__: A piece of work like this has been missing for a long time; I think this paper fills the gap.

__ Reproducibility__: Yes

__ Additional Feedback__: