NeurIPS 2020

Woodbury Transformations for Deep Generative Flows

Meta Review

The paper proposes to parameterize a linear transformation as a low-rank update to an identity matrix, and then use the Woodbury matrix identity to efficiently compute its inverse and the Sylvester determinant identity to efficiently compute its determinant. Some reviewers expressed concerns regarding novelty, which I share. Indeed, the proposed linear flows are a fairly straightforward application of well-known matrix-algebra techniques for inverting and calculating the determinant of low-rank updates. In fact, TensorFlow Probability already uses the Woodbury matrix identity and the Sylvester determinant identity in its `Affine` bijector. For that reason, I doubt this paper contains much new information for normalizing-flow experts, although it may be useful to a broader machine-learning audience.

Having said that, this paper is well written and well executed, contains some novel extensions to the basic idea, and is likely the first published version of Woodbury flows with careful experimental comparisons to alternatives. For that reason, and given that the reviews are mostly positive, I have decided to recommend acceptance of the paper.

Nonetheless, the reviewers have raised a few concerns that I would encourage the authors to take to heart when preparing the camera-ready version. In particular, given that the differences between methods are very small, error bars are necessary to judge significance. Also, even though we are not used to citing software, I would encourage the authors to at least mention that similar ideas have been used by TensorFlow Probability before.
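For readers less familiar with these identities, the construction described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the dimensions `d`, `r` and the initialization scale are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 100, 4  # data dimension, rank of the update (illustrative sizes)

# Parameterize a linear transform as a rank-r update to the identity:
# W = I_d + U @ V, with U of shape (d, r) and V of shape (r, d).
U = 0.1 * rng.standard_normal((d, r))
V = 0.1 * rng.standard_normal((r, d))
W = np.eye(d) + U @ V

# Woodbury matrix identity:
#   W^{-1} = I_d - U (I_r + V U)^{-1} V
# so only a small r x r system is solved, not a d x d one.
C = np.eye(r) + V @ U                         # r x r "capacitance" matrix
W_inv = np.eye(d) - U @ np.linalg.solve(C, V)

# Sylvester determinant identity:
#   det(I_d + U V) = det(I_r + V U)
# giving the log-determinant needed for the change of variables cheaply.
sign, logdet = np.linalg.slogdet(C)

# Sanity checks against the direct d x d computations.
assert np.allclose(W @ W_inv, np.eye(d))
assert np.isclose(logdet, np.linalg.slogdet(W)[1])
```

Both identities reduce the cost of the inverse and determinant from O(d^3) to operations on r x r matrices, which is the efficiency argument the paper relies on.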