NeurIPS 2020

Robust Disentanglement of a Few Factors at a Time using rPU-VAE


Meta Review

This paper proposes a new method for learning disentangled representations with VAEs. The main contribution is that the proposed method is robust, in the sense that training reliably converges; this addresses a shortcoming of existing methods, where the quality of the learned representation is often highly variable. The proposed approach involves a recurrent process that trains beta-TCVAE models to disentangle a subset of dimensions at a time. The discovered dimensions are then removed from the dataset through an automatic process, and another set of beta-TCVAE models is trained to discover some of the remaining disentangled dimensions. This process is repeated until all disentangled dimensions are discovered. Each iteration uses PBT for hyperparameter tuning, combined with the recently proposed unsupervised UDR score for model evaluation.

This submission was positively received by reviewers, who found the experimental results convincing and noted that the submission indeed addresses a key issue with existing approaches by improving robustness. Several reviewers did note issues with clarity and presentation. The authors provided a detailed response, and reviewers' opinion of the paper improved during discussion (please see updated reviews).

The AC will recommend this submission for acceptance with a couple of caveats. Having looked at the submission, the AC is inclined to agree with reviewers that this submission in its current form is a little rough around the edges. Please work carefully to implement all changes that were discussed in the response, address all remaining minor comments by reviewers, and make sure that the camera-ready version of this paper is impeccably presented.

The AC would also like to gently correct the authors on a couple of points of communication (which will help them in future borderline cases). First, please break up your author response into paragraphs; making key points stand out is much more effective than making as many points as possible. Second, please do not use optional comments to chairs to further argue your case. The AC carefully looks at every paper, author response, and review, and will in many cases communicate with reviewers to clarify their reviews prior to the response phase. Reserve your comments for exceptional cases, e.g., to request an additional review or to point out a potential form of reviewer misconduct. Finally, please be consistent in referring to all reviewers as he/she or they; inadvertent inconsistencies will be noticed and can be easily misconstrued.