NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 1331
Title: Disentangled behavioural representations

Reviewer 1

The proposed architecture consists of an encoder RNN that maps input sequences to low-dimensional representations and a decoder network that maps this latent representation to the weights of a second RNN (similar to a hypernetwork). This second RNN is used to predict labels in the input sequence, and the whole network is trained end-to-end (a code sketch follows below). This sort of RNN-based autoencoder + hypernetwork combination is novel and interesting. Results on a synthetic time-series dataset and the BD dataset show that the proposed architecture is indeed able to disentangle and capture factors of the data-generating distribution.

Overall, however, the experiments are quite limited. It would be great to compare exhaustively against 1) an RNN-based autoencoder without the hypernetwork, 2) ablations with and without the disentanglement and separation losses, and 3) within the disentanglement loss, the relative contribution of MMD vs. KL, with an analysis of the kinds of differences in behavior each induces. This would help tease apart the significance of the proposed architecture.

Also, disentanglement_lib (https://github.com/google-research/disentanglement_lib) is a recent large-scale benchmark that provides several standard metrics for evaluating disentangled representations. Evaluating on it and comparing against prior work would make for a significantly stronger contribution. As things currently stand, the proposed architecture is interesting but has not been evaluated against any prior work.
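For concreteness, here is a minimal PyTorch sketch of the kind of encoder + hypernetwork design described above. The GRU encoder, layer sizes, and the weight layout of the generated prediction RNN are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class HyperRNNAutoencoder(nn.Module):
    """Sketch: encoder RNN -> low-dim latent -> decoder that emits the
    weights of a small per-subject prediction RNN (hypernetwork style)."""

    def __init__(self, in_dim, enc_hidden=64, latent_dim=2, pred_hidden=8, out_dim=2):
        super().__init__()
        self.encoder = nn.GRU(in_dim, enc_hidden, batch_first=True)
        self.to_latent = nn.Linear(enc_hidden, latent_dim)
        self.pred_hidden, self.in_dim, self.out_dim = pred_hidden, in_dim, out_dim
        # Decoder maps the latent to every parameter of the prediction RNN:
        # W_ih, W_hh, b_h for the recurrence, plus W_out, b_out for readout.
        n_params = (pred_hidden * (in_dim + pred_hidden + 1)
                    + out_dim * (pred_hidden + 1))
        self.decoder = nn.Linear(latent_dim, n_params)

    def forward(self, x):                       # x: (batch, T, in_dim)
        _, h_last = self.encoder(x)
        z = self.to_latent(h_last[-1])          # (batch, latent_dim)
        theta = self.decoder(z)                 # generated per-subject weights
        H, D, O = self.pred_hidden, self.in_dim, self.out_dim
        i = 0
        def take(n):                            # slice off the next n params
            nonlocal i
            out = theta[:, i:i + n]
            i += n
            return out
        W_ih = take(H * D).view(-1, H, D)
        W_hh = take(H * H).view(-1, H, H)
        b_h = take(H)
        W_o = take(O * H).view(-1, O, H)
        b_o = take(O)
        h = x.new_zeros(x.size(0), H)
        logits = []
        for t in range(x.size(1)):              # run the generated RNN
            h = torch.tanh(torch.einsum('bhd,bd->bh', W_ih, x[:, t]) +
                           torch.einsum('bhk,bk->bh', W_hh, h) + b_h)
            logits.append(torch.einsum('boh,bh->bo', W_o, h) + b_o)
        return torch.stack(logits, dim=1), z    # predictions + latent code
```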

Reviewer 2

After rebuttal: I focused my comments and attention on the utility of this method for providing behavioral representations, whereas R3 and the authors drew my attention to the novelty of the separation loss and to their specific intent to primarily model _individual decision-making processes_, not behavior more generally. The text could use clarification on this point in several places. I still think that existing PGM-based approaches with subject-level random variables are a fair baseline to compare against, since they do create latent embeddings of behavior on a per-subject basis (e.g., per-subject transition matrices in HDP-HMM models of behavior), but I want to recognize the novelty of the architecture and approach.

--------------------------------------------------

Originality: There is a large body of work on machine-learning modeling of time-series behavior with interpretable latent spaces. I find the originality low in the context of that prior work, including Emily Fox's work on speaker diarization and behavioral modeling, Matthew Johnson's recent work on structured-latent-space variational autoencoders, and others. Given that the input to the model is a bag of low-dimensional sequences, probabilistic graphical models are an appropriate baseline here.

Quality: The writing, construction of the model, and evaluation of the model are sound. However, the model is not compared to any alternatives. It is difficult for me to place an absolute significance on the work if it is not compared to even a naive or strawman baseline. For instance, if you chunked the input sequences, did PCA on all chunks, and averaged the embeddings, how close would that get you to the properties of the model being proposed? Is an RNN even necessary? If the ultimate goal (or at least a stringent evaluation) of this model is to tell apart different treatment groups, then set up a classification task; a sketch of such a baseline follows below.

Clarity: The work is clearly written, and the figures are well constructed and present information clearly.

Significance: The work is of low to moderate significance, given the context it sits in. Directly comparing to alternative ways of capturing an interpretable latent space would help lend significance, if this method were indeed better, faster, or easier to use than the alternatives. The model is clearly capturing task-related information in Figure 4, but I have no idea whether an RNN is required to do this, or whether a much simpler method could do the same. Without this backstop, I don't know if this is an interesting method.
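A minimal sketch of the strawman baseline suggested above: chunk the sequences, run PCA over all chunks, average each subject's chunk embeddings, and use the result for group classification. The chunk length `w`, component count `k`, and the `sequences`/`labels` inputs are hypothetical choices, not anything from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def chunk(seq, w=10):
    """Split one (T, d) sequence into non-overlapping, flattened windows."""
    n = len(seq) // w
    return seq[:n * w].reshape(n, -1)

def pca_chunk_embed(sequences, w=10, k=2):
    """Fit PCA on all chunks pooled across subjects, then embed each
    subject as the mean of its projected chunks."""
    chunks = np.vstack([chunk(s, w) for s in sequences])
    pca = PCA(n_components=k).fit(chunks)
    return np.stack([pca.transform(chunk(s, w)).mean(axis=0) for s in sequences])

def group_classification_score(sequences, labels, w=10, k=2):
    """Cross-validated accuracy for telling treatment groups apart
    from the averaged PCA embeddings."""
    Z = pca_chunk_embed(sequences, w, k)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, Z, labels, cv=5).mean()
```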

Reviewer 3

After rebuttal: Having read the rebuttal and the other reviewers' comments, I still think this is a strong paper. I don't consider the novel contribution to be the disentanglement of representations, but rather the separation loss and the interpretability, for which they have done appropriate ablations. A more thorough set of ablations on all of the different components would have been nice to see, but is unnecessary in my mind, since they don't claim their main contribution to be hypernets or disentangling. Furthermore, they've validated (to a limited extent) on a real-world dataset, which was key to my high rating. That said, they could have gone much further in this respect, including addressing my point about needing to know the exact number of latent dimensions and how not knowing it would affect their method. I encourage the authors to consider adding this if accepted. R2 does have a point about comparing to some non-RNN baselines, which would have made the paper stronger. The Dezfouli et al. 2018 paper used only supervised learning and did not consider other methods suggested by R2, such as HMMs (although it did not have the same goals as the current paper).

========================================

Before rebuttal: This paper introduces a new method for training a deep encoder-recurrent neural network to map the behavior of subjects into a low-dimensional, interpretable latent space. Behavioral sequences are encoded by an RNN into a latent space, which is then decoded and used as the weights (hypernet style) of a second RNN, which is trained to predict the behavioral sequences of each subject. The training loss includes a reconstruction loss, a disentanglement loss, and a separation loss. The separation loss is a novel contribution and encourages the effects of the latents on behavior to be separable.

The analyses and figures are extremely well done and clear. They show that on a synthetic dataset, generated by a Q-learning agent with two parameters (a sketch of such an agent appears at the end of this review), this method can recover latents that correspond to these two parameters in an interpretable way; without the separation loss, this does not occur. They further tested on a real-world dataset consisting of behavioral data from subjects with depression, subjects with bipolar disorder, and healthy subjects. They found that one of the latent dimensions could differentiate between the groups, and further that the distances in latent representations between groups were larger than within groups, validating the usefulness of their approach.

The explanations were clear and at an appropriate level of detail. The figures are great; I especially found the off-policy and on-policy simulations to be illuminating and very nice. Along with the supplementary text, the experiments seemed quite thorough. However, I would have liked to see what happens when the number of latent dimensions does not exactly match the number of parameters in the agent generating the synthetic dataset. For real behavioral data, we do not know the number of relevant dimensions, and it is not clear to what extent this method relies on knowing it.

The separation loss is quite novel. To what extent can it be generalized to separating not only choices, but also options? Could the authors provide an intuition for how good the approximation in equation 10 is?

Very minor: please provide a citation for “... which is consistent with the previous report indicating an oscillatory behavioral characteristic for this group”.
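For reference, a minimal sketch of the kind of two-parameter Q-learning agent the synthetic dataset is described as coming from, assuming the two parameters are a learning rate and a softmax inverse temperature (a standard parameterization; the paper's exact generative setup may differ). The bandit reward probabilities and trial count are hypothetical.

```python
import numpy as np

def simulate_q_agent(alpha, beta, reward_probs=(0.7, 0.3), n_trials=200, seed=0):
    """Simulate a two-armed bandit session for a Q-learning agent with
    learning rate alpha and inverse temperature beta (assumed parameters)."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                                    # action values
    choices, outcomes = [], []
    for _ in range(n_trials):
        p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax policy
        a = rng.choice(2, p=p)                         # sample an action
        r = float(rng.random() < reward_probs[a])      # Bernoulli reward
        q[a] += alpha * (r - q[a])                     # Q-learning update
        choices.append(a)
        outcomes.append(r)
    return np.array(choices), np.array(outcomes)
```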