NeurIPS 2020

Zero-Resource Knowledge-Grounded Dialogue Generation


Meta Review

This paper proposes a novel graphical-model-based approach to zero-resource generation of knowledge-grounded dialogues. The approach introduces two latent variables, one representing the knowledge on which to ground the response and one representing the degree of grounding, and is trained with variational inference. The proposed approach achieves results comparable to state-of-the-art methods trained on expensive-to-collect annotated data, without requiring any such data, and generalizes to unseen topics better than the supervised approaches. While the rebuttal answered most of the reviewers' questions, a few remaining points should be clarified in the next revision.
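
For intuition, training with variational inference over two latent variables typically amounts to maximizing an evidence lower bound. The sketch below is illustrative only; the symbols (c for the dialogue context, r for the response, z_k for the grounding knowledge, z_g for the degree of grounding) are assumptions for exposition, not notation from the paper:

\log p_\theta(r \mid c) \;\ge\; \mathbb{E}_{q_\phi(z_k, z_g \mid c, r)}\big[\log p_\theta(r \mid c, z_k, z_g)\big] \;-\; \mathrm{KL}\big(q_\phi(z_k, z_g \mid c, r) \,\|\, p_\theta(z_k, z_g \mid c)\big)

Here q_\phi is an approximate posterior over the latents and p_\theta(z_k, z_g \mid c) is the prior, which is what would allow the latents to be learned without annotated grounding data.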