Duplex Sequence-to-Sequence Learning for Reversible Machine Translation

Part of Advances in Neural Information Processing Systems 34 pre-proceedings (NeurIPS 2021)

Authors

Zaixiang Zheng, Hao Zhou, Shujian Huang, Jiajun Chen, Jingjing Xu, Lei Li

Abstract

Sequence-to-sequence learning naturally has two directions: how can we effectively utilize supervision signals from both of them? Existing approaches either require two separate models or a single multitask-trained model with inferior performance. In this paper, we propose REDER (Reversible Duplex Transformer), a parameter-efficient model, and apply it to machine translation. Either end of REDER can simultaneously take a distinct language as input and produce it as output. REDER thus enables reversible machine translation by simply flipping the input and output ends. Experiments verify that REDER achieves the first success of reversible machine translation, which helps it outperform multitask-trained baselines by about 1.3 BLEU while obtaining a 5.5× speedup during inference.
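
To make the "flipping the input and output ends" idea concrete, here is a minimal sketch of a RevNet-style additive-coupling block in PyTorch. This is only an illustration of the invertibility property that reversible translation relies on, not REDER's actual layers (the paper builds on reversible Transformer sub-layers); the names `ReversibleBlock`, `f`, and `g` are hypothetical.

```python
import torch
import torch.nn as nn


class ReversibleBlock(nn.Module):
    """Additive-coupling reversible block (RevNet-style), for illustration only.

    Because the inputs can be recovered exactly from the outputs, a stack of
    such blocks can be run in either direction, which is the property that
    makes "flipping" the two ends of a reversible network possible.
    """

    def __init__(self, dim):
        super().__init__()
        # f and g stand in for sub-layers such as attention / feed-forward.
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.g = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x1, x2):
        # y1 = x1 + f(x2);  y2 = x2 + g(y1)
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        # Exact inversion: x2 = y2 - g(y1);  x1 = y1 - f(x2)
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2


if __name__ == "__main__":
    torch.manual_seed(0)
    block = ReversibleBlock(dim=8).eval()
    x1, x2 = torch.randn(2, 8), torch.randn(2, 8)
    with torch.no_grad():
        y1, y2 = block(x1, x2)          # run the block "forward"
        r1, r2 = block.inverse(y1, y2)  # recover the inputs "backward"
    print(torch.allclose(x1, r1, atol=1e-6), torch.allclose(x2, r2, atol=1e-6))
```

In a duplex model built from such blocks, the same set of parameters can map one end to the other in either direction, which is why no second model or separate decoder is needed for the reverse translation direction.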