NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 2485
Title: Double Quantization for Communication-Efficient Distributed Optimization

The paper combines model and gradient compression, an interesting and relevant topic, with asynchronous SGD updates and momentum. While the reviewers uniformly liked the main contributions, they also agreed that the current literature overview is insufficient and that the scaling experiments are not convincing: the time savings when going from 4 to 8 nodes are modest, and results were presented only for small networks. This was partially addressed in the rebuttal. We strongly encourage the authors to improve the related-work discussion and to address the other issues raised in the reviews and the rebuttal phase. Additional relevant work includes, for example, (appearing simultaneously), and , and the line of work around and the references therein.