NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 1823
Title: Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control


		
The paper proposes Variance Based Control (VBC) of communication in cooperative multi-agent RL settings, under which agents transmit only high-variance (i.e., informative) messages; a minimal illustrative sketch of this gating idea is given after the review summary below. As noted in the Abstract, VBC achieves a 2x-10x reduction in communication overhead compared to state-of-the-art MARL methods. The paper also gives a proof of convergence in a tabular setting.

In the initial reviews, R4 gave the strongest support with a score of 9, while R1 and R2 gave positive overall scores that were only marginally above the acceptance threshold (6). After the author feedback, there were minimal updates to the original reviews; e.g., R2 said: "After going over the author response I appreciate the extra analysis put into comparing the method to MADDPG to make sure it is state of the art. It is good to compare these methods across previous benchmarks to show improvement. While the additional hyperparameter analysis is helpful it is a bit obvious of what is normally done. Some discussion on the effects of specific settings might shed more light on how the method works."

There was not much discussion among the reviewers; each seemed satisfied with their individual scoring based on the positive or negative points raised in their reviews. I think this explains the discrepancy between R4's score and those of R1 and R2. R4's review focused on the major issues of originality, quality, clarity, and significance, and I agree with R4 that the paper is quite strong on these aspects. R1 and R2, on the other hand, gave additional weight to lower-level issues and questions, for example:
-- "Lack of explainability of exchanging a bunch of floats."
-- "Confidence level is naive"
-- "Unsure of significance"
-- "concerned about narrow application"
-- "how to set hyper-params"
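
For context, the following is a minimal, illustrative sketch of the variance-based message gating summarized at the top of this meta-review. It is not the authors' implementation; the function name, threshold value, and message shapes are assumptions made purely for illustration.

    # Illustrative sketch only (not the paper's code): gate each agent's outgoing
    # message on its variance, so only high-variance (informative) messages are sent.
    import numpy as np

    def gate_messages(messages, var_threshold=0.1):
        """messages: array of shape (n_agents, msg_dim).
        Returns the gated messages and a boolean mask of which agents communicate."""
        variances = messages.var(axis=1)                     # per-agent message variance
        send_mask = variances > var_threshold                # send only if variance is high
        gated = np.where(send_mask[:, None], messages, 0.0)  # suppressed messages are zeroed
        return gated, send_mask

    # Example: 4 agents with 8-dimensional messages; low-variance rows are suppressed.
    msgs = np.random.randn(4, 8)
    gated, mask = gate_messages(msgs)
    print(mask)  # which agents actually broadcast

The 2x-10x overhead reduction reported by the authors comes from suppressing exactly these low-variance messages, which carry little information for the receiving agents' action selection.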