NeurIPS 2020

The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning

Meta Review

This paper proposes a method for identifying model-based behavior in RL agents (the “LoCA regret”), which can be used without knowing anything about the internal structure of the agent itself. The method is demonstrated to correctly distinguish between well-known classical model-free and model-based agents. It is also used to analyze MuZero, revealing that although MuZero is in principle a model-based algorithm, it does not make optimal use of its model.

The reviewers agreed that the LoCA regret is a useful metric, and felt that careful evaluation of agents through metrics designed like this one is an important area of research in RL. I agree, and found particularly interesting the demonstration that the mere fact that an algorithm makes use of a model does not guarantee that it has the properties we typically associate with model-based algorithms. While there was some debate during the discussion period about some of the choices in the calculation of the LoCA regret (e.g., the top-terminal fraction), the reviewers agreed that the metric as presented is worthy of publication. Indeed, it was pointed out during the discussion that the amount of debate and the number of follow-up questions the paper generated are indicative of the interest it will draw if accepted. I therefore believe this work will be quite impactful and recommend acceptance.

However, there was also a sense during the discussion that some of the experiments (specifically, those in Section 5) were unclear and potentially even somewhat misleading. For example, when asked to clarify what the variable ‘d’ corresponds to in the provided code, the authors replied that it corresponds to ‘num_unroll_steps’. While the paper states that ‘d’ controls the depth of MCTS, in the code ‘num_unroll_steps’ is actually a parameter governing how the model is trained.
It sounds to me like the parameter the authors meant to vary in order to change the depth of search would be `num_simulations`. Similarly, the paper implies that Table 2 presents results with MuZero, but upon clarification, it seems to be a different algorithm (though it is unclear which). Moreover, as R3 points out in their review, it is also not clear exactly what the 2-step and 5-step models look like or how they were trained. In general, a paper needs to be written so that all of these implementation details and design choices are clear and the results could be reproduced, and I do not feel Section 5 satisfies these criteria. I do not believe the issues with Section 5 detract so much from the paper as to warrant rejection, but as it stands, Section 5 is very unclear and, as a result, not particularly informative (aside from the main result that MuZero does not achieve zero LoCA regret). I therefore request that the authors rewrite Section 5 to be much clearer for the camera-ready version.
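To make the distinction between the two parameters concrete, here is a minimal sketch. The field names `num_simulations` and `num_unroll_steps` follow the commonly circulated MuZero pseudocode; the default values and comments reflect my own reading of their roles, not the authors' definitions:

```python
from dataclasses import dataclass


@dataclass
class MuZeroConfig:
    # Search-time parameter: number of MCTS simulations run per move.
    # This governs the search budget, and hence the effective depth of
    # planning at decision time -- the quantity 'd' presumably refers to.
    num_simulations: int = 50

    # Training-time parameter: number of steps the learned model is
    # unrolled when computing the training loss. This shapes how the
    # model is *trained*, not how deeply the agent searches.
    num_unroll_steps: int = 5


config = MuZeroConfig()
print(config.num_simulations, config.num_unroll_steps)
```

Varying `num_unroll_steps` while reporting it as search depth would conflate these two roles, which is the source of the confusion described above.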