NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 768
Title: Regression Planning Networks

Reviewer 1


Originality: The work proposes a novel method for regression planning via neural networks.

Quality: The work is of high quality, and the authors have spent time properly analyzing their method.

Clarity: The work is well written and the techniques are well explained.

Significance: This work has the potential to be impactful, as the presented technique works well and targets an important problem.

Reviewer 2


1. What is the role of learning in the tasks studied in this paper? The paper shows results on planning problems. If I understand correctly, these problems can be solved exactly if appropriate state estimators are trained (to check the state of different objects, e.g., whether the cabbage is cooked or raw). It is therefore not clear to me what role learning plays in solving these problems. If the problem could be solved exactly using appropriate classical planners, why should learning be used here at all? The paper argues in its text that symbols need to be hand-defined in classical approaches, but as far as I understand, they have also been largely hand-crafted in the proposed learned approach. It would really help if the authors could:
a) provide intuition as to what they hope the different networks will learn;
b) provide qualitative and quantitative evidence that the networks are learning something non-trivial;
c) provide an explicit experimental comparison to a completely classical planning algorithm on the problem at hand.

2. The current experiments in the paper are geared towards showing that learning inside a planner scaffolding is better than learning without it. However, the chosen experimental setup is in some sense biased towards problems that can be solved within a planner scaffolding: there are very clear symbols that need to be manipulated in order to reach the right solution. Learning-based solutions may be more attractive in settings where symbols are fuzzily defined, hard to define, or hard to extract exactly from raw sensory inputs. It would be great if the authors tried the proposed technique in such experimental settings, e.g., where the data comes from more realistic environments.
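For reference on the classical baseline requested in point 1c), below is a minimal, illustrative sketch of a goal-regression (backward-chaining) planner over hand-defined STRIPS-style operators. This is not the paper's method; the toy cooking domain, predicate names, and operators are hypothetical, and delete effects are omitted for brevity.

from collections import namedtuple

Operator = namedtuple("Operator", ["name", "preconditions", "effects"])

# Hypothetical toy cooking domain (not from the paper); delete effects omitted.
OPERATORS = [
    Operator("wash(cabbage)", frozenset({"have(cabbage)"}), frozenset({"washed(cabbage)"})),
    Operator("put_in_pot(cabbage)", frozenset({"washed(cabbage)"}), frozenset({"in_pot(cabbage)"})),
    Operator("turn_on_stove", frozenset(), frozenset({"stove_on"})),
    Operator("cook(cabbage)", frozenset({"in_pot(cabbage)", "stove_on"}), frozenset({"cooked(cabbage)"})),
]

def regress(goal, op):
    # Regress a goal set through an operator: drop what it achieves, add what it requires.
    if not (op.effects & goal):
        return None  # operator achieves no part of the goal
    return (goal - op.effects) | op.preconditions

def regression_plan(goal, state, depth=10):
    # Depth-limited backward search from the goal toward the current state.
    if goal <= state:
        return []
    if depth == 0:
        return None
    for op in OPERATORS:
        subgoal = regress(goal, op)
        if subgoal is None or subgoal == goal:
            continue
        prefix = regression_plan(subgoal, state, depth - 1)
        if prefix is not None:
            return prefix + [op.name]
    return None

print(regression_plan(frozenset({"cooked(cabbage)"}), frozenset({"have(cabbage)"})))
# -> ['turn_on_stove', 'wash(cabbage)', 'put_in_pot(cabbage)', 'cook(cabbage)']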

Reviewer 3


Originality: The idea of integrating symbolic methods and neural networks to 1) learn better representations or 2) perform better search is not new. In this light, I would like to see a brief discussion of End-to-End Differentiable Proving, Rocktäschel and Riedel, 2017; that work also proposes a "neural" approach to perform backward chaining. (The authors *may* also consider going through the notation used in the paper to improve the clarity of the preliminaries section.) Having said that, I think the work is still interesting, proposing a neat, straightforward algorithm to perform planning.

Quality: The paper is technically sound and is reasonably grounded in previous literature.

Clarity: The paper is well written and easy to understand (see below for minor comments on notation and typos).

Significance: The work addresses an important challenge, that of building better reasoning systems, by integrating neural and symbolic methods to achieve the best of both worlds. Apart from being a reasonable and grounded approach, RPNs also perform well experimentally on unseen goals and so can be of general interest to the community.