NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, Vancouver Convention Center
Paper ID: 5956
Title: Learning Stable Deep Dynamics Models

Reviewer 1


The authors present a highly original approach for using layered neural networks to learn the dynamics of a continuous-time dynamical system while ensuring that the system is stable. The authors leverage recently introduced "input convex neural networks" (ICNNs) and automatic differentiation to simultaneously learn a Lyapunov function that ensures stability and train a neural network to approximate the dynamics function. The work is a creative use of ICNNs and automatic differentiation applied to a broad, general problem of great significance (learning stable dynamical systems). The approach appears to be a significant improvement (although see below) over previous approaches, which only ensure stability over the training data set. The method itself is clearly presented, with all of the necessary background provided to understand it, and all of the presented technical material appears correct. The principal weakness of the paper is the clarity and completeness of the empirical results. The originality of the approach compensates for this weakness; however, the paper would undoubtedly have been scored even more favorably had these results been clearer and more complete. See below for critiques of the clarity of the empirical results.
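For readers unfamiliar with ICNNs, here is a minimal sketch of the kind of input-convex Lyapunov candidate being described. This is an illustration in PyTorch, not the authors' implementation; the layer sizes, the softplus activation, and the relu(g(x) - g(0)) + eps*||x||^2 positive-definite shift are assumptions made for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Input-convex network: g(x) is convex in x because the weights on the
    z-path are clamped non-negative and softplus is convex and non-decreasing."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.u0 = nn.Linear(dim, hidden)   # direct x -> z connections
        self.u1 = nn.Linear(dim, hidden)
        self.u2 = nn.Linear(dim, 1)
        self.w1 = nn.Linear(hidden, hidden, bias=False)  # z -> z path, kept >= 0
        self.w2 = nn.Linear(hidden, 1, bias=False)

    def forward(self, x):
        z = F.softplus(self.u0(x))
        z = F.softplus(self.u1(x) + F.linear(z, self.w1.weight.clamp(min=0)))
        return self.u2(x) + F.linear(z, self.w2.weight.clamp(min=0))


class LyapunovCandidate(nn.Module):
    """V(x) = relu(g(x) - g(0)) + eps * ||x||^2, so that V(0) = 0 and
    V(x) > 0 for x != 0 (a smoothed relu could be used for differentiability)."""
    def __init__(self, dim, eps=1e-3):
        super().__init__()
        self.g = ICNN(dim)
        self.eps = eps

    def forward(self, x):
        shift = self.g(torch.zeros_like(x))
        return F.relu(self.g(x) - shift) + self.eps * x.pow(2).sum(-1, keepdim=True)
```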

Reviewer 2


Originality: The paper describes a formalism for learning Lyapunov-stable dynamics functions. This is, as acknowledged by the authors, not the first attempt to solve such a problem. However, previous methods have focused on learning dynamics models that are linear in an embedding space, with explicit stability guarantees imposed as a loss; alternating-optimization methods that use projection constraints have also been used. This paper instead proposes to learn functions by restricting the function class itself to Lyapunov-stable functions via ICNNs -- which is indeed novel and exciting. Overall this would qualify as a novel contribution, if it can be amply supported with experimental evaluation.

Quality: The paper presents the method, formalism, and analysis in a convincing manner with sufficient detail. The topic and the contribution are both interesting and non-trivial, even in hindsight. However, the results in the experiments section are left a bit wanting. The current results involve only toy domains, which are not sufficient to convince a reader -- either an expert or a practitioner -- to adopt this method.

Clarity: The paper is written clearly, jumps to the problem statement without much ado, and does a good job explaining the background and challenges of dynamics learning with stability guarantees.

Significance: The results are interesting, in particular because of their theoretical value and potential applicability. However, the experimental validation is rather insufficient, with no baselines.
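To make the "restricting the function class" idea concrete, the following is a minimal sketch of the kind of construction described -- my reading, not the authors' code; the function name, the decay rate alpha, and the small numerical constant are assumptions -- in which a nominal prediction f_hat(x) is corrected so that a learned Lyapunov function V decreases along the model's trajectories:

```python
import torch

def stable_dynamics(f_hat, V, x, alpha=0.1):
    """Correct a nominal dynamics prediction f_hat(x) so that the resulting f
    satisfies grad V(x)^T f(x) <= -alpha * V(x), i.e. V decreases along f."""
    x = x.detach().requires_grad_(True)          # treat x as a leaf for autograd
    v = V(x)                                     # shape (batch, 1)
    # create_graph=True keeps the correction differentiable during training
    gradV, = torch.autograd.grad(v.sum(), x, create_graph=True)
    fx = f_hat(x)
    violation = (gradV * fx).sum(-1, keepdim=True) + alpha * v
    scale = torch.relu(violation) / (gradV.pow(2).sum(-1, keepdim=True) + 1e-8)
    return fx - scale * gradV                    # only corrects when V would increase
```

The appeal, as the review notes, is that stability then holds by construction of the function class rather than through a penalty term in the loss.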

Reviewer 3


The paper presents a method for constructing neural network architectures that have built-in theoretical guarantees of Lyapunov stability -- meaning that the equilibrium is at the origin and, for any initial condition, the network produces trajectories that converge to that equilibrium. The method is evaluated on the N-link pendulum and on video generation problems.

The method's significance comes from two reasons. First, Lyapunov stability is very difficult to prove for such systems with classical methods. Second, deep learning methods are largely empirical, without theoretical guarantees, limiting their applicability to life-critical systems. This paper presents a method for learning autonomous dynamics that is guaranteed to be Lyapunov stable, without requiring the classical toolset. This methodology is original and potentially very useful for many applications beyond classic controls and video: for example, protein folding, robotics, weather prediction, material design, etc.

The quality of the paper is good overall, although it varies. The theoretical portion is solid. The contribution is clear, well-motivated, and well-structured. The empirical validation is somewhat lacking in quality. While the authors are commended for exploring two very different domains (classic controls and video generation), the empirical validation is missing some key elements. For example:
- It is not clear how the method would perform on a system without an equilibrium, or, for that matter, with the link starting in the upright position.
- How do the learned and ground-truth models perform in the presence of noise?
- Details about the training are missing:
  1. the methodology for gathering the training set;
  2. why the convex network has 60 layers (while in the previous example it contained 100 neurons per layer);
  3. system information in both examples (equations for the pendulum with the damping factor, and a reference to the video dataset for the video prediction task);
  4. Figure 5: Over how many initial conditions was the figure compiled? Please show error bars.

In addition, the authors should include and discuss the following related work:
- Learning Stabilizable Dynamical Systems via Control Contraction Metrics, Singh et al., WAFR 2018
- Continuous Action Reinforcement Learning for Control-Affine Systems with Unknown Dynamics, Faust et al., Acta Automatica Sinica, 2014

The presentation of the paper is excellent. The authors make a theoretical paper very easy to read, and logically introduce new notation and the elements of the method one step at a time. Some minor comments:
- Lines 59-60: stating that those conditions are sufficient but not necessary would be clearer.
- Line 113: Which differentiable tools? Please cite.
- Lines 132-133: Please either prove or remove the claims.
- Line 145: "is V is" -> "if V is"
- Line 167: "The fact the V" -> "The fact that V"
- Line 192: angular velocity \theta -> angular velocity \dot \theta
- Please go through the math and use standard notation for vectors (vs. scalars).

Overall: a potentially strong theoretical result, but lacking supporting evidence as to how well it really works in practice.

--------------------------------------------------

Update after author response: Thank you for the response and for clarifying the points. The paper presents a strong theoretical contribution, and I leave the score unchanged.
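For reference, the convergence property this review describes corresponds to the standard Lyapunov conditions (standard control-theory facts, not text quoted from the paper): a continuously differentiable V satisfying

V(0) = 0, \qquad V(x) > 0 \ \ \forall x \neq 0, \qquad \nabla V(x)^\top f(x) \le -\alpha V(x) \ \ \forall x

gives V(x(t)) \le e^{-\alpha t} V(x(0)) along any trajectory of \dot{x} = f(x), and, together with radial unboundedness of V, convergence of every trajectory to the origin.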

Reviewer 4


Originality: While learning stable dynamical systems has been studied in previous literature in the context of neural networks, training an additional Lyapunov function to guarantee the stability of an architecture is a novel contribution. Some recent related work that could be included is Neural ODEs (Chen et al., NeurIPS 2018) and work on stability with recurrent neural networks, e.g., Stable Recurrent Models (Miller & Hardt, ICLR 2019).

Quality: The claims are valid and well-supported by theoretical results and empirical analysis. The authors show that their method provably leads to models that are constrained to be stable. One area that could be more thorough is the empirical evaluation, which is lacking in baselines. The authors only evaluate against a naive neural network. There are potentially several more intelligent baselines, such as penalizing the Jacobian of the network, or even simply clipping the weights. The authors also refer to prior work on penalized losses for training stable networks, but do not evaluate any of the aforementioned methods.

Clarity: The paper is well-written and well-organized.

Significance: As mentioned by the authors, learning good, stable dynamics model architectures has many downstream applications in reinforcement learning and sequence modeling tasks.
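As an illustration of the Jacobian-penalty baseline suggested above (a sketch under assumptions -- the Hutchinson-style estimator, the probe count, and the function names are choices made for illustration; nothing here was evaluated in the paper):

```python
import torch

def jacobian_penalty(f, x, n_probes=1):
    """Regularizer that discourages large dynamics Jacobians df/dx by estimating
    the squared Frobenius norm of the Jacobian with random vector-Jacobian
    products, so the full Jacobian is never materialized."""
    x = x.detach().requires_grad_(True)
    fx = f(x)
    penalty = x.new_zeros(())
    for _ in range(n_probes):
        v = torch.randn_like(fx)  # random probe; E[||J^T v||^2] = ||J||_F^2
        jv, = torch.autograd.grad(fx, x, grad_outputs=v, create_graph=True)
        penalty = penalty + jv.pow(2).sum(-1).mean()
    return penalty / n_probes

# Hypothetical usage inside a training step:
#   loss = F.mse_loss(f(x), x_next) + lam * jacobian_penalty(f, x)
```

A weight-clipping baseline would analogously clamp each layer's weights after every optimizer step.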