Green's Function Method for Fast On-Line Learning Algorithm of Recurrent Neural Networks

Part of Advances in Neural Information Processing Systems 4 (NIPS 1991)


Authors

Guo-Zheng Sun, Hsing-Hen Chen, Yee-Chun Lee

Abstract

The two well-known learning algorithms for recurrent neural networks are back-propagation (Rumelhart et al.; Werbos) and forward propagation (Williams and Zipser). The main drawback of back-propagation is its off-line backward pass in time for error accumulation, which violates the on-line requirement of many practical applications. Although the forward propagation algorithm can be used on-line, its drawback is the heavy computational load required to update the high-dimensional sensitivity matrix (O(N^4) operations per time step, where N is the number of neurons). Developing a fast forward algorithm is therefore a challenging task. In this paper we propose a forward learning algorithm that is one order faster (only O(N^3) operations per time step) than the sensitivity-matrix algorithm. The basic idea is that, instead of integrating the high-dimensional sensitivity dynamic equation, we solve forward in time for its Green's function to avoid redundant computations, and then update the weights whenever the error is to be corrected.

A numerical example of classifying state trajectories with a recurrent network is presented; it substantiates the speed advantage of the proposed algorithm over the Williams and Zipser algorithm.
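To make the complexity claim concrete, the following minimal sketch contrasts the per-step bookkeeping of the two approaches for a simple tanh recurrent map with N units and N*N weights. This is not the authors' full algorithm; it only illustrates why propagating the N-by-N Green's function (the state-transition matrix of the linearized dynamics) costs O(N^3) per step, while the Williams-Zipser sensitivity matrix update costs O(N^4). The function names and the weight layout are illustrative assumptions.

```python
# Sketch only: contrasts per-step cost of RTRL vs. Green's-function propagation.
# Assumes a simple state map x' = tanh(W x); names here are illustrative.
import numpy as np

N = 8                                    # number of recurrent units
rng = np.random.default_rng(0)

def jacobian(x, W):
    """Jacobian J = dF/dx of the map F(x) = tanh(W x)."""
    s = 1.0 - np.tanh(W @ x) ** 2        # tanh' at the pre-activations
    return s[:, None] * W                # J[i, j] = s_i * W[i, j], shape (N, N)

def rtrl_step(P, x, W):
    """Williams-Zipser sensitivity update P' = J P + dF/dw.
    P has shape (N, N*N), so the product J @ P costs O(N^4)."""
    s = 1.0 - np.tanh(W @ x) ** 2
    dF_dw = np.zeros((N, N * N))
    for i in range(N):                   # dF_i/dw_ij = s_i * x_j
        dF_dw[i, i * N:(i + 1) * N] = s[i] * x
    return jacobian(x, W) @ P + dF_dw    # O(N^4) per time step

def greens_step(G, x, W):
    """Green's-function propagation G(t+1, t0) = J(t) G(t, t0).
    G is only (N, N), so the update costs O(N^3)."""
    return jacobian(x, W) @ G            # O(N^3) per time step

# One forward step of each bookkeeping scheme:
W = rng.standard_normal((N, N)) / np.sqrt(N)
x = rng.standard_normal(N)
P = np.zeros((N, N * N))                 # RTRL sensitivity matrix
G = np.eye(N)                            # Green's function at t0

P = rtrl_step(P, x, W)
G = greens_step(G, x, W)
```

In this toy form, the gradient of an error at time T can still be recovered from the Green's function via the variation-of-constants sum over past driving terms, which is the redundancy the paper's method exploits; the sketch above shows only the difference in the size of the object being integrated forward.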