Analytical Study of the Interplay between Architecture and Predictability

Part of Advances in Neural Information Processing Systems 10 (NIPS 1997)

Authors

Avner Priel, Ido Kanter, David Kessler

Abstract

We study model feed forward networks as time series predictors in the stationary limit. The focus is on complex, yet non-chaotic, behavior. The main question we address is whether the asymptotic behavior is governed by the architecture, regardless of the details of the weights. We find hierarchies among classes of architectures with respect to the attractor dimension of the long term sequence they are capable of generating; a larger number of hidden units can generate higher dimensional attractors. In the case of a perceptron, we develop the stationary solution for general weights, and show that the flow is typically one dimensional. The relaxation time from an arbitrary initial condition to the stationary solution is found to scale linearly with the size of the network. In multilayer networks, the number of hidden units gives bounds on the number and dimension of the possible attractors. We conclude that long term prediction (in the non-chaotic regime) with such models is governed by attractor dynamics related to the architecture.
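
As a concrete illustration of this setting, the following minimal Python sketch (our own illustration, not code from the paper) iterates a perceptron predictor in closed loop: each output is fed back as the newest input of an N-unit delay window. The window size, the tanh transfer function, the weight scale, and the transient length are assumptions chosen for illustration; the point is only that, with arbitrary weights, the generated sequence settles onto a low-dimensional stationary attractor.

```python
import numpy as np

# Sketch: perceptron with an N-unit delay window, run as a sequence generator.
# Weights and initial condition are arbitrary, reflecting the claim that the
# asymptotic flow depends on the architecture rather than the weight values.
N = 50
rng = np.random.default_rng(0)
w = rng.normal(size=N) / np.sqrt(N)      # arbitrary fixed weights (illustrative scale)
window = rng.normal(size=N)              # arbitrary initial condition

def step(window):
    """One closed-loop update: predict, then shift the prediction into the window."""
    s_next = np.tanh(w @ window)         # sigmoidal output, non-chaotic regime
    return np.concatenate(([s_next], window[:-1]))

# Discard a transient (expected to grow roughly linearly with N),
# then record the stationary sequence the network generates.
for _ in range(50 * N):
    window = step(window)

sequence = []
for _ in range(500):
    window = step(window)
    sequence.append(window[0])

# For a perceptron the long-term sequence is typically (quasi-)periodic,
# i.e. the flow is effectively one dimensional.
print(np.round(sequence[:20], 3))
```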

Neural networks provide an important tool as model-free estimators for the solution of problems when the real model is unknown, or only weakly known. In the last decade there has been growing interest in the application of such tools to time series prediction (see Weigend and Gershenfeld, 1994). In this paper we analyse a typical class of architectures used in this field, i.e. a feed forward network governed by the following dynamic rule: