Bounds on the complexity of recurrent neural network implementations of finite state machines

Part of Advances in Neural Information Processing Systems 6 (NIPS 1993)


Authors

Bill Horne, Don Hush

Abstract

In this paper the efficiency of recurrent neural network implementations of m-state finite state machines will be explored. Specifically, it will be shown that the node complexity for the unrestricted case can be bounded above by O(√m). It will also be shown that the node complexity is O(√(m log m)) when the weights and thresholds are restricted to the set {-1, +1}, and O(m) when the fan-in is restricted to two. Matching lower bounds will be provided for each of these upper bounds assuming that the state of the FSM can be encoded in a subset of the nodes of size ⌈log m⌉.
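As a rough illustration of how the three stated bounds compare, the sketch below evaluates them (with constant factors dropped, which the O(·) notation leaves unspecified) for a few machine sizes m. The function name and the choice of log base are assumptions made for this example; this is not the paper's construction, only a numerical comparison of the asymptotic growth rates quoted in the abstract.

```python
import math

def node_bound_estimates(m):
    """Illustrative node-count upper bounds (constant factors omitted) for an
    m-state FSM implemented by a recurrent network, one per case in the abstract."""
    return {
        "unrestricted":        math.sqrt(m),                 # O(sqrt(m))
        "weights in {-1,+1}":  math.sqrt(m * math.log2(m)),  # O(sqrt(m log m)); log base assumed
        "fan-in two":          float(m),                     # O(m)
    }

if __name__ == "__main__":
    for m in (16, 256, 4096):
        print(m, node_bound_estimates(m))
```

For large m the unrestricted bound grows most slowly, the restricted-weight bound picks up only a √(log m) factor over it, and the fan-in-two bound grows linearly, matching the ordering one would expect as the architectural restrictions tighten.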