Using Prior Knowledge in a NNPDA to Learn Context-Free Languages

Part of Advances in Neural Information Processing Systems 5 (NIPS 1992)


Authors

Sreerupa Das, C. Giles, Guo-Zheng Sun

Abstract

Although considerable interest has been shown in language inference and automata induction using recurrent neural networks, the success of these models has mostly been limited to regular languages. We have previously demonstrated that the Neural Network Pushdown Automaton (NNPDA) model is capable of learning deterministic context-free languages (e.g., a^n b^n and parenthesis languages) from examples. However, the learning task is computationally intensive. In this paper we discuss some ways in which a priori knowledge about the task and data could be used for efficient learning. We also observe that such knowledge is often an experimental prerequisite for learning nontrivial languages (e.g., a^n b^n c b^m a^m).
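As an illustration (not taken from the paper), here is a minimal sketch of the kind of training data involved in learning a context-free language such as a^n b^n from examples: a membership test plus labeled positive and negative strings. The function name `is_anbn` and the specific example strings are hypothetical choices for this sketch.

```python
def is_anbn(s: str) -> bool:
    """Check membership in the context-free language {a^n b^n : n >= 1}."""
    n = len(s) // 2
    return len(s) % 2 == 0 and n >= 1 and s == "a" * n + "b" * n

# Positive examples: ab, aabb, aaabbb, ...
positive = ["a" * n + "b" * n for n in range(1, 5)]
# Negative examples: strings over {a, b} that break the a^n b^n structure.
negative = ["abab", "aab", "ba", "aaabb"]

assert all(is_anbn(s) for s in positive)
assert not any(is_anbn(s) for s in negative)
```

Recognizing this language requires counting, which is why a recurrent network needs an external stack (as in the NNPDA) rather than finite-state memory alone.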