Best-First Model Merging for Dynamic Learning and Recognition

Part of Advances in Neural Information Processing Systems 4 (NIPS 1991)


Authors

Stephen Omohundro

Abstract

"Best-first model merging" is a general technique for dynamically choosing the structure of a neural or related architecture while avoiding overfitting. It is applicable to both learning and recognition tasks and often generalizes significantly better than fixed structures. We demonstrate the approach applied to the tasks of choosing radial basis functions for function learning, choosing local affine models for curve and constraint surface modelling, and choosing the structure of a balltree or bumptree to maximize efficiency of access.

1 TOWARD MORE COGNITIVE LEARNING

Standard backpropagation neural networks learn in a way which appears to be quite different from human learning. Viewed as a cognitive system, a standard network always maintains a complete model of its domain. This model is mostly wrong initially, but gets gradually better as data appears. The net deals with all data in much the same way and has no representation for the strength of evidence behind a given conclusion. The network architecture is usually chosen before any data is seen, and the processing is much the same in the early phases of learning as in the late phases.

Human and animal learning appears to proceed in quite a different manner. When an organism has not had many experiences in a domain of importance to it, each individual experience is critical. Rather than use such an experience to slightly modify the parameters of a global model, a better strategy is to remember the experience in detail. Early in learning, an organism doesn't know which features of an experience are important unless it has a strong