Evolution and Learning in Neural Networks: The Number and Distribution of Learning Trials Affect the Rate of Evolution

Part of Advances in Neural Information Processing Systems 3 (NIPS 1990)



Ron Keesing, David Stork



Dept. of Electrical Engineering
Stanford University, Stanford, CA 94305


Learning can increase the rate of evolution of a population of biological organisms (the Baldwin effect). Our simulations show that in a population of artificial neural networks solving a pattern recognition problem, no learning or too much learning leads to slow evolution of the genes, whereas an intermediate amount is optimal. Moreover, for a given total number of training presentations, the fastest evolution occurs if different individuals within each generation receive different numbers of presentations, rather than equal numbers. Because genetic algorithms (GAs) help avoid local minima in energy functions, our hybrid learning-GA systems can be applied successfully to complex, high-dimensional pattern recognition problems.
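The setup described above can be sketched in miniature. The following is a hedged illustration, not the authors' actual simulation: the bit-string "genome", the target pattern, the hill-climbing stand-in for learning, and all parameter values (population size, mutation rate, number of trials) are assumptions made for the example. The key structural point it reproduces is the Baldwin effect: selection acts on fitness measured *after* learning, but only the inherited genes are passed to offspring.

```python
import random

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 1, 0, 1]  # hypothetical target weight pattern
POP_SIZE = 20
LEARNING_TRIALS = 5   # an "intermediate" amount, per the paper's finding
GENERATIONS = 30
MUTATION_RATE = 0.05

def fitness(weights):
    """Fraction of weights matching the target pattern."""
    return sum(w == t for w, t in zip(weights, TARGET)) / len(TARGET)

def learn(weights, trials):
    """Toy 'learning': each trial flips one random weight and keeps the
    change only if fitness improves (a stand-in for the paper's training
    presentations; the learned changes are NOT inherited)."""
    w = list(weights)
    for _ in range(trials):
        candidate = list(w)
        i = random.randrange(len(w))
        candidate[i] = 1 - candidate[i]
        if fitness(candidate) > fitness(w):
            w = candidate
    return w

def evolve(trials_per_individual):
    """Run a simple GA and return the mean *genetic* fitness
    (i.e. before learning) of the final population."""
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Baldwin effect: rank individuals by post-learning fitness...
        ranked = sorted(pop,
                        key=lambda g: fitness(learn(g, trials_per_individual)),
                        reverse=True)
        # ...but breed from their unmodified genes.
        parents = ranked[:POP_SIZE // 2]
        pop = []
        while len(pop) < POP_SIZE:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < MUTATION_RATE else g
                     for g in child]                    # point mutation
            pop.append(child)
    return sum(fitness(g) for g in pop) / POP_SIZE

print(evolve(LEARNING_TRIALS))
```

Varying `LEARNING_TRIALS` (or giving each individual a different trial count) in a harness around `evolve` is the kind of experiment the abstract describes, though the real study used neural networks on a pattern recognition task rather than a bit-matching toy problem.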