Maximum Likelihood Competitive Learning

Part of Advances in Neural Information Processing Systems 2 (NIPS 1989)


Authors

Steven Nowlan

Abstract

One popular class of unsupervised algorithms is competitive algorithms. In the traditional view of competition, only one competitor, the winner, adapts for any given case. I propose to view competitive adaptation as attempting to fit a blend of simple probability generators (such as gaussians) to a set of data points. The maximum likelihood fit of a model of this type suggests a "softer" form of competition, in which all competitors adapt in proportion to the relative probability that the input came from each competitor. I investigate one application of the soft competitive model, placement of radial basis function centers for function interpolation, and show that the soft model can give better performance with little additional computational cost.
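The abstract's core idea, every unit adapting in proportion to the relative probability that it generated the input, can be sketched as a responsibility-weighted update of Gaussian centers. The following Python snippet is only an illustrative reading of that description, not the paper's implementation; the function name `soft_competitive_update` and the `sigma` and `lr` parameters are assumptions chosen for the example.

```python
import numpy as np

def soft_competitive_update(X, centers, sigma=1.0, lr=0.1):
    """One pass of 'soft' competition: every center adapts in proportion to
    the relative probability (under an isotropic Gaussian) that it generated
    each input, rather than only the winning center adapting."""
    for x in X:
        # Unnormalized Gaussian likelihoods of x under each center
        d2 = np.sum((centers - x) ** 2, axis=1)
        p = np.exp(-d2 / (2.0 * sigma ** 2))
        r = p / p.sum()                      # responsibilities (soft competition)
        # Each center moves toward x, weighted by its responsibility
        centers += lr * r[:, None] * (x - centers)
    return centers

# Toy usage: place 3 centers on 2-D data drawn from three clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2))
               for m in ([0, 0], [3, 0], [0, 3])])
centers = rng.normal(0.0, 1.0, size=(3, 2))
for _ in range(20):
    centers = soft_competitive_update(rng.permutation(X), centers)
print(centers)
```

With hard (winner-take-all) competition only the nearest center would move; here distant centers still receive a small share of each update, which is the "softer" behavior the abstract attributes to the maximum likelihood view.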