Rules and Similarity in Concept Learning

Part of Advances in Neural Information Processing Systems 12 (NIPS 1999)


Authors

Joshua Tenenbaum

Abstract

This paper argues that two apparently distinct modes of generalizing concepts - abstracting rules and computing similarity to exemplars - should both be seen as special cases of a more general Bayesian learning framework. Bayes explains the specific workings of these two modes - which rules are abstracted, how similarity is measured - as well as why generalization should appear rule- or similarity-based in different situations. This analysis also suggests why the rules/similarity distinction, even if not computationally fundamental, may still be useful at the algorithmic level as part of a principled approximation to fully Bayesian learning.
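
The kind of Bayesian generalization the abstract describes can be illustrated with a minimal sketch: hypotheses consistent with the observed examples are scored by a size-based likelihood, and generalization to a new item is computed by averaging over the posterior. The hypothesis space, priors, and number-concept examples below are illustrative assumptions for this sketch, not the paper's exact stimuli or implementation.

```python
# Minimal sketch of Bayesian concept learning over a toy hypothesis space.
# The hypotheses, prior, and example data are illustrative assumptions.

def bayesian_generalization(examples, candidates, hypotheses, prior):
    """Return p(y in C | examples) for each candidate y, by averaging
    over hypotheses weighted by their posterior probability."""
    n = len(examples)
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in examples):
            # Size-based likelihood: (1 / |h|)^n, so smaller, more
            # "rule-like" hypotheses dominate as more examples arrive.
            likelihood = (1.0 / len(extension)) ** n
        else:
            likelihood = 0.0
        scores[name] = likelihood * prior[name]
    z = sum(scores.values())
    posterior = {h: s / z for h, s in scores.items()}

    # Hypothesis averaging: p(y in C | X) = sum_h p(y in C | h) p(h | X).
    return {
        y: sum(p for h, p in posterior.items() if y in hypotheses[h])
        for y in candidates
    }


# Toy number-concept example (hypothetical): several examples such as
# {16, 8, 2, 64} concentrate the posterior on a single rule-like hypothesis,
# while fewer examples spread it out, yielding graded, similarity-like
# generalization.
hypotheses = {
    "powers_of_two": {1, 2, 4, 8, 16, 32, 64},
    "even_numbers": set(range(2, 101, 2)),
    "numbers_10_to_20": set(range(10, 21)),
}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}

print(bayesian_generalization([16, 8, 2, 64], [4, 10, 87], hypotheses, prior))
```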