Bayesian Models of Inductive Generalization

Part of Advances in Neural Information Processing Systems 15 (NIPS 2002)


Authors

Neville Sanjana, Joshua Tenenbaum

Abstract

We argue that human inductive generalization is best explained in a Bayesian framework, rather than by traditional models based on similarity computations. We go beyond previous work on Bayesian concept learning by introducing an unsupervised method for constructing flexible hypothesis spaces, and we propose a version of the Bayesian Occam's razor that trades off priors and likelihoods to prevent under- or over-generalization in these flexible spaces. We analyze two published data sets on inductive reasoning as well as the results of a new behavioral study that we have carried out.
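To make the trade-off between priors and likelihoods concrete, the sketch below illustrates one generic form of Bayesian concept generalization: a size-principle likelihood (examples assumed sampled uniformly from the true concept) combined with a prior that penalizes hypotheses assembled from more clusters. This is a minimal illustration under assumed toy clusters and an assumed penalty parameter, not the paper's actual hypothesis space or model.

```python
# Minimal sketch of Bayesian generalization: size-principle likelihood plus a
# complexity-penalizing prior. The clusters, penalty, and data are illustrative
# assumptions, not the hypothesis space used in the paper.
from itertools import combinations

# Toy domain: animal categories grouped into hypothetical taxonomic clusters.
clusters = [
    {"horse", "cow"},        # assumed cluster: large farm mammals
    {"dog", "cat"},          # assumed cluster: pets
    {"dolphin", "seal"},     # assumed cluster: marine mammals
]

# Hypotheses: unions of one or two clusters (a simplified "flexible" space).
hypotheses = []
for k in (1, 2):
    for combo in combinations(range(len(clusters)), k):
        h = set().union(*(clusters[i] for i in combo))
        hypotheses.append((h, k))

def prior(num_clusters, penalty=0.5):
    """Prior that favors hypotheses built from fewer clusters (Occam bias)."""
    return penalty ** num_clusters

def likelihood(h, examples):
    """Size principle: each example sampled uniformly from the hypothesis."""
    if not all(x in h for x in examples):
        return 0.0
    return (1.0 / len(h)) ** len(examples)

def generalization(examples, query):
    """Posterior probability that `query` belongs to the same concept."""
    scores = [prior(k) * likelihood(h, examples) for h, k in hypotheses]
    total = sum(scores)
    if total == 0.0:
        return 0.0  # no hypothesis covers the examples
    hit = sum(s for (h, _), s in zip(hypotheses, scores) if query in h)
    return hit / total

# Tight examples ({"horse", "cow"}) support narrow hypotheses and generalize
# weakly beyond them; spread-out examples force broader hypotheses.
print(generalization({"horse", "cow"}, "dog"))
print(generalization({"horse", "dolphin"}, "dog"))
```

Averaging over hypotheses in this way is what produces the Occam-like behavior: the prior discourages overly broad (many-cluster) hypotheses, while the size-principle likelihood discourages overly narrow ones once several examples have been seen.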