Categorization Under Complexity: A Unified MDL Account of Human Learning of Regular and Irregular Categories

Part of Advances in Neural Information Processing Systems 15 (NIPS 2002)


Authors

David Fass, Jacob Feldman

Abstract

We present an account of human concept learning (that is, learning of categories from examples) based on the principle of minimum description length (MDL). In support of this theory, we tested a wide range of two-dimensional concept types, including both regular (simple) and highly irregular (complex) structures, and found the MDL theory to give a good account of subjects' performance. This suggests that the intrinsic complexity of a concept (that is, its description length) systematically influences its learnability.
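For concreteness, the generic two-part MDL criterion underlying such an account can be stated as follows; this is the standard formulation assumed here for illustration, and the specific codelength scheme used in the experiments may differ:

\[
  H_{\mathrm{MDL}} \;=\; \arg\min_{H}\; \bigl[\, L(H) + L(D \mid H) \,\bigr],
\]

where \(L(H)\) is the number of bits needed to describe a candidate concept \(H\) and \(L(D \mid H)\) is the number of bits needed to describe the observed examples \(D\) given \(H\). On this view, concepts with shorter descriptions (smaller \(L(H)\)) should be easier to learn, which is the sense in which intrinsic complexity is predicted to govern learnability.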

1 The Structure of Categories

A number of different principles have been advanced to explain the manner in which humans learn to categorize objects. It has been variously suggested that the underlying principle might be the similarity structure of objects [1], the manipulability of decision boundaries [2], or Bayesian inference [3][4]. While many of these theories are mathematically well-grounded and have been successful in explaining a range of experimental findings, they have commonly been tested only on a narrow collection of concept types similar to the simple unimodal categories of Figure 1(a-e).