A Neural Probabilistic Language Model

Part of Advances in Neural Information Processing Systems 13 (NIPS 2000)


Authors

Yoshua Bengio, Réjean Ducharme, Pascal Vincent

Abstract

A goal of statistical language modeling is to learn the joint probability function of sequences of words. This is intrinsically difficult because of the curse of dimensionality: we propose to fight it with its own weapons. In the proposed approach one learns simultaneously (1) a distributed representation for each word (i.e. a similarity between words) along with (2) the probability function for word sequences, expressed with these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar to words forming an already seen sentence. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach very significantly improves on a state-of-the-art trigram model.
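To make the two jointly learned components concrete, the following is a minimal sketch (not the paper's implementation) of a forward pass through such a model: a shared embedding matrix maps each context word to a feature vector, the vectors are concatenated and passed through a tanh hidden layer, and a softmax output gives a probability over the whole vocabulary for the next word. All sizes and weight values here are illustrative assumptions; the parameter names (C, H, U, b, d) only loosely follow the paper's notation, direct input-to-output connections are omitted, and in practice all parameters would be trained by maximizing the log-likelihood of a corpus.

```python
# Hypothetical sizes: vocabulary V, embedding dimension m,
# context of n-1 previous words, hidden layer of size h.
import numpy as np

rng = np.random.default_rng(0)
V, m, n_minus_1, h = 1000, 30, 3, 50

C = rng.normal(scale=0.1, size=(V, m))               # word feature vectors (learned jointly)
H = rng.normal(scale=0.1, size=(h, n_minus_1 * m))   # input-to-hidden weights
d = np.zeros(h)                                      # hidden bias
U = rng.normal(scale=0.1, size=(V, h))               # hidden-to-output weights
b = np.zeros(V)                                      # output bias

def next_word_probs(context_ids):
    """P(w_t | previous n-1 words) over every word in the vocabulary."""
    x = C[context_ids].reshape(-1)   # look up and concatenate context embeddings
    a = np.tanh(d + H @ x)           # hidden layer
    y = b + U @ a                    # unnormalized score for each candidate next word
    e = np.exp(y - y.max())          # numerically stable softmax
    return e / e.sum()

p = next_word_probs([12, 7, 431])    # three arbitrary context word indices
print(p.shape, p.sum())              # (1000,) 1.0
```

Because the feature vectors in C are shared across all contexts, an unseen word sequence whose words have embeddings close to those of a previously seen sequence receives a similar (and hence non-negligible) probability, which is the generalization mechanism the abstract describes.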