Part of Advances in Neural Information Processing Systems 1 (NIPS 1988)
Hervé Bourlard, C. J. Wellekens
Hidden Markov models are widely used for automatic speech recognition. They inherently incorporate the sequential character of the speech signal and are statistically trained. However, the a priori choice of the model topology limits their flexibility. Another drawback of these models is their weak discriminating power. Multilayer perceptrons are now promising tools in the connectionist approach for classification problems and have already been successfully tested on speech recognition problems. However, the sequential nature of the speech signal remains difficult to handle in that kind of machine. In this paper, a discriminant hidden Markov model is defined, and it is shown how a particular multilayer perceptron with contextual and extra feedback input units can be considered as a general form of such Markov models.
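To make the final idea concrete, the following is a minimal illustrative sketch (not the paper's actual model) of a multilayer perceptron whose input units combine a contextual window of acoustic frames with extra feedback units carrying the network's previous state output, so that each forward pass produces a posterior over Markov states. All dimensions, the random weights, and the `step` function are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): frames of size D,
# a context window of C frames on each side, and K Markov states.
D, C, K, H = 8, 2, 5, 16

# One hidden layer; random weights stand in for trained ones.
input_dim = (2 * C + 1) * D + K  # contextual frames + feedback of previous output
W1 = rng.standard_normal((input_dim, H)) * 0.1
W2 = rng.standard_normal((H, K)) * 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(context_frames, prev_output):
    """One time step: contextual input units plus extra feedback units."""
    x = np.concatenate([context_frames.ravel(), prev_output])
    h = np.tanh(x @ W1)
    return softmax(h @ W2)  # normalized output over the K states

# Run the network over a short random "utterance".
T = 10
frames = rng.standard_normal((T + 2 * C, D))
y = np.full(K, 1.0 / K)  # uniform initial state output
for t in range(T):
    y = step(frames[t:t + 2 * C + 1], y)

print(y.shape, float(y.sum()))
```

The feedback units are what give this feedforward classifier the sequential character the abstract attributes to Markov models: the output at time t conditions the input at time t + 1, so the network's state trajectory plays the role of the hidden state sequence.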