Part of Advances in Neural Information Processing Systems 20 (NIPS 2007)
Tony Jebara, Yingbo Song, Kapil Thadani
A method is proposed for semiparametric estimation where parametric and nonparametric criteria are exploited in density estimation and unsupervised learning. This is accomplished by making sampling assumptions on a dataset that smoothly interpolate from the extreme of independently distributed (or id) sample data (as in nonparametric kernel density estimators) to the extreme of independent identically distributed (or iid) sample data. This article makes independent similarly distributed (or isd) sampling assumptions and interpolates between these two extremes using a scalar parameter. The parameter controls a Bhattacharyya affinity penalty between pairs of distributions on samples. Surprisingly, the isd method maintains certain consistency and unimodality properties akin to maximum likelihood estimation. The proposed isd scheme is an alternative for handling nonstationarity in data without making drastic hidden variable assumptions, which often make estimation difficult and laden with local optima. Experiments in density estimation on a variety of datasets confirm the value of isd over iid estimation, id estimation and mixture modeling.
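The abstract's coupling penalty is built from the Bhattacharyya affinity between pairs of distributions, B(p, q) = ∫ √(p(x) q(x)) dx, which equals 1 for identical distributions and decays toward 0 as they separate. As a minimal illustrative sketch (not the paper's implementation), the coefficient has a well-known closed form for two univariate Gaussians; the function name and parameterization here are our own:

```python
import math

def bhattacharyya_gauss(mu1, var1, mu2, var2):
    """Bhattacharyya coefficient B(p, q) for two 1D Gaussians.

    Uses the standard closed form:
      B = sqrt(2*s1*s2 / (s1^2 + s2^2)) * exp(-(mu1 - mu2)^2 / (4*(s1^2 + s2^2)))
    with s_i the standard deviations (var_i = s_i^2).
    """
    # Amplitude term: penalizes mismatched variances (equals 1 when var1 == var2).
    coef = math.sqrt(2.0 * math.sqrt(var1 * var2) / (var1 + var2))
    # Exponential term: penalizes separated means.
    expo = -((mu1 - mu2) ** 2) / (4.0 * (var1 + var2))
    return coef * math.exp(expo)

# Identical Gaussians have affinity exactly 1; affinity falls off as means diverge.
print(bhattacharyya_gauss(0.0, 1.0, 0.0, 1.0))  # 1.0
print(bhattacharyya_gauss(0.0, 1.0, 2.0, 1.0))  # exp(-0.5), about 0.607
```

In an isd-style objective, a scalar weight on pairwise affinities like this would interpolate between fully independent (id) estimation, where the penalty vanishes, and fully tied (iid) estimation, where all per-sample distributions are pushed to agree.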