Large Margin Component Analysis

Part of Advances in Neural Information Processing Systems 19 (NIPS 2006)


Authors

Lorenzo Torresani, Kuang-chih Lee

Abstract

Metric learning has been shown to significantly improve the accuracy of k-nearest neighbor (kNN) classification. In problems involving thousands of features, distance learning algorithms cannot be used due to overfitting and high computational complexity. In such cases, previous work has relied on a two-step solution: first apply dimensionality reduction methods to the data, and then learn a metric in the resulting low-dimensional subspace. In this paper we show that better classification performance can be achieved by unifying the objectives of dimensionality reduction and metric learning. We propose a method that solves for the low-dimensional projection of the inputs that minimizes a metric objective aimed at separating points in different classes by a large margin. This projection is defined by a significantly smaller number of parameters than metrics learned in input space, and thus our optimization reduces the risk of overfitting. Theory and results are presented for both a linear and a kernelized version of the algorithm. Overall, we achieve classification rates similar, and in several cases superior, to those of support vector machines.
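
The sketch below illustrates the idea described in the abstract: instead of learning a full metric in the high-dimensional input space, learn a rectangular projection L (d x D with d much smaller than D) by gradient descent on a large-margin kNN objective, so that dimensionality reduction and metric learning are solved jointly. This is not the authors' reference implementation; the function name `fit_lmca`, the synthetic data, and the hyperparameters (number of target neighbors k, margin weight c, learning rate) are illustrative assumptions.

```python
# Minimal large-margin projection learning sketch (assumed details, not the paper's code).
import numpy as np

def fit_lmca(X, y, d=2, k=3, c=1.0, lr=1e-3, n_iter=100, seed=None):
    """Learn a low-rank projection L that pulls each point's k same-class
    target neighbors close while pushing differently labeled points outside
    a unit margin (hinge loss), in the spirit of large-margin metric learning."""
    rng = np.random.default_rng(seed)
    n, D = X.shape
    L = 0.1 * rng.standard_normal((d, D))  # d x D projection: far fewer parameters than a D x D metric

    # Target neighbors: the k nearest same-class points in the original input space.
    targets = []
    for i in range(n):
        same = np.where((y == y[i]) & (np.arange(n) != i))[0]
        dist = np.linalg.norm(X[same] - X[i], axis=1)
        targets.append(same[np.argsort(dist)[:k]])

    for _ in range(n_iter):
        grad = np.zeros_like(L)
        Z = X @ L.T  # projected points, n x d
        for i in range(n):
            for j in targets[i]:
                d_ij = X[i] - X[j]
                pull = np.sum((Z[i] - Z[j]) ** 2)
                grad += 2.0 * np.outer(Z[i] - Z[j], d_ij)  # gradient of the pull term ||L(x_i - x_j)||^2
                # Impostors: differently labeled points violating the unit margin.
                for l in np.where(y != y[i])[0]:
                    push = np.sum((Z[i] - Z[l]) ** 2)
                    if 1.0 + pull - push > 0:  # hinge term is active
                        d_il = X[i] - X[l]
                        grad += 2.0 * c * (np.outer(Z[i] - Z[j], d_ij)
                                           - np.outer(Z[i] - Z[l], d_il))
        L -= lr * grad  # gradient step directly on the rectangular projection
    return L

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Tiny synthetic example: two classes in 20 dimensions, projected to 2.
    X = np.vstack([rng.normal(0.0, 1.0, (20, 20)), rng.normal(1.5, 1.0, (20, 20))])
    y = np.array([0] * 20 + [1] * 20)
    L = fit_lmca(X, y, d=2, seed=0)
    print("Learned projection shape:", L.shape)  # (2, 20)
```

Because the objective is optimized directly over the rectangular matrix L rather than over a full D x D metric, the problem is non-convex but involves only d*D parameters; a kernelized variant, as mentioned in the abstract, would replace the input vectors with kernel expansions.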