Convergence and Consistency of Regularized Boosting Algorithms with Stationary B-Mixing Observations

Part of Advances in Neural Information Processing Systems 18 (NIPS 2005)


Authors

Aurelie C. Lozano, Sanjeev Kulkarni, Robert E. Schapire

Abstract

We study the statistical convergence and consistency of regularized Boosting methods when the samples are not independent and identically distributed (i.i.d.) but come from empirical processes of stationary β-mixing sequences. Using a technique that constructs a sequence of independent blocks close in distribution to the original samples, we prove the consistency of the composite classifiers resulting from a regularization achieved by restricting the 1-norm of the base classifiers' weights. Compared to the i.i.d. case, the nature of the sampling manifests in the consistency result only through a generalization of the original condition on the growth of the regularization parameter.
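A minimal LaTeX sketch of the setting described above; the symbols ($\mathcal{H}$, $\lambda_n$, $f_n$) are assumed for illustration and need not match the paper's exact notation.

% Regularized boosting via a 1-norm constraint on the combination weights
% (notation assumed, not taken verbatim from the paper).
% H is the base class; lambda_n is the regularization parameter.
\[
  \mathcal{F}_{\lambda_n}
    = \Big\{ f = \sum\nolimits_{j} w_j h_j \;:\;
             h_j \in \mathcal{H},\ \sum\nolimits_{j} |w_j| \le \lambda_n \Big\},
\]
% The composite classifier is the sign of an empirical risk minimizer
% over this constrained class:
\[
  g_n = \operatorname{sgn}(f_n), \qquad f_n \in \mathcal{F}_{\lambda_n}.
\]

Under this reading, consistency requires $\lambda_n \to \infty$ slowly enough; for β-mixing observations the growth condition is presumably stated in terms of an effective sample size (the number of nearly independent blocks) rather than the raw sample size $n$, which is how the dependence structure enters the result.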