NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
The paper proposes a variant of the k-Nearest Neighbors algorithm (called adaptive k-NN) in which $k$ is chosen for each example to classify, instead of being tuned as a global hyperparameter. To do so, the authors define a new notion, which they call the advantage, that applies locally in the input space and replaces the local Lipschitz condition often used in this setting. An important contribution of the paper is the proof that the proposed algorithm is consistent and converges pointwise in the limit. The proposed notion of advantage is also related to error bounds for pointwise convergence. The experimental part is clearly sufficient for this type of paper, even though there is no comparison with other state-of-the-art algorithms. It nevertheless shows experimentally how performance improves when the proposed algorithm is compared with classical k-NN. The reviewers and I consider that the theory developed in the paper is elegant and based on innovative ideas -- definitely not incremental. Also, the notion of advantage could be useful in other learning settings. For all these reasons, I recommend acceptance and a short talk. I think it is in the interest of the community to hear about this idea of “advantage”. However, I highly encourage the authors to incorporate the ideas contained in their rebuttal into the camera-ready version, since the rebuttal clearly addressed most of the concerns the reviewers had with the paper.
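For readers unfamiliar with the general idea of choosing $k$ per query rather than globally, a minimal sketch is given below. The stopping rule used here (grow $k$ until the majority-label margin is decisive) is only a hypothetical stand-in, not the authors' advantage-based rule, which is defined in the paper.

```python
import numpy as np

def adaptive_knn_predict(x_query, X_train, y_train, k_max=50, margin=0.1):
    """Classify x_query with a per-query choice of k.

    NOTE: illustrative sketch only. The stopping rule (grow k until the
    majority-label margin among the k neighbors exceeds `margin`) is a
    hypothetical stand-in for the paper's advantage-based selection.
    """
    # Order training points by distance to the query.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    order = np.argsort(dists)

    best_label = None
    for k in range(1, min(k_max, len(y_train)) + 1):
        neighbors = y_train[order[:k]]
        labels, counts = np.unique(neighbors, return_counts=True)
        top = np.argsort(counts)[::-1]
        best_label = labels[top[0]]
        # Margin between the two most frequent labels among the k neighbors.
        second = counts[top[1]] if len(counts) > 1 else 0
        if (counts[top[0]] - second) / k >= margin:
            break  # Local evidence is decisive enough; stop growing k.
    return best_label
```

The point of the sketch is only that $k$ becomes a per-query quantity driven by local evidence, which is the algorithmic idea the paper formalizes through the advantage.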