NIPS 2017
Mon Dec 4th through Sat the 9th, 2017 at Long Beach Convention Center
Paper ID: 2407 Ranking Data with Continuous Labels through Oriented Recursive Partitions

### Reviewer 1

The ms tries to solve a problem called continuous ranking, which is an extension of bipartite ranking. The idea of continuous ranking is to find a scoring function that increases or decreases with the output y with the highest probability. In bipartite ranking, each data point x is given a binary label y \in {+1, -1}, and the goal is to find a scoring function s(x) such that the difference between p(s(x) >= t | y = +1) and p(s(x) < t | y = -1) is maximal for every threshold t; this can be achieved by maximizing the AUC of the scoring function. The ms extends the bipartite ranking idea to a continuous output Y by introducing a threshold y that separates the data into two ordered parts. By recursively dividing the two ordered parts into further ordered parts, one constructs an ordered binary tree, which provides a continuous ranking on the leaf nodes. The ms requires the user to provide the scoring function s(x) and the distribution of the output Y; if the distribution of Y is not known, it is approximated by the empirical distribution of the training data. One can then compute the IAUC for the given scoring function and output distribution, and the ms proves that the IAUC can be used to evaluate the scoring function s(x) in the sense of continuous ranking.

The major contribution of the ms is on the theoretical side. However, my limited knowledge does not allow me to fully assess the theoretical novelty, so I will comment more from the practical side. The ms emphasizes the differences between continuous ranking and regression (Figure 2): in continuous ranking we optimize the IAUC, while in regression we minimize the mean squared error. The term "regression function" is a bit confusing, since on page 4, Section 3.2, Proposition 1, point 3, the regression function is an optimal scoring rule. The numerical experiment uses specially designed data: the generating function z²(z+1)(z+1.5)(z+2) is uncommon in real datasets, and there is no noise. The oscillation that causes problems in Figure 3b looks so small that I worry it would easily be dominated by noise. It would be nice to see some results on real applications.
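To make the bipartite criterion above concrete: the AUC being maximized is the probability that a randomly drawn positive example is scored above a randomly drawn negative one (ties counting one half). A minimal sketch, with `empirical_auc` and the toy arrays being my own illustrative names, not code from the ms:

```python
import numpy as np

def empirical_auc(scores, labels):
    """Empirical AUC: fraction of (positive, negative) pairs in which
    the positive is scored strictly higher, ties counting 1/2."""
    pos = scores[labels == 1]
    neg = scores[labels == -1]
    # all pairwise score differences between positives and negatives
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# toy check: a scorer that perfectly separates the classes attains AUC = 1
scores = np.array([0.9, 0.8, 0.3, 0.1])
labels = np.array([1, 1, -1, -1])
print(empirical_auc(scores, labels))  # → 1.0
```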

### Reviewer 2

This paper generalizes the bi-/multi-partite ranking problem: it uses pairs (x, y), where x is the feature vector and y is a continuous real-valued label rather than a discrete one, to find an optimal scoring function s(x). The existence of optimal scoring rules for continuous ranking is given by Proposition 1. A dedicated functional criterion, called the IROC curve here, or the maximization of the Kendall \tau related to the pair (s(x), y), is used as the performance measure. A recursive statistical learning algorithm tailored to empirical IROC curve optimization is presented; the resulting oriented binary tree can be used as a piecewise constant scoring function for continuous ranking.

My main concern about this paper is that the motivation for continuous ranking is not well described. The authors do not point out the disadvantages of bi-/multi-partite ranking or why the generalization to continuous ranking is meaningful. Whereas a discrete binary label is used when the measurement is hard to obtain and y only indicates relevance or not, a continuous real-valued label can be used for ranking itself. The authors summarize the potential applications as quality control and chemistry, but these scenarios are also suitable for bi-/multi-partite ranking. This paper needs a good example that displays the difference between continuous ranking and bi-/multi-partite ranking.

The authors should also point out the hardness of the proofs of Propositions 1 & 2 and Theorem 1; the proofs look like a trivial variant of the corresponding parts of bi-/multi-partite ranking. I think the authors should cite the bi-/multi-partite ranking papers and the AUC criterion paper:

- Stéphan Clémençon, Marine Depecker, Nicolas Vayatis: Ranking Forests. Journal of Machine Learning Research 14(1): 39-73 (2013).
- Aditya Krishna Menon, Robert C. Williamson: Bipartite Ranking: A Risk-Theoretic Perspective. Journal of Machine Learning Research 17(195): 1-102 (2016).
- Corinna Cortes, Mehryar Mohri: AUC Optimization vs. Error Rate Minimization. NIPS: 313-320 (2003).
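For reference, the Kendall \tau criterion mentioned above is simple to state: it is the normalized difference between the numbers of concordant and discordant pairs of (s(x), y). A generic sketch (my own illustration, not code from the paper):

```python
import itertools

def kendall_tau(s, y):
    """Kendall's tau between scores s and continuous labels y:
    (concordant pairs - discordant pairs) / total pairs."""
    n = len(s)
    concordant = discordant = 0
    for i, j in itertools.combinations(range(n), 2):
        prod = (s[i] - s[j]) * (y[i] - y[j])
        if prod > 0:
            concordant += 1   # s and y order the pair the same way
        elif prod < 0:
            discordant += 1   # s and y disagree on the pair
    return (concordant - discordant) / (n * (n - 1) / 2)

# a scorer that is monotone in y attains tau = 1
y = [0.2, 1.5, 3.0, 4.2]
s = [0.1, 0.4, 0.5, 0.9]
print(kendall_tau(s, y))  # → 1.0
```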

### Reviewer 3

The paper studies the problem of continuous ranking, a generalization of the bipartite/multi-partite ranking problem in which the ranking label Y is a continuous real-valued random variable. A mathematical formulation of the problem is given: continuous ranking is considered as a collection of sub-problems, each corresponding to a bipartite ranking problem. Necessary conditions for the existence of optimal solutions are given, and the IROC (and IAUC) measures are proposed for evaluating the performance of scoring functions. Finally, a continuous ranking algorithm called Crank is proposed, and experimental results on toy data show its effectiveness.

The problem investigated in the paper is very interesting and important for learning to rank and related areas. The paper is well written, and the proposed framework, measures, and algorithms are clearly presented. The empirical evaluation, however, is weak, as it is based on toy data and weak baselines.

Pros:
1. Generalizes the bipartite/multi-partite ranking problems to the continuous ranking problem, with a rigorous formulation of the problem and strong theoretical results.
2. Extends the conventional ROC to the IROC for measuring continuous ranking functions.
3. Proposes the Crank algorithm for conducting continuous ranking.

Cons:
1. The authors claim that "theoretical analysis of this algorithm and its connection with approximation of IROC* are beyond the scope of this paper and will be the subject of a future work". I am not convinced that these connections are beyond the scope of this paper. As I understand it, the goal of proposing the IROC is to guide the design of new continuous ranking algorithms, so it is very important to build these connections; otherwise, the performance of the Crank algorithm cannot reflect the effectiveness of the proposed theoretical framework.
2. The experiments in the paper are really weak. The proposed Crank algorithm is tested only on toy data, yet the authors conclude in the last section that the problem "... finds many applications (e.g., quality control, chemistry) ...". It is necessary to test the proposed framework and solutions on real problems. The baselines are CART and Kendall, which are not designed for the ranking problem; it would be better to compare the algorithm with state-of-the-art bipartite and multi-partite ranking models. The generation of the toy examples and the setting of the parameters are not given, which makes it hard to reproduce the results.
3. The literature survey of the paper is not sufficient. Since the continuous ranking problem is a generalization of the bipartite and multi-partite ranking problems, it would be better if the authors used a section to analyze the advantages and disadvantages of existing ranking models and their real applications. Currently, only a few references are listed in Section 2.2.
4. Minor issues: Line 52: Kendall tau should be Kendall $\tau$; Line 66: in F^{-1}(u) = inf{s \in R: F(s) >= t}, the last t should be u.
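To make the recursive partitioning idea concrete, here is a toy sketch in its spirit: split each cell at the median of y, separate the two halves with a one-dimensional decision stump on x, orient the children by their mean label, and read piecewise constant scores off the leaves. The names `fit_stump` and `crank_scores` and all simplifications are mine, not the authors' Crank implementation:

```python
import numpy as np

def fit_stump(x, z):
    """Threshold t on a 1-d feature minimizing misclassification of
    binary labels z in {0, 1} by the rule (x >= t), either orientation."""
    best_t, best_err = x.min(), np.inf
    for t in np.unique(x):
        err = np.mean((x >= t).astype(int) != z)
        err = min(err, 1.0 - err)  # allow the flipped orientation
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def crank_scores(x, y, depth):
    """Piecewise-constant scores from a recursive oriented partition."""
    scores = np.zeros(len(x))

    def recurse(idx, d, lo, hi):
        if d == 0 or len(idx) < 2:
            scores[idx] = (lo + hi) / 2  # leaf: one constant score per cell
            return
        # bipartite sub-problem: above vs below the median label
        z = (y[idx] >= np.median(y[idx])).astype(int)
        t = fit_stump(x[idx], z)
        left, right = idx[x[idx] < t], idx[x[idx] >= t]
        if len(left) == 0 or len(right) == 0:
            scores[idx] = (lo + hi) / 2
            return
        mid = (lo + hi) / 2
        if y[right].mean() >= y[left].mean():  # orient: higher-y child on top
            recurse(left, d - 1, lo, mid)
            recurse(right, d - 1, mid, hi)
        else:
            recurse(right, d - 1, lo, mid)
            recurse(left, d - 1, mid, hi)

    recurse(np.arange(len(x)), depth, 0.0, 1.0)
    return scores

# on monotone toy data, the leaf scores preserve the order of y
x = np.arange(8.0)
y = np.arange(8.0)
print(crank_scores(x, y, depth=2))
```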