NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 742
Title: Envy-Free Classification

Reviewer 1

The paper introduces a novel notion of fairness. This new notion of fairness and its motivation are clearly explained; in particular, the paper provides a clear explanation of the settings in which this notion is applicable, including an honest discussion of when it is NOT applicable (this reviewer wishes more papers in the field did this). The fairness definition is well motivated and elegant, and it introduces a conceptually novel perspective to ML fairness. To me, it seems likely that this work will have a significant influence on future research in the ML fairness field. The paper also provides a theoretical analysis of envy-free classification, focusing on generalization results. These results are elegant and nontrivial. The paper also includes preliminary results on the computational problem of performing envy-free ERM. The proposed heuristic is nontrivial, but considerably less elegant than the theoretical results. The experimental results give evidence that the proposed heuristic performs reasonably well, but do not compare the proposed methods to any nontrivial baselines (the only comparisons are to the optimal and the random classifiers). The paper is very well written overall, even if it sometimes seems to use more notation than necessary. (E.g., the paragraph starting with "Formally," in Section 4 does not seem to provide much value; the intuitive explanation in words is completely clear and much easier to read than the formal definition.) Overall, this is a very strong paper that should be of interest to the entire ML fairness community.

Reviewer 2

The paper is well written and approaches the problem methodically. This reviewer thought the mathematical formulation approximates intuitive notions of envy-free classification well. That said, concrete suggestions for utility and loss functions are missing. The example provided in the Appendix is useful for motivating randomized classifiers, but no example is provided overall to highlight the broader social value of envy-free classification. Theorem 1 is a nice result which initiates the discussion of generalization to the true distribution. Theorem 2 is a very nice result, and the covering/probabilistic argument is well explained in the proof. While the reviewer understands that Theorem 2 is provided as an illustrative result, the reviewer suggests the minor adjustment of abstracting the side length rather than fixing it to 1/4, and of not fixing epsilon to 1/10. Then, upon glancing at the result, a reader could see the dependence among the number of samples, the probabilistic term, and epsilon. As it stands, the result is murky at first glance. The reviewer likes the use of the Natarajan dimension in Theorem 3. It is a good result which points in the direction future research on this topic should go. The proof of Theorem 3 is technical and well executed. Lemma 4 extends known results about the Natarajan dimension and should be of independent interest. This reviewer has the most concerns about the empirical section of the paper. The authors attempt to illustrate their results on synthetic data by reverse engineering the randomizing classifier from the optimal envy-free result on linear one-vs-all classifiers, but (while this serves as a simple example of Theorem 3) the setup is contrived enough to raise concerns about broader applicability. The method appears to yield good results; however, the reviewer takes issue with how the dataset was generated.
While the reviewer understands that the model is not as easily tested on existing datasets, the reviewer would have liked to see a practical result which specifies envy-free classifiers for a particular family of distributions, and then applies the algorithm to a dataset sampled from that distribution. As it stands, the empirical methodology fixes a particular envy-free classifier and creates a dataset around this classifier. The reviewer feels that this is somewhat backward and does not provide a good example of the method in practice. In particular, the following details are opaque: (i) when the authors say "overall *** is envy-free", do they mean on average? This is not clear at all and could be clarified right up front, even as they argue for randomized classifiers. (ii) Most concerning in Section 5.3: how exactly is envy computed? For every pair and then averaged over all pairs (for the optimal $\eta$)? (iii) In lines 318-320, are the authors referring to $\alpha$ and $\beta$ when providing the fractions? This is entirely unclear. (iv) Lastly, Figures 3 and 4 could have much improved legends; surely eta can be written as a symbol. The CDF plot is not very clear either in terms of how it was obtained. Finally, one more comment: the authors suggest that utility could be easily identified in applications, but do not do so in an example. This reduces the charm of their elegant technical results. Minor: in Theorem 1, there appears to be an extra bracket following $\sup_{x \in \mathcal{X}}$ which is not needed. Nevertheless, on the whole, the paper is well written and the results are compelling.
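To make concrete what this reviewer means by "how exactly is envy computed," here is a minimal sketch of a sample-based pairwise envy check. All names and the representation are this reviewer's own assumptions, not the paper's implementation: `U[i, c]` is an assumed utility of individual i for outcome c, and `P[i, c]` is the probability the randomized classifier assigns outcome c to individual i; envy is then measured over all ordered pairs, with the (alpha, beta) relaxation applied to the fraction of pairs that envy by more than beta.

```python
import numpy as np

def pairwise_envy(U, P):
    """E[i, j] = u_i(h(x_j)) - u_i(h(x_i)): how much individual i
    would gain, in expectation, from receiving j's outcome lottery."""
    # cross[i, k] = sum_c U[i, c] * P[k, c] = expected utility for i
    # under individual k's lottery.
    cross = U @ P.T
    own = np.diag(cross)           # u_i(h(x_i))
    return cross - own[:, None]

def is_envy_free(U, P, alpha=0.0, beta=0.0):
    """(alpha, beta)-style check on a sample: at most an alpha fraction
    of ordered pairs (i, j), i != j, envy by more than beta."""
    E = pairwise_envy(U, P)
    n = U.shape[0]
    off_diag = ~np.eye(n, dtype=bool)       # exclude i == j
    envious_frac = (E[off_diag] > beta).mean()
    return bool(envious_frac <= alpha)

# Toy example: two individuals, two outcomes, each assigned
# their preferred outcome with probability 1 -- no envy.
U = np.array([[1.0, 0.0],
              [0.0, 1.0]])
P_good = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
P_bad = P_good[::-1]                # outcomes swapped -- mutual envy
print(is_envy_free(U, P_good))      # True
print(is_envy_free(U, P_bad))       # False
```

Whether the paper averages envy over pairs or bounds it per pair (and whether beta is applied before or after averaging) is exactly the ambiguity the review points out; the sketch above implements the per-pair reading.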

Reviewer 3

The paper takes an interesting, original notion of envy-free fairness that is well studied in other disciplines such as sociology, psychology, and economics, and applies it to machine learning. The paper is well written and easy to follow. It offers nice technical results on generalization for simple classifiers under the (a, b)-envy-freeness fairness constraint. While the paper offers a new notion of fairness, it does not discuss its justification and relationship to existing studies in other disciplines in detail, but rather takes the notion as a premise of the paper. It also makes a very strong assumption: the existence of a particular form of utility function u(x, h(x)). Both choices undermine the validity and usefulness of the results in the paper.