NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 4734
Title: Unlocking Fairness: a Trade-off Revisited

Reviewer 1

- The proposed objective function sounds like a reasonable idea, except that the unlabeled data is assumed to be fair. This seems like a critical limitation of the proposal.
- It is not obvious how to gather unlabeled features distributed according to D', which is essential for using the semi-supervised learning algorithm. In practice, testing the fairness condition on real data would be more helpful.
- All experiments were conducted on synthetic data; there is no analysis of real datasets.
- The examples used are not good and are somewhat confusing. For example, in NLP, differences between past and present data can be caused by a time-varying distribution; however, the authors treat this only as selection bias.
- To strengthen the paper, more measures of AI fairness should be considered.
- This paper does not provide downloadable software.

Minor points:
- Page 3, Background, line 5: replacing ‘protected dimensions’ with ‘protected attributes’ might be better.
- Page 4, formula (1): p(y|x) -> p(y|x,w)
- Page 6, Data generation, line 13: Where -> where
- There are no captions for Figures 1 and 2, and the example used to illustrate a hypothetical situation is very difficult to follow.

(-) Downloadable source code is provided, but a link to a downloadable version of the dataset or simulation environment is not provided.

Reviewer 2

The premise of the paper is that many other papers in the ML fairness literature assume there is an inevitable trade-off between fairness and accuracy, often without adequate justification for this assumption, and that many papers either assume that the data itself is unbiased or at least do not explicitly state their assumptions about the types of bias in the data. I am not fully convinced that the paper's characterization of previous work is accurate. In my view, most fairness papers work in one of the following two regimes:

1. The setting in which the learner has access to fair, correct ground truth. In this case, there is clearly no trade-off between fairness and accuracy: if the training and test data are fair, a perfectly accurate classifier would also be perfectly fair. (Some papers that study this setting even state this observation explicitly.) In this setting, most papers (that I am aware of) study whether fairness can also be ensured for imperfect classifiers, since training a perfect classifier is typically not feasible in practice. For instance, they would compare error metrics (e.g. false positive/negative rates) across subpopulations.

2. The setting in which the learner does NOT have access to fair, correct ground truth; i.e., the available data is biased, perhaps reflecting historical discrimination or contemporary biases of the people or processes that generated the data. In this setting, there would still be no trade-off between fairness and *true* accuracy, but since the data is assumed *not* to reflect the ideal fair truth, there is a trade-off between fairness and accuracy *as measured on incorrect and biased data.*

I agree with the authors that there are many ML papers that don't state their assumptions as clearly as they should. However, many prominent papers in the field do. Also, this paper seems to attribute a confusion of the two regimes described above to previous work, by saying that "most theoretical work in fair machine learning considers the regime in which neither [the data distribution nor the labeling function] is biased, and most empirical work [...] draws conclusions assuming the regime in which neither is biased" (i.e., the first regime), while also claiming that most papers assume a trade-off between fairness and ostensible accuracy (i.e., the second regime). Papers that I am familiar with don't confuse the two regimes; they clearly operate in only one of the two, even when they don't make this as clear and explicit as they ideally should. Also, several papers do state these assumptions explicitly, including stating that they assume label bias and that the reduction in accuracy for their proposed fair ML methods is due to the fact that they are evaluated on imperfect, biased data. Since the authors talk about the "prevailing wisdom," "most work," etc. in their paper, it is difficult to evaluate whether their assessment of the fairness literature or mine is more accurate; however, I am concerned that they might be attacking a straw man.

One of the main contributions of this paper is a set of experiments using synthetic data that simulate the second setting above, including synthetically generated bias, so that trained models can be evaluated both on the biased data (that a realistic learner would have access to) and on the unbiased ground truth (that in practice is not available to the learner). The analysis is more detailed than in previous work, but the idea is less novel than the paper makes it appear.
In the related work section, right after quoting Fish et al., the authors state: "in contrast, we are able to successfully control for label bias because we simulate data ..." Presumably, the "in contrast" refers at least in part to a purported contrast with Fish et al., which is incorrect. In fact, one of the contributions of that cited paper is a method for evaluating fair ML methods on simulated label bias, which is conceptually very similar to what this paper does. The authors don't seem to acknowledge this closely related prior work anywhere in their paper.

The other main contribution of the paper is a new, semi-supervised method for fair learning (where the fairness notion used is statistical parity). This fair learning method is essentially ERM with an added unfairness penalty term; from this perspective, the method is very similar to many previously proposed variants of the same idea of using a fairness regularizer. However, the way the authors use unlabeled data for this goal is novel and interesting (a rough sketch of this kind of objective is given after this review).

The authors provide experimental results with an interesting, detailed analysis. However, they unfortunately compare only against their own baselines and one previous method from the literature. Given the large number of proposed fair ML methods, many of which operate in a setting similar to this paper's and optimize for similar goals, it is disappointing that the authors don't give a more comprehensive comparison with a larger number of previous methods.

Finally, a note about language. I appreciate that the authors attempted to write their paper in more eloquent language than the typical bland and unnatural English that most research papers use. However, given the broad audience of these papers, which spans many languages and cultures, there is sometimes a good reason for using the lowest common denominator of English. Some of the flowery phrases of this paper ("torrents of ink have been spilled") seem unnecessary. Also, the use of foreign phrases when there is a simple English equivalent is not effective writing, and serves no practical purpose other than displaying the authors' erudition. For instance, I don't see any reason to use "cum grano salis" instead of the well-known English equivalent "with a grain of salt." The same holds for "dernier cri," etc.

Overall, the results are interesting and the paper is well written, but the main ideas of the paper (adding an unfairness penalty to ERM, evaluating models on synthetic bias) are not entirely novel, and the paper's engagement with previous work needs improvement.

Some more minor, detailed comments:
- The name "empirical fairness maximization" for the proposed method seems a little misleading; in reality, the method still minimizes empirical risk, with an unfairness penalty term. "Fair ERM" would be a more precise name, and is a term previous papers have used for similar methods.
- In the data generation, it is unclear why the authors decided to generate such a small data set. They explain why they wanted n >> k but an n still small enough to make learning challenging; however, they do not explain why they couldn't have had more features and a larger sample while preserving the relationship between n and k. The paper's data set size and dimensionality are very small compared to typical modern applications.
- In the footnote on page 6, the authors mention that instead of accuracy, they use F1 to measure the quality of their models due to the class imbalance in the data. However, since the authors control the data generation process, it's unclear why they couldn't have produced data with balanced labels.

AFTER AUTHOR RESPONSE: I thank the authors for their thoughtful response to my concerns. I acknowledge that I might have underestimated the confusion about label bias in the community; clarifying the assumptions about bias in fair ML is a valuable contribution. I have updated my overall score for the submission.
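For concreteness, the following is a minimal, hypothetical sketch of the kind of fairness-regularized objective described above: logistic-regression ERM plus a statistical-parity penalty estimated on unlabeled data. It is not the submission's actual formulation; the function and parameter names (fair_erm_loss, lambda_fair, the group indicator a_unlab) and the choice of a logistic model are illustrative assumptions only.

# Hypothetical sketch, NOT the submission's method: ERM with a statistical-parity
# penalty, where the penalty is estimated on an unlabeled pool.
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_erm_loss(w, X, y, X_unlab, a_unlab, lambda_fair=1.0):
    # Empirical risk: log loss of a logistic model on the labeled data.
    p = sigmoid(X @ w)
    log_loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # Unfairness penalty: gap in mean predicted score between groups a=1 and a=0,
    # computed on the unlabeled pool as a proxy for statistical parity.
    p_u = sigmoid(X_unlab @ w)
    parity_gap = abs(p_u[a_unlab == 1].mean() - p_u[a_unlab == 0].mean())
    return log_loss + lambda_fair * parity_gap

# Toy usage with random placeholder data (shapes only).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)
X_unlab, a_unlab = rng.normal(size=(500, 5)), rng.integers(0, 2, size=500)
w_hat = minimize(fair_erm_loss, np.zeros(5),
                 args=(X, y, X_unlab, a_unlab, 1.0), method="Nelder-Mead").x

In this sketch, only the penalty term touches the unlabeled pool; the empirical risk is computed on the labeled data alone.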

Reviewer 3

This paper provides a very compelling reframing of the common understanding that fairness and accuracy must be traded off in any fairness-aware algorithm. Instead, this paper convincingly argues that fairness and accuracy can be aligned, depending on assumptions about skew in labels and training data selection. These assumptions are clearly articulated, and experiments are provided for varying amounts of skew. This first contribution - the reframing of the fairness / accuracy tradeoff - is a critical one for the fair-ML literature and has the potential to be highly impactful within the subfield, especially given the strength of the articulation of the problem in the first few sections of this paper. Specifically, the paper argues that label bias and selection bias in the training set can lead to unfairness in a way that simultaneously decreases accuracy.

The second main contribution of the paper is a fair semi-supervised learning approach. This approach builds on the discussion of the tradeoff by assuming that the labels should be distributed according to a “we’re all equal” assumption. Given this assumption, the SSL technique is a fairly straightforward addition of a term to the loss function.

Experiments are then included based on synthetic data generated via a Bayesian network (see Supp Info) to allow for control of the amount of label and selection bias in the resulting data (a sketch of this kind of generator is given after this review). The resulting experiments show that in the case of label bias, the traditionally accepted fairness / accuracy tradeoff does not apply, and increasing fairness increases accuracy under the stated assumptions. The semi-supervised approach does a good job of achieving both fairness and accuracy.
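For illustration, here is a minimal, hypothetical generator of the kind described above, with explicit knobs for label bias and selection bias. This is not the paper's Bayesian network; the distributions, thresholds, and names (generate, label_bias, selection_bias) are assumptions made only for this sketch.

# Hypothetical sketch, NOT the paper's generator: synthetic data with tunable
# label bias (observed labels flipped against the a=1 group) and selection bias
# (a=1 rows under-sampled), so models can be scored on both y_obs and y_true.
import numpy as np

rng = np.random.default_rng(0)

def generate(n, label_bias=0.0, selection_bias=0.0):
    a = rng.integers(0, 2, size=n)                  # protected attribute
    x = rng.normal(size=(n, 3)) + 0.3 * a[:, None]  # features, mildly correlated with a
    # "Fair" label depends only on the features.
    y_true = (x.sum(axis=1) + rng.normal(scale=0.5, size=n) > 0.45).astype(int)
    # Label bias: positive labels in the a=1 group are flipped with probability label_bias.
    flip = (a == 1) & (y_true == 1) & (rng.random(n) < label_bias)
    y_obs = np.where(flip, 0, y_true)
    # Selection bias: a=1 rows are dropped from the sample with probability selection_bias.
    keep = ~((a == 1) & (rng.random(n) < selection_bias))
    return x[keep], a[keep], y_true[keep], y_obs[keep]

# Example: a biased training sample alongside its fair ground truth.
x, a, y_true, y_obs = generate(5000, label_bias=0.3, selection_bias=0.2)

Because both y_true and y_obs are returned, a model trained on the biased sample can be evaluated against the fair ground truth as well as against the observed labels.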