{"title": "From Parity to Preference-based Notions of Fairness in Classification", "book": "Advances in Neural Information Processing Systems", "page_first": 229, "page_last": 239, "abstract": "The adoption of automated, data-driven decision making in an ever expanding range of applications has raised concerns about its potential unfairness towards certain social groups. In this context, a number of recent studies have focused on defining, detecting, and removing unfairness from data-driven decision systems. However, the existing notions of fairness, based on parity (equality) in treatment or outcomes for different social groups, tend to be quite stringent, limiting the overall decision making accuracy. In this paper, we draw inspiration from the fair-division and envy-freeness literature in economics and game theory and propose preference-based notions of fairness -- given the choice between various sets of decision treatments or outcomes, any group of users would collectively prefer its treatment or outcomes, regardless of the (dis)parity as compared to the other groups. Then, we introduce tractable proxies to design margin-based classifiers that satisfy these preference-based notions of fairness. Finally, we experiment with a variety of synthetic and real-world datasets and show that preference-based fairness allows for greater decision accuracy than parity-based fairness.", "full_text": "From Parity to Preference-based Notions\n\nof Fairness in Classi\ufb01cation\n\nMuhammad Bilal Zafar\n\nMPI-SWS\n\nmzafar@mpi-sws.org\n\nIsabel Valera\n\nMPI-IS\n\nisabel.valera@tue.mpg.de\n\nManuel Gomez Rodriguez\n\nMPI-SWS\n\nmanuelgr@mpi-sws.org\n\nKrishna P. 
Gummadi\n\nMPI-SWS\n\ngummadi@mpi-sws.org\n\nUniversity of Cambridge & Alan Turing Institute\n\nAdrian Weller\n\naw665@cam.ac.uk\n\nAbstract\n\nThe adoption of automated, data-driven decision making in an ever expanding\nrange of applications has raised concerns about its potential unfairness towards\ncertain social groups. In this context, a number of recent studies have focused on\nde\ufb01ning, detecting, and removing unfairness from data-driven decision systems.\nHowever, the existing notions of fairness, based on parity (equality) in treatment\nor outcomes for different social groups, tend to be quite stringent, limiting the\noverall decision making accuracy. In this paper, we draw inspiration from the fair-\ndivision and envy-freeness literature in economics and game theory and propose\npreference-based notions of fairness\u2014given the choice between various sets of\ndecision treatments or outcomes, any group of users would collectively prefer its\ntreatment or outcomes, regardless of the (dis)parity as compared to the other groups.\nThen, we introduce tractable proxies to design margin-based classi\ufb01ers that satisfy\nthese preference-based notions of fairness. Finally, we experiment with a variety\nof synthetic and real-world datasets and show that preference-based fairness allows\nfor greater decision accuracy than parity-based fairness.\n\n1\n\nIntroduction\n\nAs machine learning is increasingly being used to automate decision making in domains that affect\nhuman lives (e.g., credit ratings, housing allocation, recidivism risk prediction), there are growing\nconcerns about the potential for unfairness in such algorithmic decisions [23, 25]. 
A \ufb02urry of recent\nresearch on fair learning has focused on de\ufb01ning appropriate notions of fairness and then designing\nmechanisms to ensure fairness in automated decision making [12, 14, 18, 19, 20, 21, 28, 32, 33, 34].\nExisting notions of fairness in the machine learning literature are largely inspired by the concept of\ndiscrimination in social sciences and law. These notions call for parity (i.e., equality) in treatment,\nin impact, or both. To ensure parity in treatment (or treatment parity), decision making systems need\nto avoid using users\u2019 sensitive attribute information, i.e., avoid using the membership information in\nsocially salient groups (e.g., gender, race), which are protected by anti-discrimination laws [4, 10]. As\na result, the use of group-conditional decision making systems is often prohibited. To ensure parity in\nimpact (or impact parity), decision making systems need to avoid disparity in the fraction of users\nbelonging to different sensitive attribute groups (e.g., men, women) that receive bene\ufb01cial decision\noutcomes. A number of learning mechanisms have been proposed to achieve parity in treatment [24],\n\nAn open-source code implementation of our scheme is available at: http://fate-computing.mpi-sws.org/\n\n31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.\n\n\ff2\n\n-ve\n\n+ve\n\nf2\n\nf2\n\n-ve\n\n+ve\n\nM (100)\n\nW (100)\nM (200)\n\nW (200)\n\nM (100)\n\nW (100)\nM (200)\n\nW (200)\n\nf1\nBene\ufb01t: 0% (M), 67% (W)\n\nAcc: 0.83 \n\nf1\nBene\ufb01t: 22% (M), 22% (W)\n\nAcc: 0.72 \n\nM (100)\n\nW (100)\nM (200)\n\n+ve\n-ve\n\nW (200)\n\nf1\nBene\ufb01t: 33% (M), 67% (W)\n\nAcc: 1.00 \n\n+ve\n-ve\n\nFigure 1: A \ufb01ctitious decision making scenario involving two groups: men (M) and women (W). Feature f1\n(x-axis) is highly predictive for women whereas f2 (y-axis) is highly predictive for men. Green (red) quadrants\ndenote the positive (negative) class. 
Within each quadrant, the points are distributed uniformly and the numbers\nin parenthesis denote the number of subjects in that quadrant. The left panel shows the optimal classi\ufb01er\nsatisfying parity in treatment. This classi\ufb01er leads to all the men getting classi\ufb01ed as negative. The middle\npanel shows the optimal classi\ufb01er satisfying parity in impact (in addition to parity in treatment). This classi\ufb01er\nachieves impact parity by misclassifying women from positive class into negative class, and in the process,\nincurs a signi\ufb01cant cost in terms of accuracy. The right panel shows a classi\ufb01er consisting of group-conditional\nclassi\ufb01ers for men (purple) and women (blue). Both the classi\ufb01ers satisfy the preferred treatment criterion since\nfor each group, adopting the other group\u2019s classi\ufb01er would lead to a smaller fraction of bene\ufb01cial outcomes.\nAdditionally, this group-conditional classi\ufb01er is also a preferred impact classi\ufb01er since both groups get more\nbene\ufb01t as compared to the impact parity classi\ufb01er. The overall accuracy is better than the parity classi\ufb01ers.\n\nparity in impact [7, 18, 21] or both [12, 14, 17, 20, 32, 33, 34]. However, these mechanisms pay a\nsigni\ufb01cant cost in terms of the accuracy (or utility) of their predictions. In fact, there exist some\ninherent tradeoffs (both theoretical and empirical) between achieving high prediction accuracy and\nsatisfying treatment and / or impact parity [9, 11, 15, 22].\nIn this work, we introduce, formalize and evaluate new notions of fairness that are inspired by the\nconcepts of fair division and envy-freeness in economics and game theory [5, 26, 31]. Our work\nis motivated by the observation that, in certain decision making scenarios, the existing parity-based\nfairness notions may be too stringent, precluding more accurate decisions, which may also be desired\nby every sensitive attribute group. 
To relax these parity-based notions, we introduce the concept of a\nuser group\u2019s preference for being assigned one set of decision outcomes over another. Given the\nchoice between various sets of decision outcomes, any group of users would collectively prefer the\nset that contains the largest fraction (or the greatest number) of bene\ufb01cial decision outcomes for that\ngroup.1 More speci\ufb01cally, our new preference-based notions of fairness, which we formally de\ufb01ne in\nthe next section, use the concept of user group\u2019s preference as follows:\n\u2014 From Parity Treatment to Preferred Treatment: To offer preferred treatment, a decision making\nsystem should ensure that every sensitive attribute group (e.g., men and women) prefers the set of\ndecisions they receive over the set of decisions they would have received had they collectively\npresented themselves to the system as members of a different sensitive group.\nThe preferred treatment criterion represents a relaxation of treatment parity. That is, every decision\nmaking system that achieves treatment parity also satis\ufb01es the preferred treatment condition, which\nimplies (in theory) that the optimal decision accuracy that can be achieved under the preferred\ntreatment condition is at least as high as the one achieved under treatment parity. Additionally,\npreferred treatment allows group-conditional decision making (not allowed by treatment parity),\nwhich is necessary to achieve high decision accuracy in scenarios when the predictive power of\nfeatures varies greatly between different sensitive user groups [13], as shown in Figure 1.\nWhile preferred treatment is a looser notion of fairness than treatment parity, it retains a core fairness\nproperty embodied in treatment parity, namely, envy-freeness at the level of user groups. 
Under\npreferred treatment, no group of users (e.g., men or women, blacks or whites) would feel that they\nwould be collectively better off by switching their group membership (e.g., gender, race). Thus,\n\n1Although it is quite possible that certain individuals from the group may not prefer the set that maximizes the bene\ufb01t for the group as a\n\nwhole.\n\n2\n\n \fpreferred treatment decision making, despite allowing group-conditional decision making, is not\nvulnerable to being characterized as \u201creverse discrimination\u201d against, or \"af\ufb01rmative action\u201d for\ncertain groups.\n\u2014 From Parity Impact to Preferred Impact: To offer preferred impact, a decision making system\nneeds to ensure that every sensitive attribute group (e.g., men and women) prefers the set of decisions\nthey receive over the set of decisions they would have received under the criterion of impact parity.\nThe preferred impact criterion represents a relaxation of impact parity. That is, every decision making\nsystem that achieves impact parity also satis\ufb01es the preferred impact condition, which implies (in\ntheory) that the optimal decision accuracy that can be achieved under the preferred impact condition\nis at least as high as the one achieved under impact parity. Additionally, preferred impact allows\ndisparity in bene\ufb01ts received by different groups, which may be justi\ufb01ed in scenarios where insisting\non impact parity would only lead to a reduction in the bene\ufb01cial outcomes received by one or more\ngroups, without necessarily improving them for any other group. 
In such scenarios, insisting on\nimpact parity can additionally lead to a reduction in the decision accuracy, creating a case of tragedy\nof impact parity with a worse decision making all round, as shown in Figure 1.\nWhile preferred impact is a looser notion of fairness compared to impact parity, by guaranteeing\nthat every group receives at least as many bene\ufb01cial outcomes as they would would have received\nunder impact parity, it retains the core fairness gains in bene\ufb01cial outcomes that discriminated groups\nwould have achieved under the fairness criterion of impact parity.\nFinally, we note that our preference-based fairness notions, while having many attractive properties,\nare not the most suitable notions of fairness in all scenarios. In certain cases, parity fairness may well\nbe the eventual goal [3] and the more desirable notion.\nIn the remainder of this paper, we formalize our preference-based fairness notions in the context\nof binary classi\ufb01cation (Section 2), propose tractable and ef\ufb01cient proxies to include these notions\nin the formulations of convex margin-based classi\ufb01ers in the form of convex-concave constraints\n(Section 3), and show on several real world datasets that our preference-based fairness notions can\nprovide signi\ufb01cant gains in overall decision making accuracy as compared to parity-based fairness\n(Section 4).\n\n2 De\ufb01ning preference-based fairness for classi\ufb01cation\n\nIn this section, we will \ufb01rst introduce two useful quality metrics\u2014utility and group bene\ufb01t\u2014in the\ncontext of fairness in classi\ufb01cation, then revisit parity-based fairness de\ufb01nitions in the light of these\nquality metrics, and \ufb01nally formalize the two preference-based notions of fairness introduced in\nSection 1 from the perspective of the above metrics. 
For simplicity, we consider binary classi\ufb01cation\ntasks, however, the de\ufb01nitions can be easily extended to m-ary classi\ufb01cation.\nQuality metrics in fair classi\ufb01cation. In a fair (binary) classi\ufb01cation task, one needs to \ufb01nd a\nmapping between the user feature vectors x 2 Rd and class labels y 2 {1, 1}, where (x, y)\nare drawn from an (unknown) distribution f (x, y). This is often achieved by \ufb01nding a mapping\nfunction \u2713 : Rd ! R such that given a feature vector x with an unknown label y, the corresponding\nclassi\ufb01er predicts \u02c6y = sign(\u2713(x)). However, this mapping function also needs to be fair with respect\nto the values of a user sensitive attribute z 2 Z \u2713 Z0 (e.g., sex, race), which are drawn from\nan (unknown) distribution f (z) and may be dependent of the feature vectors and class labels, i.e.,\nf (x, y, z) = f (x, y|z)f (z) 6= f (x, y)f (z).\nGiven the above problem setting, we introduce the following quality metrics, which we will use to\nde\ufb01ne and compare different fairness notions:\nI. Utility (U): overall pro\ufb01t obtained by the decision maker using the classi\ufb01er. For example, in a\nloan approval scenario, the decision maker is the bank that gives the loan and the utility can be\nthe overall accuracy of the classi\ufb01er, i.e.:\n\nU(\u2713) = Ex,y[I{sign(\u2713(x)) = y}],\n\nwhere I(\u00b7) denotes the indicator function and the expectation is taken over the distribution\nf (x, y). It is in the decision maker\u2019s interest to use classi\ufb01ers that maximize utility. Moreover,\ndepending on the scenario, one can attribute different pro\ufb01t to true positives and true negatives\u2014\nor conversely, different cost to false negatives and false positives\u2014while computing utility. For\n\n3\n\n\fexample, in the loan approval scenario, marking an eventual defaulter as non-defaulter may have\na higher cost than marking a non-defaulter as defaulter. 
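The utility metric just defined is an expectation over f(x, y); in practice it is estimated on a finite sample. A minimal sketch of the empirical estimate for a linear classifier, assuming labels in {-1, +1} (the function name is ours, not from the paper's released code):

```python
import numpy as np

def utility(theta, X, y):
    """Empirical utility U(theta): overall accuracy of the linear
    classifier y_hat = sign(theta^T x) on a sample (X, y)."""
    y_hat = np.sign(X @ theta)
    return np.mean(y_hat == y)
```

A cost-sensitive variant would simply weight the agreement indicator differently for positive and negative examples, matching the remark about asymmetric costs above.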
For simplicity, in the remainder of the\npaper, we will assume that the pro\ufb01t (cost) for true (false) positives and negatives is the same.\nII. Group bene\ufb01t (Bz): the fraction of bene\ufb01cial outcomes received by users sharing a certain value\nof the sensitive attribute z (e.g., blacks, hispanics, whites). For example, in a loan approval\nscenario, the bene\ufb01cial outcome for a user may be receiving the loan and the group bene\ufb01t for\neach value of z can be de\ufb01ned as:\n\nBz(\u2713) = Ex|z[I{sign(\u2713(x)) = 1}],\n\nwhere the expectation is taken over the conditional distribution f (x|z) and the bank offers a loan\nto a user if sign(\u2713(x)) = 1. Moreover, as suggested by some recent studies in fairness-aware\nlearning [18, 22, 32], the group bene\ufb01ts can also be de\ufb01ned as the fraction of bene\ufb01cial outcomes\nconditional on the true label of the user. For example, in a recidivism prediction scenario, the\ngroup bene\ufb01ts can be de\ufb01ned as the fraction of eventually non-offending defendants sharing a\ncertain sensitive attribute value getting bail, that is:\n\nwhere the expectation is taken over the conditional distribution f (x|z, y = 1), y = 1 indicates\nthat the defendant does not re-offend, and bail is granted if sign(\u2713(x)) = 1.\n\nBz(\u2713) = Ex|z,y=1[I{sign(\u2713(x)) = 1}],\n\nfor all z, z0 2 Z.\n\nfor all z, z0 2 Z.\n\nParity-based fairness. A number of recent studies [7, 14, 18, 21, 32, 33, 34] have considered a\nclassi\ufb01er to be fair if it satis\ufb01es the impact parity criterion. 
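Both group-benefit definitions above (unconditional, and conditioned on the true label) can likewise be estimated empirically. A sketch under the same linear-classifier assumption, with illustrative function names of our choosing:

```python
import numpy as np

def group_benefit(theta, X, z, group):
    """Empirical B_z: fraction of users with sensitive attribute value
    z == group receiving the beneficial (positive) outcome."""
    mask = (z == group)
    return np.mean(np.sign(X[mask] @ theta) == 1)

def group_benefit_cond(theta, X, y, z, group):
    """Variant conditioned on the true label, e.g. the fraction of
    eventually non-offending (y == 1) group members granted bail."""
    mask = (z == group) & (y == 1)
    return np.mean(np.sign(X[mask] @ theta) == 1)
```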
That is, it ensures that the group bene\ufb01ts\nfor all the sensitive attribute values are equal, i.e.:\nBz(\u2713) = Bz0(\u2713)\n\n(1)\nIn this context, different (or often same) de\ufb01nitions of group bene\ufb01t (or bene\ufb01cial outcome) have\nlead to different terminology, e.g., disparate impact [14, 33], indirect discrimination [14, 21], redlin-\ning [7], statistical parity [12, 11, 22, 34], disparate mistreatment [32], or equality of opportunity [18].\nHowever, all of these group bene\ufb01t de\ufb01nitions invariably focus on achieving impact parity. We\npoint interested readers to Feldman et al. [14] and Zafar et al. [32] regarding the discussion on this\nterminology.\nAlthough not always explicitly sought, most of the above studies propose classi\ufb01ers that also satisfy\ntreatment parity in addition to impact parity, i.e., they do not use the sensitive attribute z in the\ndecision making process. However, some of them [7, 18, 21] do not satisfy treatment parity since\nthey resort to group-conditional classi\ufb01ers, i.e., \u2713 = {\u2713z}z2Z. In such case, we can rewrite the above\nparity condition as:\n(2)\nFairness beyond parity. Given the above quality metrics, we can now formalize the two preference-\nbased fairness notions introduced in Section 1.\n\u2014 Preferred treatment: if a classi\ufb01er \u2713 resorts to group-conditional classi\ufb01ers, i.e., \u2713 = {\u2713z}z2Z,\nit is a preferred treatment classi\ufb01er if each group sharing a sensitive attribute value z bene\ufb01ts\nmore from its corresponding group-conditional classi\ufb01er \u2713z than it would bene\ufb01t if it would be\nclassi\ufb01ed by any of the other group-conditional classi\ufb01ers \u2713z0, i.e.,\nfor all z, z0 2 Z.\n\n(3)\nNote that, if a classi\ufb01er \u2713 does not resort to group-conditional classi\ufb01ers, i.e., \u2713z = \u2713 for all\nz 2 Z, it will be always be a preferred treatment classi\ufb01er. 
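The preferred treatment condition of Eq. (3) can be checked directly from a set of trained group-conditional classifiers. A sketch (our own helper, assuming linear classifiers and labels as before), which compares every group's benefit under its own classifier against its benefit under every other group's classifier:

```python
import numpy as np

def is_preferred_treatment(thetas, X, z):
    """Check Eq. (3): each group z benefits at least as much from its own
    classifier theta_z as it would from any other group's theta_z'.
    `thetas` maps a sensitive attribute value to a parameter vector."""
    def benefit(theta, group):
        Xg = X[z == group]
        return np.mean(np.sign(Xg @ theta) == 1)
    groups = list(thetas)
    return all(
        benefit(thetas[g], g) >= benefit(thetas[g2], g)
        for g in groups for g2 in groups
    )
```

Note that a single shared classifier trivially passes this check, matching the observation that treatment parity implies preferred treatment.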
If, in addition, such classi\ufb01er ensures\nimpact parity, it is easy to show that its utility cannot be larger than a preferred treatment classi\ufb01er\nconsisting of group-conditional classi\ufb01ers.\n\nBz(\u2713z) = Bz0(\u2713z0)\n\nBz(\u2713z)  Bz(\u2713z0)\n\n\u2014 Preferred impact: a classi\ufb01er \u2713 offers preferred impact over a classi\ufb01er \u27130 ensuring impact\n\nparity if it achieves higher group bene\ufb01t for each sensitive attribute value group, i.e.,\n\nBz(\u2713)  Bz(\u27130)\n\n(4)\nOne can also rewrite the above condition for group-conditional classi\ufb01ers, i.e., \u2713 = {\u2713z}z2Z\nand \u27130 = {\u27130z}z2Z, as follows:\n(5)\nfor all z 2 Z.\nNote that, given any classi\ufb01er \u27130 ensuring impact parity, it is easy to show that there will always\nexist a preferred impact classi\ufb01er \u2713 with equal or higher utility.\n\nBz(\u2713z)  Bz(\u27130z)\n\nfor all z 2 Z.\n\n4\n\n\fConnection to the fair division literature. Our notion of preferred treatment is inspired by the\nconcept of envy-freeness [5, 31] in the fair division literature. Intuitively, an envy-free resource\ndivision ensures that no user would prefer the resources allocated to another user over their own\nallocation. Similarly, our notion of preferred treatment ensures envy-free decision making at the\nlevel of sensitive attribute groups. Speci\ufb01cally, with preferred treatment classi\ufb01cation, no sensitive\nattribute group would prefer the outcomes from the classi\ufb01er of another group.\nOur notion of preferred impact draws inspiration from the two-person bargaining problem [26] in\nthe fair division literature. In a bargaining scenario, given a base resource allocation (also called the\ndisagreement point), two parties try to divide some additional resources between themselves. If the\nparties cannot agree on a division, no party gets the additional resources, and both would only get the\nallocation speci\ufb01ed by the disagreement point. 
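The preferred impact condition of Eq. (5) is a per-group Pareto comparison against a given impact parity classifier. A sketch in the same style as above (helper names are ours):

```python
import numpy as np

def is_preferred_impact(thetas, parity_thetas, X, z):
    """Check Eq. (5): every group's benefit under its classifier theta_z
    is at least its benefit under the given impact-parity classifier
    theta'_z. Both arguments map group values to parameter vectors."""
    def benefit(theta, group):
        Xg = X[z == group]
        return np.mean(np.sign(Xg @ theta) == 1)
    return all(
        benefit(thetas[g], g) >= benefit(parity_thetas[g], g)
        for g in thetas
    )
```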
Taking the resources to be the bene\ufb01cial outcomes,\nand the disagreement point to be the allocation speci\ufb01ed by the impact parity classi\ufb01er, a preferred\nimpact classi\ufb01er offers enhanced bene\ufb01ts to all the sensitive attribute groups. Put differently, the\ngroup bene\ufb01ts provided by the preferred impact classi\ufb01er Pareto-dominate the bene\ufb01ts provided by\nthe impact parity classi\ufb01er.\nOn individual-level preferences. Notice that preferred treatment and preferred impact notions are\nde\ufb01ned based on the group preferences, i.e., whether a group as a whole prefers (or, gets more\nbene\ufb01cial outcomes from) a given set of outcomes over another set. It is quite possible that a set\nof outcomes preferred by the group collectively is not preferred by certain individuals in the group.\nConsequently, one can extend our proposed notions to account for individual preferences as well,\ni.e., a set of outcomes is preferred over another if all the individuals in the group prefer it. In the\nremainder of the paper, we focus on preferred treatment and preferred impact in the context of group\npreferences, and leave the case of individual preferences and its implications on the cost of achieving\nfairness for future work.\n\n3 Training preferred classi\ufb01ers\n\nIn this section, our goal is training preferred treatment and preferred impact group-conditional\ni=1, where\nclassi\ufb01ers, i.e., \u2713 = {\u2713z}z2Z, that maximize utility given a training set D = {(xi, yi, zi)}N\n(xi, yi, zi) \u21e0 f (x, y, z). In both cases, we will assume that:2\nI. Each group-conditional classi\ufb01er is a convex boundary-based classi\ufb01er. For ease of exposition,\nin this section, we additionally assume these classi\ufb01ers to be linear, i.e., \u2713z(x) = \u2713T\nz x, where\n\u2713z is a parameter that de\ufb01nes the decision boundary in the feature space. 
We relax the linearity\nassumption in Appendix A and extend our methodology to a non-linear SVM classi\ufb01er.\n\nII. The utility function U is de\ufb01ned as the overall accuracy of the group-conditional classi\ufb01ers, i.e.,\n(6)\n\nEx,y|z[I{sign(\u2713T\n\nz x) = y}]f (z).\n\nU(\u2713) = Ex,y[I{sign(\u2713(x)) = y}] =Xz2Z\n\nprobability of being classi\ufb01ed into the positive class, i.e.,\n\nIII. The group bene\ufb01t Bz for users sharing the sensitive attribute value z is de\ufb01ned as their average\n(7)\n\nBz(\u2713) = Ex|z[I{sign(\u2713(x)) = 1}] = Ex|z[I{sign(\u2713T\n\nz x) = 1}].\n\nPreferred impact classi\ufb01ers. Given a impact parity classi\ufb01er with decision boundary parameters\n{\u27130z}z2Z, one could think of \ufb01nding the decision boundary parameters {\u2713z}z2Z of a preferred impact\nclassi\ufb01er that maximizes utility by solving the following optimization problem:\n\nminimize\n\n{\u2713z}\n\nNP(x,y,z)2D I{sign(\u2713T\n 1\n\nz x) = y}\n\nz x) = 1} Px2Dz I{sign(\u27130z\n\nsubject to Px2Dz I{sign(\u2713T\nwhere Dz = {(xi, yi, zi) 2 D | zi = z} denotes the set of users in the training set sharing sensitive\nattribute value z, the objective uses an empirical estimate of the utility, de\ufb01ned by Eq. 6, and the\npreferred impact constraints, de\ufb01ned by Eq. 5, use empirical estimates of the group bene\ufb01ts, de\ufb01ned\nby Eq. 7. Here, note that the right hand side of the inequalities does not contain any variables and can\nbe precomputed, i.e., the impact parity classi\ufb01ers {\u27130z}z2Z are given.\n2Exploring the relaxations of these assumptions is a very interesting avenue for future work.\n\nfor all z 2 Z,\n\nT x) = 1}\n\n(8)\n\n5\n\n\fUnfortunately, it is very challenging to solve the above optimization problem since both the objective\nand constraints are nonconvex. 
To overcome this dif\ufb01culty, we minimize instead a convex loss\nfunction `\u2713(x, y), which is classi\ufb01er dependent [6], and approximate the group bene\ufb01ts using a ramp\n(convex) function r(x) = max(0, x), i.e.,\n\n{\u2713z}\n\nminimize\n\nNP(x,y,z)2D `\u2713z (x, y) +Pz2Z z\u2326(\u2713z)\n 1\nsubject to Px2Dz\n\nz x) Px2Dz\n\nmax(0, \u27130z\n\nmax(0, \u2713T\n\nT x)\n\nfor all z 2 Z,\n\nwhich, for any convex regularizer \u2326(\u00b7), is a disciplined convex-concave program (DCCP) and thus\ncan be ef\ufb01ciently solved using well-known heuristics [30]. For example, if we particularize the above\nformulation to group-conditional (standard) logistic regression classi\ufb01ers \u27130z and \u2713z and L2-norm\nregularizer, then, Eq. 9 adopts the following form:\n\n{\u2713z}\n\nminimize\n\nNP(x,y,z)2D log p(y|x, \u2713z) +Pz2Z z||\u2713z||2\n 1\nsubject to Px2Dz\n\nz x) Px2Dz\n\nmax(0, \u2713T\nz x .\n\nmax(0, \u27130z\n\nT x)\n\nfor all z 2 Z.\n\n1\n\n1+e\u2713T\n\nwhere p(y = 1|x, \u2713z) =\nThe constraints can similarly be added to other convex boundary-based classi\ufb01ers like linear SVM.\nWe further expand on particularizing the constraints for non-linear SVM in Appendix A.\nPreferred treatment classi\ufb01ers. Similarly as in the case of preferred impact classi\ufb01ers, one could\nthink of \ufb01nding the decision boundary parameters {\u2713z}z2Z of a preferred treatment classi\ufb01er that\nmaximizes utility by solving the following optimization problem:\n\nminimize\n\n{\u2713z}\n\nNP(x,y,z)2D I{sign(\u2713T\n 1\n\nz x) = y}\n\nz0x) = 1}\n\nz x) = 1} Px2Dz I{sign(\u2713T\n\nsubject to Px2Dz I{sign(\u2713T\nwhere Dz = {(xi, yi, zi) 2 D | zi = z} denotes the set of users in the training set sharing sensitive\nattribute value z, the objective uses an empirical estimate of the utility, de\ufb01ned by Eq. 6, and the\npreferred treatment constraints, de\ufb01ned by Eq. 3, use empirical estimates of the group bene\ufb01ts, de\ufb01ned\nby Eq. 7. 
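To make the ramp-based relaxation of the preferred impact problem (Eq. 9) concrete, here is a toy stand-in: per-group logistic loss with L2 regularization, where the group-benefit constraint is enforced with a simple subgradient penalty. The paper itself solves the problem as a disciplined convex-concave program using the DCCP heuristics of Shen et al. [30]; this loop is only an illustrative approximation, and all names and hyperparameters are our assumptions:

```python
import numpy as np

def ramp(u):
    """Convex ramp r(u) = max(0, u), the paper's tractable proxy for the
    indicator I{sign(u) = 1} in the group-benefit constraints."""
    return np.maximum(0.0, u)

def train_preferred_impact(X, y, z, parity_thetas, lam=1.0, pen=10.0,
                           lr=0.05, iters=2000, seed=0):
    """Toy penalty/subgradient sketch of Eq. (9): minimize per-group
    logistic loss + L2 regularization, subject to
    sum_{x in D_z} ramp(theta_z^T x) >= sum_{x in D_z} ramp(theta'_z^T x).
    Not the DCCP solver used in the paper."""
    rng = np.random.default_rng(seed)
    groups = sorted(set(z.tolist()))
    thetas = {g: rng.normal(scale=0.01, size=X.shape[1]) for g in groups}
    # Right-hand sides are constants: the parity classifiers are given.
    rhs = {g: ramp(X[z == g] @ parity_thetas[g]).sum() for g in groups}
    for _ in range(iters):
        for g in groups:
            Xg, yg = X[z == g], y[z == g]
            th = thetas[g]
            # Gradient of the average logistic loss (labels in {-1, +1}).
            s = -yg / (1.0 + np.exp(yg * (Xg @ th)))
            grad = Xg.T @ s / len(yg) + 2.0 * lam * th
            # Subgradient of the penalty pen * max(0, rhs - lhs).
            if ramp(Xg @ th).sum() < rhs[g]:
                grad -= pen * Xg.T @ (Xg @ th > 0).astype(float)
            thetas[g] = th - lr * grad
    return thetas
```

The preferred treatment constraints of Eq. (12) would be handled analogously, except that both sides of each inequality then contain optimization variables.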
Here, note that both the left and right hand side of the inequalities contain optimization\nvariables.\nHowever, the objective and constraints in the above problem are also nonconvex and thus we adopt a\nsimilar strategy as in the case of preferred impact classi\ufb01ers. More speci\ufb01cally, we solve instead the\nfollowing tractable problem:\n\nfor all z, z0 2 Z,\n\n(11)\n\n(9)\n\n(10)\n\nfor all z, z0 2 Z,\n\n(12)\n\n{\u2713z}\n\nminimize\n\nNP(x,y,z)2D `\u2713z (x, y) +Pz2Z z\u2326(\u2713z)\n 1\nsubject to Px2Dz\n\nz x) Px2Dz\n\nmax(0, \u2713T\n\nmax(0, \u2713T\n\nz0x)\n\nwhich, for any convex regularizer \u2326(\u00b7), is also a disciplined convex-concave program (DCCP) and\ncan be ef\ufb01ciently solved.\n4 Evaluation\nIn this section, we compare the performance of preferred treatment and preferred impact classi\ufb01ers\nagainst unconstrained, treatment parity and impact parity classi\ufb01ers on a variety of synthetic and\nreal-world datasets. More speci\ufb01cally, we consider the following classi\ufb01ers, which we train to\nmaximize utility subject to the corresponding constraints:\n\u2014 Uncons: an unconstrained classi\ufb01er that resorts to group-conditional classi\ufb01ers. It violates\ntreatment parity\u2014it trains a separate classi\ufb01er per sensitive attribute value group\u2014and potentially\nviolates impact parity\u2014it may lead to different bene\ufb01ts for different groups.\n\n\u2014 Parity: a parity classi\ufb01er that does not use the sensitive attribute group information in the decision\nmaking, but only during the training phase, and is constrained to satisfy both treatment parity\u2014\nits decisions do not change based on the users\u2019 sensitive attribute value as it does not resort to\ngroup-conditional classi\ufb01ers\u2014and impact parity\u2014it ensures that the bene\ufb01ts for all groups are\nthe same. We train this classi\ufb01er using the methodology proposed by Zafar et al. 
[33].\n\n\u2014 Preferred treatment: a classi\ufb01er that resorts to group-conditional classi\ufb01ers and is constrained\nto satisfy preferred treatment\u2014each group gets the highest bene\ufb01t with its own classi\ufb01er than\nany other group\u2019s classi\ufb01er.\n\n6\n\n\f(a) Uncons\n\n(b) Parity\n\n(c) Preferred impact\n\n(d) Preferred both\n\nFigure 2: [Synthetic data] Crosses denote group-0 (points with z = 0) and circles denote group-1.\nGreen points belong to the positive class in the training data whereas red points belong to the negative\nclass. Each panel shows the accuracy of the decision making scenario along with group bene\ufb01ts (B0\nand B1) provided by each of the classi\ufb01ers involved. For group-conditional classi\ufb01ers, cyan (blue)\nline denotes the decision boundary for the classi\ufb01er of group-0 (group-1). Parity case (panel (b))\nconsists of just one classi\ufb01er for both groups in order to meet the treatment parity criterion.\n\n\u2014 Preferred impact: a classi\ufb01er that resorts to group-conditional classi\ufb01ers and is constrained to\n\nbe preferred over the Parity classi\ufb01er.\n\n\u2014 Preferred both: a classi\ufb01er that resort to group-conditional classi\ufb01ers and is constrained to satisfy\n\nboth preferred treatment and preferred impact.\n\nFor the experiments in this section, we use logistic regression classi\ufb01ers with L2-norm regularization.\nWe randomly split the corresponding dataset into 70%-30% train-test folds 5 times, and report the\naverage accuracy and group bene\ufb01ts in the test folds. Appendix B describes the details for selecting\nthe optimal L2-norm regularization parameters. Here, we compute utility (U) as the overall accuracy\nof a classi\ufb01er and group bene\ufb01ts (Bz) as the fraction of users sharing sensitive attribute z that are\nclassi\ufb01ed into the positive class. 
Moreover, the sensitive attribute is always binary, i.e., z 2 {0, 1}.\n4.1 Experiments on synthetic data\nExperimental setup. Following Zafar et al. [33], we generate a synthetic dataset in which the uncon-\nstrained classi\ufb01er (Uncons) offers different bene\ufb01ts to each sensitive attribute group. In particular, we\ngenerate 20,000 binary class labels y 2 {1, 1} uniformly at random along with their corresponding\ntwo-dimensional feature vectors sampled from the following Gaussian distributions: p(x|y = 1) =\nN ([2; 2], [5, 1; 1, 5]) and p(x|y = 1) = N ([2;2], [10, 1; 1, 3]). Then, we generate each sensi-\ntive attribute from the Bernoulli distribution p(z = 1) = p(x0|y = 1)/(p(x0|y = 1)+p(x0|y = 1)),\nwhere x0 is a rotated version of x, i.e., x0 = [cos(\u21e1/8), sin(\u21e1/8); sin(\u21e1/8), cos(\u21e1/8)]. Finally,\nwe train the \ufb01ve classi\ufb01ers described above and compute their overall (test) accuracy and (test) group\nbene\ufb01ts.\nResults. Figure 2 shows the trained classi\ufb01ers, along with their overall accuracy and group bene\ufb01ts.\nWe can make several interesting observations:\nThe Uncons classi\ufb01er leads to an accuracy of 0.87, however, the group-conditional boundaries and\nhigh disparity in treatment for the two groups (0.16 vs. 0.85) mean that it satis\ufb01es neither treatment\nparity nor impact parity. Moreover, it leads to only a small violation of preferred treatment\u2014bene\ufb01ts\nfor group-0 would increase slightly from 0.16 to 0.20 by adopting the classi\ufb01er of group-1. 
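The synthetic data generation described above can be sketched as follows. We read the (partially garbled) class-conditional means as [2; 2] and [-2; -2], and the sensitive attribute as a Bernoulli draw whose parameter depends on a pi/8-rotated copy of x, which correlates z with the class label; the function name is ours:

```python
import numpy as np

def generate_synthetic(n=20000, seed=0):
    """Synthetic data following the setup of Zafar et al. [33]: uniform
    binary labels, Gaussian features per class, and a sensitive attribute
    drawn from a Bernoulli based on a rotated copy of the features."""
    rng = np.random.default_rng(seed)
    mu1, s1 = np.array([2.0, 2.0]), np.array([[5.0, 1.0], [1.0, 5.0]])
    mu2, s2 = np.array([-2.0, -2.0]), np.array([[10.0, 1.0], [1.0, 3.0]])
    y = rng.choice([-1, 1], size=n)
    X = np.where((y == 1)[:, None],
                 rng.multivariate_normal(mu1, s1, n),
                 rng.multivariate_normal(mu2, s2, n))
    a = np.pi / 8  # rotate x before computing p(z = 1)
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    Xr = X @ R.T

    def gauss_pdf(pts, mu, S):
        d = pts - mu
        Sinv = np.linalg.inv(S)
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(S)))
        return norm * np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, Sinv, d))

    p1 = gauss_pdf(Xr, mu1, s1)
    p2 = gauss_pdf(Xr, mu2, s2)
    z = rng.binomial(1, p1 / (p1 + p2))
    return X, y, z
```

Because p(z = 1) is high exactly where the positive-class Gaussian has high density, group membership ends up correlated with the class label, which is what makes the unconstrained classifier's benefits disparate.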
However, this will not always be the case, as we will later show in the experiments on real data.
The Parity classifier satisfies both treatment and impact parity, however, it does so at a large cost in terms of accuracy, which drops from 0.87 for Uncons to 0.57 for Parity.
The Preferred treatment classifier (not shown in the figure) leads to a minor change in decision boundaries as compared to the Uncons classifier to achieve preferred treatment. Benefits for group-0 (group-1) with its own classifier are 0.20 (0.84) as compared to 0.17 (0.83) while using the classifier of group-1 (group-0). The accuracy of this classifier is 0.87.
The Preferred impact classifier, by making use of a looser notion of fairness compared to impact parity, provides higher benefits for both groups at a much smaller cost in terms of accuracy than the Parity classifier (0.76 vs. 0.57). Note that, while the Parity classifier achieved equality in benefits by misclassifying negative examples from group-0 into the positive class and misclassifying positive

7

[Figure 3 bar charts: accuracy and group benefits B0(θ0), B0(θ1), B1(θ1), B1(θ0) of the Uncons., Parity, Prf-treat., Prf-imp., and Prf-both classifiers on the ProPublica COMPAS, Adult, and NYPD SQF datasets; y-axes show Benefits and Accuracy.]

Figure 3: The figure shows the accuracy and benefits received by the two groups for various
decision\nmaking scenarios. \u2018Prf-treat.\u2019, \u2018Prf-imp.\u2019, and \u2018Prf-both\u2019 respectively correspond to the classi\ufb01ers\nsatisfying preferred treatment, preferred impact, and both preferred treatment and impact criteria.\nSensitive attribute values 0 and 1 denote blacks and whites in ProPublica COMPAS dataset and\nNYPD SQF datasets, and women and men in the Adult dataset. Bi(\u2713j) denotes the bene\ufb01ts obtained\nby group i when using the classi\ufb01er of group j. For the Parity case, we train just one classi\ufb01er for\nboth the groups, so the bene\ufb01ts do not change by adopting other group\u2019s classi\ufb01er.\n\nexamples from group-1 into the negative class, the Preferred impact classi\ufb01er only incurs the former\ntype of misclassi\ufb01cations. However, the outcomes of the Preferred impact classi\ufb01er do not satisfy the\npreferred treatment criterion: group-1 would attain higher bene\ufb01t if it used the classi\ufb01er of group-0\n(0.96 as compared to 0.86).\nFinally, the classi\ufb01er that satis\ufb01es preferred treatment and preferred impact (Preferred both) achieves\nan accuracy and bene\ufb01ts at par with the Preferred impact classi\ufb01er.\nWe present the results of applying our fairness constraints on a non linearly-separable dataset with a\nSVM classi\ufb01er with a radial basis function (RBF) kernel in Appendix C.\n4.2 Experiments on real data\nDataset description and experimental setup. We experiment with three real-world datasets: the\nCOMPAS recidivism prediction dataset compiled by ProPublica [23], the Adult income dataset from\nUCI machine learning repository [2], and the New York Police Department (NYPD) Stop-question-\nand-frisk (SQF) dataset made publicly available by NYPD [1]. 
These datasets have been used by a number of prior studies in the fairness-aware machine learning literature [14, 29, 32, 34, 33].

In the COMPAS dataset, the classification task is to predict whether a criminal defendant would recidivate within two years (negative class) or not (positive class); in the Adult dataset, the task is to predict whether a person earns more than 50K USD per year (positive class) or not; and, in the SQF dataset, the task is to predict whether a pedestrian should be stopped on the suspicion of having an illegal weapon or not (positive class). In all datasets, we assume being classified as positive to be the beneficial outcome. Additionally, we divide the subjects in each dataset into two sensitive attribute value groups: women (group-0) and men (group-1) in the Adult dataset, and blacks (group-0) and whites (group-1) in the COMPAS and SQF datasets. The supplementary material (Appendix D) contains more information on the sensitive and the non-sensitive features as well as the class distributions.³

Results. Figure 3 shows the accuracy achieved by the five classifiers described above along with the benefits they provide for the three datasets. We can draw several interesting observations:⁴

In all cases, the Uncons classifier, in addition to violating treatment parity (a separate classifier for each group) and impact parity (high disparity in group benefits), also violates the preferred treatment criterion (at least one of group-0 or group-1 would benefit more by adopting the other group's classifier).
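The preferred-treatment check used throughout these comparisons amounts to inspecting the benefit matrix Bi(θj), the fraction of group i classified as positive under group j's classifier. A minimal sketch for linear boundaries follows; the function names and the `[w, b]` parameter layout are our own, hypothetical choices.

```python
import numpy as np

def group_benefit(theta, X):
    """Fraction of the rows of X classified positive by the linear
    boundary w.x + b >= 0, with theta = [w, b] (hypothetical layout)."""
    return float(np.mean(X @ theta[:-1] + theta[-1] >= 0))

def satisfies_preferred_treatment(thetas, groups):
    """thetas[i] is group i's classifier, groups[i] its feature matrix.
    Preferred treatment holds iff no group would gain benefit by swapping
    to another group's classifier: B[i][i] >= B[i][j] for all i, j."""
    k = len(thetas)
    B = [[group_benefit(thetas[j], groups[i]) for j in range(k)]
         for i in range(k)]
    return all(B[i][i] >= B[i][j] for i in range(k) for j in range(k))
```

In the Figure 3 notation, a violation such as B1(θ1) = 0.86 < B1(θ0) = 0.96 makes this check return False.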
On the other hand, the Parity classifier satisfies both treatment parity and impact parity, but it does so at a large cost in terms of accuracy.

The Preferred treatment classifier provides a much higher accuracy than the Parity classifier (its accuracy is at par with that of the Uncons classifier) while satisfying the preferred treatment criterion. However, it does not meet the preferred impact criterion. The Preferred impact classifier meets the preferred impact criterion but does not always satisfy preferred treatment. Moreover, it also leads to a better accuracy than the Parity classifier in all cases. However, the gain in accuracy is more substantial for the SQF dataset as compared to the COMPAS and Adult datasets.

The classifier satisfying both preferred treatment and preferred impact (Preferred both) has a somewhat underwhelming performance in terms of accuracy for the Adult dataset. While the performance of this classifier is better than that of the Parity classifier in the COMPAS and NYPD SQF datasets, it is slightly worse for the Adult dataset.

In summary, the above results show that ensuring either preferred treatment or preferred impact is less costly in terms of accuracy loss than ensuring parity-based fairness; however, ensuring both preferred treatment and preferred impact can lead to a comparatively larger accuracy loss on certain datasets.
We hypothesize that this loss in accuracy may be partly due to splitting the available samples into groups during training (each group-conditional classifier uses only samples from the corresponding sensitive attribute group), hence decreasing the effectiveness of empirical risk minimization.

5 Conclusion

In this paper, we introduced two preference-based notions of fairness, preferred treatment and preferred impact, establishing a previously unexplored connection between fairness-aware machine learning and the economic and game-theoretic concepts of envy-freeness and bargaining. Then, we proposed tractable proxies to design boundary-based classifiers satisfying these fairness notions and experimented with a variety of synthetic and real-world datasets, showing that preference-based fairness often allows for greater decision accuracy than existing parity-based fairness notions.

Our work opens many promising avenues for future work. For example, our methodology is limited to convex boundary-based classifiers. A natural follow-up would be to extend our methodology to other types of classifiers, e.g., neural networks and decision trees. In this work, we defined preferred treatment and preferred impact in the context of group preferences; however, it would be worth revisiting the proposed definitions in the context of individual preferences. The fair division literature establishes a variety of fairness axioms [26], such as Pareto-optimality and scale invariance.
It would be interesting to study such axioms in the context of fairness-aware machine learning.

Finally, we note that while moving from parity to preference-based fairness offers many attractive properties, we acknowledge it may not always be the most appropriate notion; in some scenarios, parity-based fairness may very well represent the eventual goal and be more desirable [3].

Acknowledgments
AW acknowledges support by the Alan Turing Institute under EPSRC grant EP/N510129/1, and by the Leverhulme Trust via the CFI.

³ Since the SQF dataset is highly skewed in terms of class distribution (≈97% of points in the positive class), resulting in a trained classifier predicting all points as positive (yet having 97% accuracy), we subsample the dataset to have an equal class distribution. Another option would be using penalties proportional to the size of the class, but we observe that an unconstrained classifier with class penalties gives similar predictions as compared to a balanced dataset. We decided to experiment with the balanced dataset since the accuracy drops in this dataset are easier to interpret.

⁴ The unfairness in the SQF dataset is different from what one would expect [27]: an unconstrained classifier gives more benefits to blacks as compared to whites. This is due to the fact that a larger fraction of stopped whites were found to be in possession of an illegal weapon (Tables 3 and 4 in Appendix D).

References
[1] Stop, Question and Frisk Data. http://www1.nyc.gov/site/nypd/stats/reports-analysis/stopfrisk.page, 2017.
[2] Adult data. https://archive.ics.uci.edu/ml/datasets/adult, 1996.
[3] A. Altman. Discrimination. In The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, 2016. https://plato.stanford.edu/archives/win2016/entries/discrimination/.
[4] S. Barocas and A. D. Selbst. Big Data's Disparate Impact. California Law Review, 2016.
[5] M. Berliant and W. Thomson. On the Fair Division of a Heterogeneous Commodity. Journal of Mathematical Economics, 1992.
[6] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[7] T. Calders and S. Verwer. Three Naive Bayes Approaches for Discrimination-Free Classification. Data Mining and Knowledge Discovery, 2010.
[8] O. Chapelle. Training a Support Vector Machine in the Primal. Neural Computation, 2007.
[9] A. Chouldechova. Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. arXiv preprint arXiv:1610.07524, 2016.
[10] Civil Rights Act. Civil Rights Act of 1964, Title VII, Equal Employment Opportunities, 1964.
[11] S. Corbett-Davies, E. Pierson, A. Feller, S. Goel, and A. Huq. Algorithmic Decision Making and the Cost of Fairness. In KDD, 2017.
[12] C. Dwork, M. Hardt, T. Pitassi, and O. Reingold. Fairness Through Awareness. In ITCS, 2012.
[13] C. Dwork, N. Immorlica, A. T. Kalai, and M. Leiserson. Decoupled Classifiers for Fair and Efficient Machine Learning. arXiv preprint arXiv:1707.06613, 2017.
[14] M. Feldman, S. A. Friedler, J. Moeller, C. Scheidegger, and S. Venkatasubramanian. Certifying and Removing Disparate Impact. In KDD, 2015.
[15] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian. On the (Im)possibility of Fairness. arXiv preprint arXiv:1609.07236, 2016.
[16] J. E. Gentle, W. K. Härdle, and Y. Mori. Handbook of Computational Statistics: Concepts and Methods. Springer Science & Business Media, 2012.
[17] G. Goh, A. Cotter, M. Gupta, and M. Friedlander. Satisfying Real-world Goals with Dataset Constraints. In NIPS, 2016.
[18] M. Hardt, E. Price, and N. Srebro. Equality of Opportunity in Supervised Learning. In NIPS, 2016.
[19] M. Joseph, M. Kearns, J. Morgenstern, and A. Roth. Fairness in Learning: Classic and Contextual Bandits. In NIPS, 2016.
[20] F. Kamiran and T. Calders. Classification with No Discrimination by Preferential Sampling. In BENELEARN, 2010.
[21] T. Kamishima, S. Akaho, H. Asoh, and J. Sakuma. Fairness-aware Classifier with Prejudice Remover Regularizer. In PADM, 2011.
[22] J. Kleinberg, S. Mullainathan, and M. Raghavan. Inherent Trade-Offs in the Fair Determination of Risk Scores. In ITCS, 2017.
[23] J. Larson, S. Mattu, L. Kirchner, and J. Angwin. https://github.com/propublica/compas-analysis, 2016.
[24] B. T. Luong, S. Ruggieri, and F. Turini. kNN as an Implementation of Situation Testing for Discrimination Discovery and Prevention. In KDD, 2011.
[25] C. Muñoz, M. Smith, and D. Patil. Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights. Executive Office of the President. The White House, 2016.
[26] J. F. Nash Jr. The Bargaining Problem. Econometrica: Journal of the Econometric Society, 1950.
[27] NYCLU. Stop-and-Frisk Data. https://www.nyclu.org/en/stop-and-frisk-data, 2017.
[28] D. Pedreschi, S. Ruggieri, and F. Turini. Discrimination-aware Data Mining. In KDD, 2008.
[29] S. Goel, J. M. Rao, and R. Shroff. Precinct or Prejudice? Understanding Racial Disparities in New York City's Stop-and-Frisk Policy. Annals of Applied Statistics, 2015.
[30] X. Shen, S. Diamond, Y. Gu, and S. Boyd. Disciplined Convex-Concave Programming. arXiv preprint arXiv:1604.02639, 2016.
[31] H. R. Varian. Equity, Envy, and Efficiency. Journal of Economic Theory, 1974.
[32] M. B. Zafar, I. Valera, M. G. Rodriguez, and K. P. Gummadi. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment. In WWW, 2017.
[33] M. B. Zafar, I. Valera, M. G. Rodriguez, and K. P. Gummadi. Fairness Constraints: Mechanisms for Fair Classification. In AISTATS, 2017.
[34] R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. Learning Fair Representations. In ICML, 2013.