{"title": "Exploring Algorithmic Fairness in Robust Graph Covering Problems", "book": "Advances in Neural Information Processing Systems", "page_first": 15776, "page_last": 15787, "abstract": "Fueled by algorithmic advances, AI algorithms are increasingly being deployed in settings subject to unanticipated challenges with complex social effects. Motivated by real-world deployment of AI driven, social-network based suicide prevention and landslide risk management interventions, this paper focuses on a robust graph covering problem subject to group fairness constraints. We show that, in the absence of fairness constraints, state-of-the-art algorithms for the robust graph covering problem result in biased node coverage: they tend to discriminate individuals (nodes) based on membership in traditionally marginalized groups. To remediate this issue, we propose a novel formulation of the robust covering problem with fairness constraints and a tractable approximation scheme applicable to real world instances. We provide a formal analysis of the price of group fairness (PoF) for this problem, where we show that uncertainty can lead to greater PoF. We demonstrate the effectiveness of our approach on several real-world social networks. Our method yields competitive node coverage while significantly improving group fairness relative to state-of-the-art methods.", "full_text": "Exploring Algorithmic Fairness in\nRobust Graph Covering Problems\n\nAida Rahmattalabi \u21e4\nrahmatta@usc.edu\n\nPhebe Vayanos \u21e4\n\nphebe.vayanos@usc.edu\n\nAnthony Fulginiti \u2020\n\nanthony.fulginiti@du.edu\n\nEric Rice \u21e4\n\nericr@usc.edu\n\nBryan Wilder \u2021\n\nbwilder@g.harvard.edu\n\nAmulya Yadav \u00a7\namulya@psu.edu\n\nMilind Tambe \u2021\n\nmilind_tambe@harvard.edu\n\nAbstract\n\nFueled by algorithmic advances, AI algorithms are increasingly being deployed in\nsettings subject to unanticipated challenges with complex social effects. 
Motivated by real-world deployments of AI-driven, social-network-based suicide prevention and landslide risk management interventions, this paper focuses on robust graph covering problems subject to group fairness constraints. We show that, in the absence of fairness constraints, state-of-the-art algorithms for the robust graph covering problem result in biased node coverage: they tend to discriminate against individuals (nodes) based on membership in traditionally marginalized groups. To mitigate this issue, we propose a novel formulation of the robust graph covering problem with group fairness constraints and a tractable approximation scheme applicable to real-world instances. We provide a formal analysis of the price of group fairness (PoF) for this problem, where we show that uncertainty can lead to a greater PoF. We demonstrate the effectiveness of our approach on several real-world social networks. Our method yields competitive node coverage while significantly improving group fairness relative to state-of-the-art methods.

1 Introduction

Motivation. This paper considers the problem of selecting a subset of nodes (which we refer to as 'monitors') in a graph that can 'cover' their adjacent nodes. We are mainly motivated by settings where monitors are subject to failure and we seek to maximize worst-case node coverage. We refer to this problem as the robust graph covering problem. This problem finds applications in several critical real-world domains, especially in the context of optimizing social interventions on vulnerable populations. Consider, for example, the problem of designing Gatekeeper training interventions for suicide prevention, wherein a small number of individuals can be trained to identify warning signs of suicide among their peers [32]. 
A similar problem arises in the context of disaster risk management in remote communities, wherein a moderate number of individuals are recruited in advance and trained to watch out for others in case of natural hazards (e.g., in the event of a landslide [40]). Previous research has shown that social intervention programs of this sort hold great promise [32, 40]. Unfortunately,

*University of Southern California
†University of Denver
‡Harvard University
§Pennsylvania State University

33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

Network Name  Network Size  Worst-case coverage of individuals by racial group (%)
                            White  Black  Hispanic  Mixed  Other
SPY1           95           70     36     –         86     94
SPY2          117           78     –      42        76     67
SPY3          118           88     –      33        95     69
MFP1          165           96     77     69        73     28
MFP2          182           44     85     70        77     72

Table 1: Racial discrimination in node coverage resulting from applying the algorithm in [45] on real-world social networks from two homeless drop-in centers in Los Angeles, CA [4], when 1/3 of the nodes (individuals) can be selected as monitors, out of which at most 10% will fail. The numbers correspond to the worst-case percentage of covered nodes across all monitor availability scenarios.

in these real-world domains, intervention agencies often have very limited resources, e.g., a moderate number of social workers to conduct the intervention and a small amount of funding to cover the cost of training. This makes it essential to target the right set of monitors to cover a maximum number of nodes in the network. Further, in these interventions, the performance and availability of individuals (monitors) are unknown and unpredictable. 
At the same time, robustness is desired to guarantee high coverage even in worst-case settings, making the approach suitable for deployment in the open world.
Robust graph covering problems similar to the one we consider here have been studied in the literature, see e.g., [19, 45]. Yet, a major consideration distinguishes our problem from previous work: namely, the need for fairness. Indeed, when deploying interventions in the open world (especially in sensitive domains impacting life and death like the ones that motivate this work), care must be taken to ensure that algorithms do not discriminate among people with respect to protected characteristics such as race, ethnicity, disability, etc. In other words, we need to ensure that, independently of their group, individuals have a high chance of being covered, a notion we refer to as group fairness.
To motivate our approach, consider deploying in the open world a state-of-the-art algorithm for robust graph covering (which does not incorporate fairness considerations). Specifically, we apply the solutions provided by the algorithm from [45] to five real-world social networks. The results are summarized in Table 1 where, for each network, we report its size and the worst-case coverage by racial group. In all instances, there is significant disparity in coverage across racial groups. As an example, in network SPY1, 36% of Black individuals are covered in the worst case, compared to 70% (resp. 86%) of White (resp. Mixed-race) individuals. Thus, when maximizing coverage without fairness, (near-)optimal interventions end up mirroring any differences in the degree of connectedness of different groups. In particular, well-connected groups at the center of the network are more likely to be covered (protected). 
Motivated by the desire to support those who are less well off, we employ ideas from maximin fairness to improve the coverage of those groups that are least likely to be protected.
Proposed Approach and Contributions. We investigate the robust graph covering problem with fairness constraints. Formally, given a social network where each node belongs to a group, we consider the problem of selecting a subset of I nodes (monitors), when at most J of them may fail. When a node is chosen as a monitor and does not fail, all of its neighbors are said to be 'covered', and we use the term 'coverage' to refer to the total number of covered nodes. Our objective is to maximize worst-case coverage when any J nodes may fail, while ensuring fairness in coverage across groups. We adopt maximin fairness from the Rawlsian theory of justice [41] as our fairness criterion: we aim to maximize the utility of the groups that are worst off. To the best of our knowledge, ours is the first paper enforcing fairness constraints in the context of graph covering subject to node failure.
We make the following contributions: (i) We achieve maximin group fairness by incorporating constraints inside a robust optimization model, wherein we require that at least a fraction W of each group is covered in the worst case; (ii) We propose a novel two-stage robust optimization formulation of the problem for which near-optimal conservative approximations can be obtained as a moderately sized mixed-integer linear program (MILP). By leveraging the decomposable structure of the resulting MILP, we propose a Benders' decomposition algorithm augmented with symmetry breaking to solve practical problem sizes; (iii) We present the first study of the price of group fairness (PoF), i.e., the loss in coverage due to fairness constraints, in the graph covering problem subject to node failure. 
We provide upper bounds on the PoF for Stochastic Block Model networks, a widely studied model of networks with community structure; (iv) Finally, we demonstrate the effectiveness of our approach on several real-world social networks of homeless youth. Our method yields competitive node coverage while significantly improving group fairness relative to state-of-the-art methods.
Related Work. Our paper relates to three streams of literature, which we review in turn.
Algorithmic Fairness. With the increasing deployment of AI, OR, and ML algorithms for decision- and policy-making in the open world has come increased interest in algorithmic fairness. A large portion of this literature is focused on resource allocation systems, see e.g., [13, 33, 50]. Group fairness in particular has been studied in the context of resource allocation problems [22, 42, 43]. A nascent stream of work proposes to impose fairness by means of constraints in an optimization problem, an approach we also follow. This is proposed, for example, in [1], [8, 24], and [2] for machine learning, resource allocation, and matching problems, respectively. Several authors have studied the price of fairness. In [13], the authors provide bounds for maximin fair optimization problems; their approach is restricted to convex and compact utility sets. In [6], the authors study the price of fairness for indivisible goods with additive utility functions; in our graph covering problem, this property does not hold. Several authors have investigated notions of fairness under uncertainty, see e.g., [5, 28, 36, 50]. These papers all assume full distributional information about the uncertain parameters and cannot be employed in our setting, where limited data is available about node availability. Motivated by data scarcity, we take a robust optimization approach to model uncertainty, which does not require distributional information. 
This problem is highly intractable due to the combinatorial nature of both the decision and uncertainty spaces. When fair solutions are hard to compute, "approximately fair" solutions have been considered [33]. In our work, we adopt an approximation scheme; as such, our approach falls under the "approximately fair" category. Recently, several authors have emphasized the importance of fairness when conducting interventions in socially sensitive settings, see e.g., [3, 34, 44]. Our work most closely relates to [44], wherein the authors propose an algorithmic framework for fair influence maximization. We note that, in their work, nodes are not subject to failure, and therefore their approach does not apply in our context.
Submodular Optimization. One can view the group-fair maximum coverage problem as a multi-objective optimization problem, with the coverage of each community being a separate objective. In the deterministic case, this problem reduces to the multi-objective submodular optimization problem [21], as coverage has the submodularity (diminishing returns) property. In addition, moderately sized problems of this kind can be solved optimally using integer programming technology. However, when considering uncertainty in node performance/availability, the objective function loses the submodularity property, while exact techniques fail to scale to even moderate problem sizes. Thus, existing (exact or approximate) approaches do not apply. Our work more closely relates to the robust submodular optimization literature. In [19, 37], the authors study the problem of choosing a set of up to $I$ items, out of which $J$ fail (which encompasses as a special case the robust graph covering problem without fairness constraints). They propose greedy algorithms with a constant (0.387) approximation factor, valid for $J = o(\sqrt{I})$ and $J = o(I)$, respectively. 
Finally, in [45], the authors propose another greedy algorithm with a general bound based on the curvature of the submodular function. These heuristics, although computationally efficient, are coverage-centered and do not take fairness into consideration. Thus, they may lead to discriminatory outcomes, see Table 1.
Robust Optimization. Our solution approach closely relates to the robust optimization paradigm, which is a computationally attractive framework for obtaining equivalent or conservative approximations based on duality theory, see e.g., [7, 10, 49]. Indeed, we show that the robust graph covering problem can be written as a two-stage robust problem with binary second-stage decisions, which is highly intractable in general [14]. One stream of work proposes to restrict the functional form of the recourse decisions to functions of benign complexity [12, 15]. Other works rely on partitioning the uncertainty set into finite sets and applying constant decision rules on each partition [15, 17, 31, 38, 47]. The last stream of work investigates the so-called K-adaptability counterpart [11, 20, 31, 39, 46], in which K candidate policies are chosen in the first stage and the best of these policies is selected after the uncertain parameters are revealed. Our paper most closely relates to [31, 39]. In [31], the authors show that for bounded polyhedral uncertainty sets, linear two-stage robust optimization problems can be approximately reformulated as MILPs. Paper [39] extends this result to a special case of discrete uncertainty sets. We prove that we can leverage this approximation to reformulate the robust graph covering problem with fairness constraints exactly for a much larger class of discrete uncertainty sets.

2 Fair and Robust Graph Covering Problem

We model a social network as a directed graph $G = (\mathcal N, \mathcal E)$, in which $\mathcal N := \{1, \dots, N\}$ is the set of all nodes (individuals) and $\mathcal E$ is the set of all edges (social ties). 
A directed edge from $\nu$ to $n$ exists, i.e., $(\nu, n) \in \mathcal E$, if node $n$ can be covered by $\nu$. We use $\delta(n) := \{\nu \in \mathcal N : (\nu, n) \in \mathcal E\}$ to denote the set of neighbors (friends) of $n$ in $G$, i.e., the set of nodes that can cover node $n$. Each node $n \in \mathcal N$ is characterized by a set of attributes (protected characteristics) such as age, race, gender, etc., for which fair treatment is important. Based on these node characteristics, we partition $\mathcal N$ into $C$ disjoint groups $\mathcal N_c$, $c \in \mathcal C := \{1, \dots, C\}$, such that $\cup_{c \in \mathcal C} \mathcal N_c = \mathcal N$.
We consider the problem of selecting a set of $I$ nodes from $\mathcal N$ to act as 'peer-monitors' for their neighbors, given that the availability of each node is unknown a priori and at most $J$ nodes may fail (be unavailable). We encode the choice of monitors using a binary vector $x$ of dimension $N$ whose $n$th element is one iff the $n$th node is chosen. We require $x \in \mathcal X := \{x \in \{0,1\}^N : e^\top x \le I\}$, where $e$ is a vector of all ones of appropriate dimension. Accordingly, we encode the (uncertain) node availability using a binary vector $\xi$ of dimension $N$ whose $n$th element equals one iff node $n$ does not fail (is available). Given that data available to inform the distribution of $\xi$ is typically scarce, we avoid making distributional assumptions on $\xi$. Instead, we view uncertainty as deterministic and set-based, in the spirit of robust optimization [7]. Thus, we assume that $\xi$ can take on any value from the set $\Xi$, which is often referred to as the uncertainty set in robust optimization. The set $\Xi$ may, for example, conveniently capture failure rate information. Thus, we require $\xi \in \Xi := \{\xi \in \{0,1\}^N : e^\top(e - \xi) \le J\}$. A node $n$ is counted as 'covered' if at least one of its neighbors is a monitor and does not fail (is available). 
We let $y_n(x, \xi)$ indicate whether $n$ is covered for the monitor choice $x$ and node availability $\xi$:
$$y_n(x, \xi) := \mathbb I\Big(\textstyle\sum_{\nu \in \delta(n)} \xi_\nu x_\nu \ge 1\Big).$$
The coverage is then expressible as $F_G(x, \xi) := e^\top y(x, \xi)$. The robust covering problem, which aims to maximize the worst-case (minimum) coverage under node failures, can be written as
$$\max_{x \in \mathcal X}\ \min_{\xi \in \Xi}\ F_G(x, \xi). \tag{RC}$$
Problem (RC) ignores fairness and may result in discriminatory coverage with respect to (protected) node attributes, see Table 1. We thus propose to augment the robust covering problem with fairness constraints. Specifically, we propose to achieve max-min fairness by imposing fairness constraints on each group's coverage: we require that at least a fraction $W$ of nodes from each group be covered. In [44], the authors show that, by conducting a binary search for the largest $W$ for which the fairness constraints are satisfied for all groups, the max-min fairness optimization problem is equivalent to the one with fairness constraints. Thus, we write the robust covering problem with fairness constraints as
$$\max_{x \in \mathcal X}\ \min_{\xi \in \Xi}\ \Big\{\textstyle\sum_{c \in \mathcal C} F_{G,c}(x, \xi)\ :\ F_{G,c}(x, \xi) \ge W |\mathcal N_c|\ \ \forall c \in \mathcal C,\ \forall \xi \in \Xi\Big\}, \tag{RCfair}$$
where $F_{G,c}(x, \xi) := \sum_{n \in \mathcal N_c} y_n(x, \xi)$ is the coverage of group $c \in \mathcal C$. Note that if $|\mathcal C| = 1$, Problem (RCfair) reduces to Problem (RC), and if $\Xi = \{e\}$, Problem (RCfair) reduces to the deterministic covering problem with fairness constraints. We emphasize that our approach can handle fairness with respect to more than one protected attribute by either: (a) partitioning the network based on joint values of the protected attributes and imposing a max-min fairness constraint for each group; or (b) imposing max-min fairness constraints for each protected attribute separately. 
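To make the definitions concrete, on small instances Problem (RC) and the group coverages $F_{G,c}$ can be evaluated by brute force over monitor sets and failure scenarios. The following sketch is illustrative only: the toy graph, group labels, and budgets are our own assumptions, not data from the paper.

```python
from itertools import combinations

# Toy instance (illustrative only, not from the paper).
# edges[v] = set of nodes that v, if chosen as a monitor, can cover.
edges = {0: {1, 2}, 1: {0, 3}, 2: {3, 4}, 3: {4, 5}, 4: {5}, 5: set()}
nodes = list(edges)
group = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}  # protected groups

def covered(x, fail):
    """Set of covered nodes: y_n = 1 iff some chosen, non-failed monitor covers n."""
    return {n for v in x if v not in fail for n in edges[v]}

def worst_case_total(x, J):
    """min over xi in Xi of F_G(x, xi); failing exactly min(J, |x|) chosen
    monitors is worst-case, since coverage is monotone in availability."""
    return min(len(covered(x, fail))
               for fail in combinations(sorted(x), min(J, len(x))))

def worst_case_group(x, J, c):
    """min over xi in Xi of F_{G,c}(x, xi) for group c."""
    members = {n for n in nodes if group[n] == c}
    return min(len(covered(x, fail) & members)
               for fail in combinations(sorted(x), min(J, len(x))))

I, J = 2, 1
# Problem (RC): brute-force the monitor set maximizing worst-case total coverage.
rc_x = max((set(x) for x in combinations(nodes, I)),
           key=lambda s: worst_case_total(s, J))
print(rc_x, worst_case_total(rc_x, J))   # a coverage-optimal choice
print(worst_case_group(rc_x, J, "b"))    # its worst-case coverage of group "b"
```

On this toy instance the coverage-optimal choice leaves group "b" with zero worst-case coverage, which is exactly the kind of disparity that the constraint $F_{G,c}(x, \xi) \ge W |\mathcal N_c|$ in (RCfair) rules out for any $W > 0$.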
Problem (RCfair) is computationally hard due to the combinatorial nature of both the uncertainty and decision spaces. Lemma 1 characterizes its complexity. Proofs of all results are in the supplementary document.
Lemma 1. Problem (RCfair) is NP-hard.

3 Price of Group Fairness

In Section 2, we proposed a novel formulation of the robust covering problem incorporating fairness constraints, Problem (RCfair). Unfortunately, adding fairness constraints to Problem (RC) comes at a price to overall worst-case coverage. In this section, we study this price of group fairness.

Definition 1. Given a graph $G$, the Price of Group Fairness $\mathrm{PoF}(G, I, J)$ is the ratio of the coverage loss due to fairness constraints to the maximum coverage in the absence of fairness constraints, i.e.,
$$\mathrm{PoF}(G, I, J) := 1 - \frac{\mathrm{OPT}_{\mathrm{fair}}(G, I, J)}{\mathrm{OPT}(G, I, J)}, \tag{1}$$
where $\mathrm{OPT}_{\mathrm{fair}}(G, I, J)$ and $\mathrm{OPT}(G, I, J)$ denote the optimal objective values of Problems (RCfair) and (RC), respectively, when $I$ monitors can be chosen and at most $J$ of them may fail.
In this work, we are motivated by applications related to social networks, where it has been observed that people with similar (protected) characteristics tend to interact more frequently with one another, forming friendship groups (communities). This phenomenon, known as homophily [35], has been observed for characteristics such as race, gender, education, etc. [23]. This motivates us to study the PoF in Stochastic Block Model (SBM) networks [27], a widely accepted model for networks with community structure. In SBM networks, nodes are partitioned into $C$ disjoint communities $\mathcal N_c$, $c \in \mathcal C$. Within each community $c$, an edge between two nodes is present independently with probability $p^{\mathrm{in}}_c$. Between a pair of communities $c$ and $c' \in \mathcal C$, edges exist independently with probability $p^{\mathrm{out}}_{cc'}$, and we typically have $p^{\mathrm{in}}_c > p^{\mathrm{out}}_{cc'}$ to capture homophily. Thus, SBM networks are very adequate models for our purpose. We assume w.l.o.g. 
that the communities are labeled such that $|\mathcal N_1| \le \dots \le |\mathcal N_C|$.
Deterministic Case. We first study the PoF in the deterministic case, for which $J = 0$. Lemma 2 shows that there are worst-case networks for which the PoF can be arbitrarily bad.
Lemma 2. Given $\epsilon > 0$, there exists a budget $I$ and a network $G$ with $N \ge \frac{4}{\epsilon} + 3$ nodes such that $\mathrm{PoF}(G, I, 0) \ge 1 - \epsilon$.
Fortunately, as we will see, this pessimistic result is not representative of the networks that are seen in practice. We thus investigate the loss in expected coverage due to fairness constraints, given by
$$\mathrm{PoF}(I, J) := 1 - \frac{\mathbb E_{G \sim \mathrm{SBM}}[\mathrm{OPT}_{\mathrm{fair}}(G, I, J)]}{\mathbb E_{G \sim \mathrm{SBM}}[\mathrm{OPT}(G, I, J)]}. \tag{2}$$
We emphasize that we investigate the loss in the expected coverage rather than the expected PoF for analytical tractability reasons. We make the following assumptions about the SBM network.
Assumption 1. For all communities $c \in \mathcal C$, the probability of an edge between two individuals in the community is inversely proportional to the size of the community, i.e., $p^{\mathrm{in}}_c = \Theta(|\mathcal N_c|^{-1})$.
Assumption 2. For any two communities $c, c' \in \mathcal C$, the probability of an edge between two nodes $n \in \mathcal N_c$ and $\nu \in \mathcal N_{c'}$ is $p^{\mathrm{out}}_{cc'} = O\big((|\mathcal N_c| \log^2 |\mathcal N_c|)^{-1}\big)$.
Assumption 1 is based on the observation that social networks are usually sparse. This means that most individuals do not form too many links, even if the size of the network is very large. Sparsity is characterized in the literature by the number of edges being proportional to the number of nodes, which is a direct result of Assumption 1. Assumption 2 is necessary for a meaningful community structure in the network. We now present upper bounds on the PoF in SBM networks.
Proposition 1. Consider an SBM network model with parameters $p^{\mathrm{in}}_c$ and $p^{\mathrm{out}}_{cc'}$, $c, c' \in \mathcal C$, satisfying Assumptions 1 and 2. 
If $I = O(\log N)$, then
$$\mathrm{PoF}(I, 0) = 1 - \frac{\sum_{c \in \mathcal C} |\mathcal N_c|}{\sum_{c \in \mathcal C} |\mathcal N_c|\, d(C)/d(c)} - o(1), \quad \text{where } d(c) := \log |\mathcal N_c|\,(\log\log |\mathcal N_c|)^{-1}.$$
Proof Sketch. First, we show that under Assumption 1, the coverage within each community is the sum of the degrees of the monitoring nodes. Then, using the assumption on $I$ in the premise of the proposition (which can be interpreted as a "small budget assumption"), we evaluate the maximum coverage within each community. Next, we show that between-community coverage is negligible compared to within-community coverage. Thus, we determine the distribution of the monitors in the presence and absence of fairness constraints. The PoF is computed based on these two quantities. □
Uncertain Case. Here, imposing fairness is more challenging as we do not know a priori which nodes may fail. Thus, we must ensure that fairness constraints are satisfied under all failure scenarios.
Proposition 2. Consider an SBM network model with parameters $p^{\mathrm{in}}_c$ and $p^{\mathrm{out}}_{cc'}$, $c, c' \in \mathcal C$, satisfying Assumptions 1 and 2. If $I = O(\log N)$, then
$$\mathrm{PoF}(I, J) = 1 - \frac{\eta \sum_{c \in \mathcal C} |\mathcal N_c|}{(I - J)\, d(C)} - \frac{J \sum_{c \in \mathcal C \setminus \{C\}} d(c)}{(I - J)\, d(C)} - o(1),$$
where $d(c)$ is as in Proposition 1 and $\eta := (I - CJ)\big(\sum_{c \in \mathcal C} |\mathcal N_c|/d(c)\big)^{-1}$.
Proof Sketch. The steps of the proof are similar to those in the proof of Proposition 1, with the difference that, under uncertainty, monitors should be distributed such that the fairness constraints are satisfied even after $J$ nodes fail. Thus, we quantify the minimum number of monitors that should be allocated to each community. We then determine the worst-case coverage both in the presence and absence of fairness constraints. The PoF is computed based on these two quantities.
□

Propositions 1 and 2 show how the PoF changes with the relative sizes of the communities for the deterministic and uncertain cases, respectively. Our analysis shows that without fairness, one should place all the monitors in the biggest community. Under a fair allocation, however, monitors are more evenly distributed (although larger communities still receive a bigger share). Figure 1 illustrates the PoF results in the case of two communities for different failure rates $\gamma$ ($J = \gamma I$), ignoring the $o(\cdot)$ order terms. We keep the size of the first (smaller) community fixed and vary the size of the larger community. In both cases, if $|\mathcal N_1| = |\mathcal N_2|$, the PoF is zero since a uniform distribution of monitors is optimal. As $|\mathcal N_2|$ increases, the PoF increases in both cases. Further increases in $|\mathcal N_2|$ result in a decrease in the PoF for the deterministic case: under a fair allocation, the bigger community receives a higher share of monitors, which is aligned with the total coverage objective. Under uncertainty, however, the PoF is non-decreasing: to guarantee fairness, additional monitors must be allocated to the smaller groups. This also explains why the PoF increases with $\gamma$.

Figure 1: PoF in the uncertain (top) and deterministic (bottom) settings for SBM networks consisting of two communities ($\mathcal C = \{1, 2\}$), where the size of the first community is fixed at $|\mathcal N_1| = 20$ and the size of the other community is increased from $|\mathcal N_2| = 20$ to $10{,}000$. In the uncertain setting, $\gamma$ denotes the fraction of nodes that fail.

4 Solution Approach

Given the intractability of Problem (RCfair), see Lemma 1, we adopt a conservative approximation approach. 
To this end, we proceed in three steps. First, we note that a difficulty of Problem (RCfair) is the discontinuity of its objective function. Thus, we show that (RCfair) can be formulated equivalently as a two-stage robust optimization problem by introducing a fictitious counting phase after $\xi$ is revealed. Second, we propose to approximate the decision made in this counting phase (which decides, for each node, whether or not it is covered). Finally, we demonstrate that the resulting approximate problem can be formulated equivalently as a moderately sized MILP, wherein the trade-off between suboptimality and tractability can be controlled by a single design parameter.
Equivalent Reformulation. For any given choice of $x \in \mathcal X$ and $\xi \in \Xi$, the objective $F_G(x, \xi)$ can be explicitly expressed as the optimal objective value of a covering problem. As a result, we can express (RCfair) equivalently as the two-stage linear robust problem
$$\max_{x \in \mathcal X}\ \min_{\xi \in \Xi}\ \max_{y \in \mathcal Y}\ \Big\{\textstyle\sum_{n \in \mathcal N} y_n\ :\ y_n \le \textstyle\sum_{\nu \in \delta(n)} \xi_\nu x_\nu,\ \forall n \in \mathcal N\Big\}, \tag{3}$$
see Proposition 3 below. The second-stage binary decision variables $y \in \mathcal Y := \{y \in \{0,1\}^N : \sum_{n \in \mathcal N_c} y_n \ge W |\mathcal N_c|,\ \forall c \in \mathcal C\}$ admit a very natural interpretation: at an optimal solution, $y_n = 1$ if and only if node $n$ is covered. Henceforth, we refer to $y$ as a covering scheme.

Definition 2 (Upward Closed Set). A set $\mathcal X$, given as a subset of the partially ordered set $[0,1]^N$ equipped with the element-wise inequality, is said to be upward closed if for all $x \in \mathcal X$ and $\bar x \in [0,1]^N$ such that $\bar x \ge x$, it holds that $\bar x \in \mathcal X$.
Intuitively, sets involving lower-bound constraints on (sums of) the parameters satisfy this definition, for example, sets that require a minimum fraction of nodes to be available. We can also consider group-based availability and require a minimum fraction of nodes to be available in every group.
Assumption 3. 
We assume that the set $\Xi$ is defined through $\Xi := \{0,1\}^N \cap \mathcal T$ for some upward closed set $\mathcal T$ given by $\mathcal T := \{\xi \in \mathbb R^N : A\xi \ge b\}$, with $A \in \mathbb R^{R \times N}$ and $b \in \mathbb R^R$.
Proposition 3. Problems (RCfair) and (3) are equivalent.
K-adaptability Counterpart. Problem (3) has the advantage of being linear. Yet, its max-min-max structure precludes us from solving it directly. We investigate a conservative approximation to Problem (3), referred to as the K-adaptability counterpart, wherein $K$ candidate covering schemes are chosen in the first stage and the best (feasible and most accurate) of those candidates is selected after $\xi$ is revealed. Formally, the K-adaptability counterpart of Problem (3) is
$$\max_{x \in \mathcal X,\ y^k \in \mathcal Y,\ k \in \mathcal K}\ \min_{\xi \in \Xi}\ \max_{k \in \mathcal K}\ \Big\{\textstyle\sum_{n \in \mathcal N} y^k_n\ :\ y^k_n \le \textstyle\sum_{\nu \in \delta(n)} \xi_\nu x_\nu\ \ \forall n \in \mathcal N\Big\}, \tag{4}$$
where $y^k$ denotes the $k$th candidate covering scheme, $k \in \mathcal K$. We emphasize that the covering schemes are not inputs but rather decision variables of the K-adaptability problem; only the value $K$ is an input. The optimization problem will identify the best $K$ covering schemes that satisfy all the constraints, including the fairness constraints. The trade-off between optimality and computational complexity of Problem (4) can conveniently be tuned using the single parameter $K$.
Reformulation as an MILP. We derive an exact reformulation of the K-adaptability counterpart (4) of the robust covering problem as a moderately sized MILP. Our method extends the results from [39] to significantly more general uncertainty sets that are useful in practice, and to problems involving constraints on the set of covered nodes. Henceforth, we let $\mathcal L := \{0, \dots, N\}^K$, and we define $\mathcal L_+ := \{\ell \in \mathcal L : \ell > 0\}$ and $\mathcal L_0 := \{\ell \in \mathcal L : \ell \not> 0\}$. We present a variant of the generic K-adaptability Problem (4), where the uncertainty set $\Xi$ is parameterized by vectors $\ell \in \mathcal L$. Each $\ell$ 
Each `\nis a K-dimensional vector, whose kth component encodes if the kth covering scheme satis\ufb01es the\nconstraints of the second stage maximization problem. In this case, `k = 0. Else, if the kth covering\nscheme is infeasible, `k is equal to the index of a constraint that is violated.\nTheorem 1. Under Assumption 3, Problem (4) is equivalent to the mixed-integer bilinear program\n\n,\n\n(4)\n\nmax \u2327\ns.t.\n\n+, \u232b(`) 2 RK\n\n+ , (`) 2 K(`)\n\nyk\nnk\n\n\u2327 2 R, x 2X , yk 2Y 8 k 2K\n\u2713(`), k(`) 2 RN\n+ , \u21b5(`) 2 RR\n\u2327 \uf8ff e>\u2713(`) + \u21b5(`)>b  Xk2K:\n`k6=0yk\n`k=0 Xn2N\n. . . + Xk2K:\n\u2713n(`) \uf8ff A>\u21b5(`) + Xk2K:\n`k6=0 X\u232b2(`k)\n\u2713(`) 2 RN\n+ , \u21b5(`) 2 RR\n+, \u232b(`) 2 RK\n1 \uf8ff e>\u2713(`) + \u21b5(`)>b  Xk2K:\n`k6=0yk\n`k6=0 X\u232b2(`k)\n\u2713n(`) \uf8ff A>\u21b5(`) + Xk2K:\n\n+\n\n`k  1 \u232bk(`) + . . .\nk(`)Xn2N\nn(`) +Xk2K\nx\u232b\u232bk(`)  Xk2K:\n`k=0 X\u232b2(n)\n`k  1 \u232bk(`)\n\nx\u232b\u232bk(`) 8n 2N\n\n9>>>>>>>=>>>>>>>;\n\n8` 2L +,\n\nyk\nn\n\nx\u232bk\n\nn(`) 8n 2N\n\n8` 2L 0\n\n9>>>>>>>>>>>>>=>>>>>>>>>>>>>;\n\n(5)\n\n7\n\n\fwhich can be reformulated equivalently as an MILP using standard \u201cBig-M\u201d techniques since all\nbilinear terms are products continuous and binary variables. The size of this MILP scales with\n|L| = (N + 1)K; it is polynomial in all problem inputs for any \ufb01xed K.\nProof Sketch. The reformulation relies on three key steps: First, we partition the uncertainty set by\nusing the parameter `. Next, we show that by relaxing the integrality constraint on the uncertain\nparameters \u21e0, the problem remains unchanged. This is the key result that enables us to provide an\nequivalent formulation for Problem (4). Finally, we employ linear programming duality theory, to\nreformulate the robust optimization formulation over each subset. 
As a result, the formulation has two sets of decision variables: (a) the decision variables of the original problem; and (b) dual variables, parameterized by ℓ, which emerge from the dualization. ∎

Benders Decomposition. In Problem (5), once the binary variables x and {y^k}_{k∈K} are fixed, the problem decomposes across ℓ, i.e., all remaining variables are real-valued and can be found by solving a linear program for each ℓ. Benders decomposition is an exact solution technique that leverages such decomposable structure for a more efficient solution [9, 16]. Each iteration of the algorithm starts with the solution of a relaxed master problem, which is fed into the subproblems to identify violated constraints to add to the master problem. The process repeats until no more violated constraints can be identified. The formulations of the master and subproblems are provided in Section E.
Symmetry Breaking Constraints. Problem (5) presents a large amount of symmetry. Indeed, given K candidate covering schemes y^1, . . . , y^K, their indices can be permuted to yield another, distinct, feasible solution with identical cost. This symmetry results in a significant slowdown of the branch-and-bound procedure [18]. Thus, we introduce symmetry breaking constraints in formulation (5) that stipulate that the candidate covering schemes be lexicographically decreasing. We refer to [46] for details.

5 Computational Study on Social Networks of Homeless Youth

We evaluate our approach on the five social networks from Table 1. Details on the data are provided in Section A. We investigate the robust graph covering problem with maximin racial fairness constraints. All experiments were run on a Linux machine with 16GB RAM using Gurobi v6.5.0.
First, we compare the performance of our approach against the greedy algorithm of [45] and the degree centrality heuristic (DC). The results are summarized in Figure 2 (left).
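The Benders loop described above reduces to a generic delayed-constraint-generation skeleton. The sketch below is ours, not the paper's implementation: `master_solve` and `separation` are hypothetical stand-ins for the relaxed master problem and the LP subproblems of Section E.

```python
def benders(master_solve, separation, max_iters=1000):
    """Generic Benders-style loop: solve the relaxed master over the cuts
    found so far, ask the subproblems for violated constraints, and stop
    once no violated constraint remains."""
    cuts = []
    for _ in range(max_iters):
        solution = master_solve(cuts)
        new_cuts = separation(solution)
        if not new_cuts:      # no violation: solution solves the full problem
            return solution
        cuts.extend(new_cuts)
    raise RuntimeError("iteration limit reached before convergence")

# Toy illustration: maximize x <= 10 subject to a "hidden" constraint
# x <= 6 that is only revealed through separation.
best = benders(lambda cuts: min([10] + cuts),
               lambda x: [6] if x > 6 else [])
print(best)  # 6
```

The design mirrors the text: fixing the binaries makes each subproblem an independent LP, so separation can scan the ℓ-indexed subproblems one at a time and return only the cuts that are actually violated.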
From the figure, we observe that an increase in K results in an increase in performance along both axes, with a significant jump from K = 1 to K = 2, 3 (recall that K controls the complexity/optimality trade-off of our approximation). We note that the gains start diminishing from K = 2 to K = 3; thus, we only run up to K = 3. In addition, the computational complexity of the problem increases exponentially with K, preventing us from increasing K beyond 3 for the considered instances. As demonstrated by our results, K = 3 was sufficient to considerably improve the fairness of the covering at a moderate price in efficiency. Compared to the baselines, with K = 3, we significantly improve the coverage of the worse-off group over greedy (resp. DC) by 11% (resp. 23%) on average across the five instances.
Second, we investigate the effect of uncertainty on the coverage of the worse-off group and on the PoF, for both the deterministic (J = 0) and uncertain (J > 0) cases, as the number of monitors I is varied in the set {N/3, N/5, N/7}. These settings are motivated by numbers seen in practice (typically, the number of people that can be invited is 15-20% of the network size). Our results are summarized in Table 2. From the table, we see, for example, that for I = N/3 and J = 0 our approach improves the coverage of the worse-off group by 11-20%, and for J > 0 the improvement in the worst-case coverage of the worse-off group is 7-16%. On the other hand, the PoF is very small: 0.3% on average for the deterministic case and at most 6.4% for the uncertain case. These results are consistent across the range of parameters studied. We note that the PoF numbers also match our analytical results on the PoF, in that uncertainty generally induces a higher PoF.
Third, we perform a head-to-head comparison of our approach for K = 3 with the results in Table 1. Our findings are summarized in Table 5 in Section A.
Figure 2: Left figure: Solution quality (overall worst-case coverage versus worst-case coverage of the group that is worse-off) for each approach (DC, Greedy, and K-adaptability for K = 1, 2, 3); the points represent the results of each approach applied to each of the five real-world social networks from Table 1; each shaded area corresponds to the convex hull of the results associated with each approach; approaches that are more fair (resp. efficient) are situated in the right- (resp. top-)most part of the graph. Right figure: Average of the ratio of the objective value of the master problem to the network size (across the five instances) as a function of solver time, for the Benders decomposition approach (dotted line) and the Benders decomposition approach augmented with symmetry breaking constraints (solid line). For both sets of experiments, the setting was I = N/3 and J = 3.

                    Improvement in Min. Percentage Covered (%)        PoF (%)
                             Uncertainty Level J                Uncertainty Level J
Name    Size N      0     1     2     3     4     5      0     1     2     3     4     5
SPY1      95       15    16    14    10    10     9     1.4   1.0   2.1   1.3   3.3   4.2
SPY2     117       20    14     9    10     8    10     0.0   1.2   3.7   3.3   3.6   3.7
SPY3     118       20    16    16    15    11    10     0.0   3.4   4.8   6.4   3.2   4.0
MFP1     165       17    15     7    11    14     9     0.0   3.1   5.4   2.4   6.3   4.4
MFP2     182       11    12    10     9    12    12     0.0   1.0   1.0   2.2   2.4   3.6
Avg. (I = N/3)   16.6  14.6  11.2  11.0  11.0  10.0     0.3   1.9   3.4   3.1   3.8   4.0
Avg. (I = N/5)   15.0  13.8  14.0  10.0   9.0   6.7     0.6   2.1   3.2   3.2   3.9   3.8
Avg. (I = N/7)   12.2  11.4  11.2  11.4   8.2   6.4     0.1   3.5   3.5   3.2   2.5   4.0

Table 2: Improvement on the worst-case coverage of the worse-off group and associated PoF for each of the five real-world social networks from Table 1. The first five rows correspond to the setting I = N/3. In the interest of space, we only show averages for the settings I = N/5 and I = N/7. In the deterministic case (J = 0), the PoF is measured relative to the coverage of the true optimal solution (obtained by solving the integer programming formulation of the graph covering problem). In the uncertain case (J > 0), the PoF is measured relative to the coverage of the greedy heuristic of [45].

As an illustration, in SPY3, the worst-case coverage by racial group under our approach is: White 90%, Hispanic 44%, Mixed 85%, and Other 87%. These numbers suggest that the coverage of Hispanics (the worse-off group) has increased from 33% to 44%, a significant improvement in fairness. To quantify the overall loss due to fairness, we also compute PoF values. The maximum PoF across all instances was at most 4.2%; see Table 5.
Finally, we investigate the benefits of augmenting our formulation with symmetry breaking constraints. Thus, we solve all five instances of our problem with the Benders decomposition approach with and without symmetry breaking constraints. The results are summarized in Figure 2 (right). Across our experiments, we set a time limit of 2 hours, since little improvement was seen beyond that. In all cases, and in particular for K = 2 and 3, symmetry breaking results in significant speed-ups.
For K = 3 (and contrary to Benders decomposition augmented with symmetry breaking), Benders decomposition alone fails to solve the master problem to optimality within the time limit. We remark that employing K-adaptability is necessary: indeed, Problem (RCfair) would not fit in memory. Similarly, using Benders decomposition is needed: even for moderate values of K (2 to 3), the K-adaptability MILP (5) could not be loaded in memory.
Conclusion. We believe that the robust graph covering problem with fairness constraints is worthwhile to investigate. It poses numerous challenges and holds great promise in terms of the realm of possible real-world applications with important potential societal benefits, e.g., to prevent suicidal ideation and death and to protect individuals during disasters such as landslides.

Acknowledgements

We are grateful to three anonymous referees whose comments helped substantially improve the quality of this paper. This work was supported by the Smart & Connected Communities program of the National Science Foundation under NSF award No. 1831770 and by the US Army Research Office under grant number W911NF1710445.

References

[1] Sina Aghaei, Mohammad Javad Azizi, and Phebe Vayanos. Learning optimal and fair decision trees for non-discriminative decision-making. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 2019.

[2] Faez Ahmed, John P. Dickerson, and Mark Fuge. Diverse weighted bipartite b-matching. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 35–41. AAAI Press, 2017.

[3] Mohammad-Javad Azizi, Phebe Vayanos, Bryan Wilder, Eric Rice, and Milind Tambe. Designing fair, efficient, and interpretable policies for prioritizing homeless youth for housing resources.
In International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research, pages 35–51. Springer, 2018.

[4] Anamika Barman-Adhikari, Stephanie Begun, Eric Rice, Amanda Yoshioka-Maxwell, and Andrea Perez-Portillo. Sociometric network structure and its association with methamphetamine use norms among homeless youth. Social Science Research, 58:292–308, 2016.

[5] Mohammad-Hossein Bateni, Yiwei Chen, Dragos F. Ciocan, and Vahab Mirrokni. Fair resource allocation in a volatile marketplace. In Proceedings of the 2016 ACM Conference on Economics and Computation, pages 819–819. ACM, 2016.

[6] Xiaohui Bei, Xinhang Lu, Pasin Manurangsi, and Warut Suksompong. The price of fairness for indivisible goods. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 81–87. International Joint Conferences on Artificial Intelligence Organization, July 2019.

[7] Aharon Ben-Tal, Laurent El Ghaoui, and Arkadi Nemirovski. Robust Optimization, volume 28. Princeton University Press, 2009.

[8] Nawal Benabbou, Mithun Chakraborty, Xuan-Vinh Ho, Jakub Sliwinski, and Yair Zick. Diversity constraints in public housing allocation. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pages 973–981. International Foundation for Autonomous Agents and Multiagent Systems, 2018.

[9] Jacques F. Benders. Partitioning procedures for solving mixed-variables programming problems. Computational Management Science, 2(1):3–19, 2005.

[10] Dimitris Bertsimas, David B. Brown, and Constantine Caramanis. Theory and applications of robust optimization. SIAM Review, 53(3):464–501, 2011.

[11] Dimitris Bertsimas and Constantine Caramanis. Finite adaptability in multistage linear optimization.
IEEE Transactions on Automatic Control, 55(12):2751–2766, 2010.

[12] Dimitris Bertsimas and Iain Dunning. Multistage robust mixed-integer optimization with adaptive partitions. Operations Research, 64(4):980–998, 2016.

[13] Dimitris Bertsimas, Vivek F. Farias, and Nikolaos Trichakis. The price of fairness. Operations Research, 59(1):17–31, 2011.

[14] Dimitris Bertsimas and Angelos Georghiou. Design of near optimal decision rules in multistage adaptive mixed-integer optimization. Operations Research, 63(3):610–627, 2015.

[15] Dimitris Bertsimas and Angelos Georghiou. Binary decision rules for multistage adaptive mixed-integer optimization. Mathematical Programming, 167(2):395–433, 2018.

[16] Dimitris Bertsimas and John Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, 1997.

[17] Dimitris Bertsimas and Phebe Vayanos. Data-driven learning in dynamic pricing using adaptive optimization. Optimization Online, 2017.

[18] Dimitris Bertsimas and Robert Weismantel. Optimization over Integers, volume 13. Dynamic Ideas, 2005.

[19] Ilija Bogunovic, Slobodan Mitrović, Jonathan Scarlett, and Volkan Cevher. Robust submodular maximization: A non-uniform partitioning approach. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 508–516. JMLR.org, 2017.

[20] André Chassein, Marc Goerigk, Jannis Kurtz, and Michael Poss. Faster algorithms for min-max-min robustness for combinatorial problems with budgeted uncertainty. European Journal of Operational Research, 2019.

[21] Chandra Chekuri, Jan Vondrák, and Rico Zenklusen. Dependent randomized rounding via exchange properties of combinatorial structures. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, pages 575–584. IEEE, 2010.

[22] Vincent Conitzer, Rupert Freeman, Nisarg Shah, and Jennifer W. Vaughan. Group fairness for the allocation of indivisible goods.
In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI), 2019.

[23] Sergio Currarini, Matthew O. Jackson, and Paolo Pin. An economic model of friendship: Homophily, minorities, and segregation. Econometrica, 77(4):1003–1045, 2009.

[24] Hadi Elzayn, Shahin Jabbari, Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, and Zachary Schutzman. Fair algorithms for learning in allocation problems. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 170–179. ACM, 2019.

[25] Paul Erdős. On the evolution of random graphs. Publ. Math. Inst. Hungar. Acad. Sci., 5:17–61, 1960.

[26] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM (JACM), 45(4):634–652, 1998.

[27] Stephen E. Fienberg and Stanley S. Wasserman. Categorical data analysis of single sociometric relations. Sociological Methodology, 12:156–192, 1981.

[28] Benjamin Fish, Ashkan Bashardoust, Danah Boyd, Sorelle Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. Gaps in information access in social networks? In The World Wide Web Conference, pages 480–490. ACM, 2019.

[29] Alan Frieze and Michał Karoński. Introduction to Random Graphs. Cambridge University Press, 2016.

[30] Edgar N. Gilbert. Random graphs. The Annals of Mathematical Statistics, 30(4):1141–1144, 1959.

[31] Grani A. Hanasusanto, Daniel Kuhn, and Wolfram Wiesemann. K-adaptability in two-stage robust binary programming. Operations Research, 63(4):877–891, 2015.

[32] Michael Isaac, Brenda Elias, Laurence Y. Katz, Shay-Lee Belik, Frank P. Deane, Murray W. Enns, Jitender Sareen, and Swampy Cree Suicide Prevention Team (12 members). Gatekeeper training as a preventative intervention for suicide: a systematic review. The Canadian Journal of Psychiatry, 54(4):260–268, 2009.

[33] Jon Kleinberg, Yuval Rabani, and Éva Tardos.
Fairness in routing and load balancing. In 40th Annual Symposium on Foundations of Computer Science (Cat. No. 99CB37039), pages 568–578. IEEE, 1999.

[34] Amanda Kube, Sanmay Das, and Patrick Fowler. Allocating interventions based on predicted outcomes: A case study on homelessness services. In Proceedings of the AAAI Conference on Artificial Intelligence, 2019.

[35] Miller McPherson, Lynn Smith-Lovin, and James M. Cook. Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1):415–444, 2001.

[36] Kaname Miyagishima. Fair criteria for social decisions under uncertainty. Journal of Mathematical Economics, 80:77–87, 2019.

[37] James B. Orlin, Andreas Schulz, and Rajan Udwani. Robust monotone submodular function maximization. In International Conference on Integer Programming and Combinatorial Optimization, pages 312–324, Waterloo, Canada, 2016. Springer.

[38] Krzysztof Postek and Dick den Hertog. Multistage adjustable robust mixed-integer optimization via iterative splitting of the uncertainty set. INFORMS Journal on Computing, 28(3):553–574, 2016.

[39] Aida Rahmattalabi, Phebe Vayanos, and Milind Tambe. A robust optimization approach to designing near-optimal strategies for constant-sum monitoring games. In International Conference on Decision and Game Theory for Security, pages 603–622. Springer, 2018.

[40] Ab Rashid Ahmad, Zainal Arsad Md Amin, Che Hassandi Abdullah, and Siti Zarina Ngajam. Public awareness and education programme for landslide management and evaluation using a social research approach to determining "acceptable risk" and "tolerable risk" in landslide risk areas in Malaysia. In Kyoji Sassa, Matjaž Mikoš, and Yueping Yin, editors, Advancing Culture of Living with Landslides, pages 437–447. Springer International Publishing, 2017.

[41] John Rawls. A Theory of Justice.
Harvard University Press, 2009.

[42] Erel Segal-Halevi and Warut Suksompong. Democratic fair allocation of indivisible goods. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 482–488. AAAI Press, 2018.

[43] Warut Suksompong. Approximate maximin shares for groups of agents. Mathematical Social Sciences, 92:40–47, 2018.

[44] Alan Tsang, Bryan Wilder, Eric Rice, Milind Tambe, and Yair Zick. Group-fairness in influence maximization. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 5997–6005, 2019.

[45] Vasileios Tzoumas, Konstantinos Gatsis, Ali Jadbabaie, and George J. Pappas. Resilient monotone submodular function maximization. In 2017 IEEE 56th Annual Conference on Decision and Control (CDC), pages 1362–1367. IEEE, 2017.

[46] Phebe Vayanos, Angelos Georghiou, and Han Yu. Robust optimization with decision-dependent information discovery. Available on Optimization Online.

[47] Phebe Vayanos, Daniel Kuhn, and Berç Rustem. Decision rules for information discovery in multi-stage stochastic programming. In 2011 50th IEEE Conference on Decision and Control and European Control Conference, pages 7368–7373. IEEE, 2011.

[48] Jean Walrand. Lecture Notes on Probability Theory and Random Processes. 2004.

[49] İhsan Yanıkoğlu, Bram L. Gorissen, and Dick den Hertog. A survey of adjustable robust optimization. European Journal of Operational Research, 277(3):799–813, 2019.

[50] Chongjie Zhang and Julie A. Shah. Fairness in multi-agent sequential decision-making. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2636–2644.
Curran Associates, Inc., 2014.