__ Summary and Contributions__: This paper presents an approach for better satisfying equalized odds fairness requirements: it resamples sensitive attributes conditionally on the outcome so that equalized odds holds, and then penalizes divergence from this distribution during training. The paper also presents a statistical test for equalized odds based on similar sensitive-attribute resampling. It then shows experimentally that the proposed approach achieves lower error and better performance on the statistical test for equalized odds.
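For concreteness, here is a minimal sketch (in Python, with illustrative names; not the authors' actual implementation) of how such a resampling-based test could work for a binary sensitive attribute:

```python
import numpy as np

def resampling_pvalue(stat, y_pred, y, a, n_draws=500, rng=None):
    """P-value for dependence of predictions on A beyond what Y explains.

    The statistic observed on the real A is compared against its null
    distribution under dummy draws A* ~ p(A | Y), estimated empirically
    per label for a binary A.
    """
    rng = np.random.default_rng(rng)
    observed = stat(y_pred, a)
    null = []
    for _ in range(n_draws):
        a_star = np.empty_like(a)
        for label in np.unique(y):
            m = y == label
            # Resample A within each label group at its empirical rate.
            a_star[m] = rng.binomial(1, a[m].mean(), m.sum())
        null.append(stat(y_pred, a_star))
    # One-sided p-value with the +1 correction for finite-sample validity.
    return (1 + sum(s >= observed for s in null)) / (1 + n_draws)

# Toy check: predictions that copy A should be flagged as unfair.
gap = lambda p, g: abs(p[g == 1].mean() - p[g == 0].mean())
a = np.tile([0, 1], 50)
y = np.zeros(100, dtype=int)
p_val = resampling_pvalue(gap, a.astype(float), y, a, rng=0)
```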

__ Strengths__: The approach is quite general; the paper addresses both regression and multiclass classification, demonstrating this generality.
The contribution of a general statistical test for equalized odds is also valuable.
The topic of fairness for machine learning is very important in the ML community as methods are being increasingly applied to social prediction/decision-making tasks.

__ Weaknesses__: While only assessing p-values may match realistic deployments, since the full ground truth is available for the experiments, it would be useful to also characterize the actual equalized odds violations. Since all methods typically have many hyperparameters, results showing the trade-offs between predictive loss and equalized odds violations across varying hyperparameter values would be beneficial.

__ Correctness__: The paper appears to be correct.

__ Clarity__: The paper is well written.

__ Relation to Prior Work__: The paper compares against state-of-the-art methods for equalized odds in regression and multi-class prediction.

__ Reproducibility__: Yes

__ Additional Feedback__:

__ Summary and Contributions__: This paper presents a framework for learning predictive models that approximately
satisfy the equalized odds notion of fairness in both regression and multi-class classification problems. They introduce a discrepancy functional that measures the violation of equalized odds. They also develop a hypothesis test to detect whether a prediction rule violates equalized odds. Empirically, their approach achieves a fairness guarantee similar to the baselines while attaining higher accuracy.

__ Strengths__: This paper investigates a classic fairness notion, equalized odds, and considers the real-valued and multi-class settings. The idea of resampling sensitive attributes is interesting.

__ Weaknesses__: The proposed framework is heuristic and does not have a theoretical guarantee. The hypothesis test seems to follow directly from the Holdout Randomization Test [1]. Also, I am not sure whether their proposed hypothesis test is better than existing measures of equalized odds violation; this needs to be explained more clearly.
Thank you for the response. I have no problem with the first two points. For the third point, I may not have explained my question clearly before. My question is that in the literature, we can simply measure the violation of equalized odds by additive or multiplicative violations among groups. What are the advantages of a hypothesis test compared to these simpler measures? Also, I mentioned the missing comparison with the causality literature and wish to see a response to this point. Then I can raise my score.
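To make the comparison concrete, the simple additive measure referred to here can be computed directly; a sketch for binary classification and a binary sensitive attribute, with illustrative names:

```python
import numpy as np

def additive_eo_violation(y_true, y_pred, a):
    """Max absolute gap in TPR and FPR between two groups (A in {0, 1})."""
    gaps = []
    for y_val in (0, 1):  # condition on the true label (equalized odds)
        mask = y_true == y_val
        rate0 = y_pred[mask & (a == 0)].mean()  # group-0 positive rate
        rate1 = y_pred[mask & (a == 1)].mean()  # group-1 positive rate
        gaps.append(abs(rate0 - rate1))
    return max(gaps)

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_pred = np.array([0, 1, 0, 0, 1, 1, 0, 1])
a      = np.array([0, 0, 1, 1, 0, 0, 1, 1])
viol = additive_eo_violation(y_true, y_pred, a)  # 0.5: both gaps are 0.5
```

Unlike a hypothesis test, this point estimate carries no notion of sampling uncertainty, which is presumably the question the rebuttal should address.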

__ Correctness__: The claims, method, and empirical methodology are correct.

__ Clarity__: The paper is well written. From my view, I wish the authors would state their novelty more clearly in the abstract. What is the main modeling contribution: the extension to real-valued or multi-class responses?

__ Relation to Prior Work__: The comparison between their hypothesis test and existing measures of equalized odds should be made clearer.
Their idea seems related to the notion of causality, which has been investigated extensively in the fairness literature recently. I think a discussion of existing causality work is needed.

__ Reproducibility__: Yes

__ Additional Feedback__:

__ Summary and Contributions__: The paper addresses the algorithmic fairness setting with continuous and multi-class outcomes. The authors propose to randomise/flip the values of the sensitive feature A to achieve equalised odds. This is in contrast to other fairness methods that perform flipping/resampling of the target labels Y, e.g. Kamiran and Calders [22]. The authors propose to draw pseudo sensitive features as samples from p(A|Y) and use them instead of the real sensitive features.
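A minimal sketch of this resampling idea as described, assuming a binary sensitive attribute (function and variable names are illustrative, not the paper's):

```python
import numpy as np

def sample_fair_dummies(a, y, rng=None):
    """Draw pseudo sensitive attributes A* ~ p(A | Y) for a binary A.

    For each distinct label y, estimate p(A=1 | Y=y) empirically and
    resample A within that group, breaking any dependence between A and
    the features beyond what is mediated by Y.
    """
    rng = np.random.default_rng(rng)
    a, y = np.asarray(a), np.asarray(y)
    dummies = np.empty_like(a)
    for label in np.unique(y):
        mask = y == label
        p1 = a[mask].mean()  # empirical p(A=1 | Y=label)
        dummies[mask] = rng.binomial(1, p1, size=mask.sum())
    return dummies

a = np.array([0, 1, 0, 1, 1, 0, 1, 0])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
a_star = sample_fair_dummies(a, y, rng=0)
```

The concern raised below about generalising beyond binary A applies precisely to the empirical per-group estimate `p1` in this sketch.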

__ Strengths__: Clarity of presentation is satisfactory. Experiments are presented and the baselines are appropriate.
Randomising sensitive features is an intuitive idea that has been studied before. The intuition is that the target predictions should be independent of a randomised protected characteristic value. However, there are concerns with this approach, discussed below.

__ Weaknesses__: Intuitively, randomising the sensitive feature should lead to fairer results; however, fairness through unawareness poses a risk of unfairness by proxy, as there are ways of predicting protected characteristic features from other features [Ruggieri et al, 2010; Adler et al, 2016]. Also, a continuous analogue of fairness through unawareness [Dwork et al, 2012] has been proposed via counterfactual fairness [Matt J. Kusner et al, Counterfactual fairness, 2017]. In counterfactual fairness, one has to estimate a dependency structure over the features, i.e. a causal graph, in order to create a counterfactual example when changing/flipping the observed sensitive feature.
To properly evaluate the contribution of the proposed approach, it has to be compared, methodologically and empirically, not only to fairness through unawareness, but also to counterfactual fairness approaches.
Another concern is that very little attention is dedicated to analysing how to estimate p(A|Y). The strategy described in (4) could work for binary sensitive features, but how to generalise it to continuous features is not described.
I wonder whether the assumption of equalised odds (2) being achieved automatically (because we sample sensitive features conditioned on the true labels Y and not the predictions Y_hat) excludes/discourages a perfect predictor, where Y_hat = Y.
Minor comments:
There is a lot of discussion of discrepancy measure estimation in Section 2.2. It is surprising that the authors did not utilise MMD GAN [Li et al, 2017; Binkowski et al, 2018] in the proposed approach. This seems like a missed opportunity.
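For reference, the core quantity such MMD-based alternatives estimate is simple to compute; a minimal (biased) RBF-kernel version, with illustrative names:

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between 1-D samples x and y under
    an RBF kernel; a candidate discrepancy between two prediction groups."""
    def k(u, v):
        d = u[:, None] - v[None, :]          # pairwise differences
        return np.exp(-d ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

same = rbf_mmd2(np.zeros(5), np.zeros(5))    # identical samples: 0
apart = rbf_mmd2(np.zeros(5), np.full(5, 3.0))  # separated samples: large
```

MMD GAN replaces the fixed kernel here with a learned one, which is the extension the comment suggests.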
----------------------
Thank you for providing a rebuttal. I would like to confirm to the authors that I am fully aware of the work [6] Moritz Hardt, Eric Price, and Nati Srebro: Equality of opportunity in supervised learning, NeurIPS 2016.
In my opinion, the most critical points (in this review and the one provided by R2) regarding the background literature on causal learning and the counterfactual work (building a causal model and intervening on the sensitive attribute) have not been addressed. Let me recap once again: "To properly evaluate the contribution of the proposed approach, it has to be compared, methodologically and empirically, to counterfactual fairness approaches". The authors did not respond in any constructive way to this criticism.
Secondly, I thought that the authors had compared their results to fairness-through-unawareness methods, where the sensitive feature is not used for training or testing the models. But after reading the rebuttal, I am left uncertain whether the baseline methods labeled "fairness unaware baselines" use sensitive information during training or not. Given that the proposed approach is trying to change the sensitive feature, this is a crucial baseline to compare to.
Finally, it has been stated "Certainly, other forms of randomization have been studied before, but randomizing [sensitive feature] conditionally on the observed Y is the crucial idea necessary to promote equalized odds in model fitting." We should be more mindful of the negative implications this approach might have in contexts *beyond equalised odds*. The proposed method modifies the sensitive information during training (fair dummies), and does not take responsibility for explaining the model/outcomes. In contrast, with causal models, when intervening on the sensitive feature, we can create counterfactual explanations that are mindful of the features and aim to explain the decisions.

__ Correctness__: Appear so.

__ Clarity__: Satisfactory.

__ Relation to Prior Work__: Some concerns are described in the comments above.

__ Reproducibility__: Yes

__ Additional Feedback__:

__ Summary and Contributions__: The authors present an approach to developing fair predictive models. They adapt a number of works from the conditional independence testing and regularization literature to the equalized odds setting. They introduce three main techniques: (i) a regularizer, (ii) a hypothesis testing procedure, (iii) a confidence interval method. They show in experiments that the model performs favorably (in a fairness sense) to other models from the literature.

__ Strengths__: The paper is well-written. It uses principled methods and thinks mostly rigorously about the correct way to approach equalized odds. The solutions proposed are practical, flexible, and general.

__ Weaknesses__: The novelty here may be a little low. I am not personally familiar with the fairness literature, but I am very familiar with conditional independence testing methods. Given that, if someone had told me that the objective was to achieve equation (1) in the paper, I probably would have proposed the same tools that the authors suggest. That being said, I'm not sure it's a bad thing. Sometimes just writing down a problem in a clear way, such that the solution just falls out, is the real contribution. I could believe that to be the case here as well.
One other minor critique would be the authors' use of conformal inference for confidence intervals. In the fairness case, it seems like people are likely not going to be happy with the group-level confidence intervals that conformal methods provide. But again, I know nothing about this literature. If the community feels okay with that notion of uncertainty then it's a reasonable method. Still, there should be some (brief) discussion of the difference between frequentist confidence intervals and conformal intervals.

__ Correctness__: Everything seems correct to me.

__ Clarity__: Yes.

__ Relation to Prior Work__: Yes.

__ Reproducibility__: Yes

__ Additional Feedback__: