NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 2889
Title: Characterizing Bias in Classifiers using Generative Models

The paper proposes a method to study certain types of biases in the data-generating model, which could, for example, translate into discrimination and unfairness in the classification setting. The reviewers agree on the importance and relevance of the proposed framework. Personally, I found the overall narrative somewhat surprising, or unusual, since there is not one unique problem of “bias” but rather multiple types of biases, which are well recognized in the causal inference literature. For instance, Bareinboim and Pearl (Proceedings of the National Academy of Sciences (PNAS), 2016) survey different types of biases, such as confounding and selection bias, among others. In particular, if I understood the paper correctly, the authors are really discussing the mismatch between the proportion of units sampled into the study and that of the underlying population with respect to certain features, which in the sciences is called (sampling) selection bias. To avoid confusing readers, I would try to be more specific in the title and add a short discussion articulating the specific type of bias considered in the proposed work. Obviously, this is not to discredit in any way the technical merit of the proposed contribution.