NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019 at Vancouver Convention Center
Paper ID: 1798
Title: Distinguishing Distributions When Samples Are Strategically Transformed

Reviewer 1


The paper introduces a new model of strategic classification. Given some type-dependent feature distribution, agents can transform their features into "signals" according to a background graph, and the principal then classifies agents based on the signals. The model is a departure from many recent models of strategic classification, which frame agents as maximizing utility subject to a cost-function penalty, and it also deals with repeated samples from agents rather than a one-shot game. This has the benefit of elucidating the importance of differences in the agents' initial feature distributions (in terms of DTV), which may be intuitively true but has not been captured in recent work. On the other hand, the paper is missing a more thorough comparison between these different perspectives. I find the perspective of manipulating features more natural for many strategic classification problems, and the machine-learning-paper example in the paper is fairly toy. Are there more natural settings where adopting this graph perspective makes sense? In those settings, where does the background graph come from?

Within this graph-theoretic model, the paper presents complete structural results characterizing, in terms of DTV, when successful classification is and is not possible for general graphs, as well as a hardness result for testing whether DTV is non-zero (and hence whether above-chance classification is possible at all). The sample-complexity result is less exciting and follows almost immediately from Hoeffding's inequality once the definitions are in place (a sketch of this argument is given after this review). It also assumes access to a separating set, which Theorem 2 shows is NP-hard to compute in general. Given the general hardness results, I appreciated Section 5, which deals with "partially ordered" signals and shows that, for some graphs, efficient algorithms are possible. However, the motivation for "partially ordered" signals is weak and de-emphasizes the role of graph structure in determining whether inference is possible. I would have appreciated more discussion of what types of graph structures make classification possible in principle: is there a more general characterization?

Overall, the paper is fairly clear and well-written, though I found the introduction too heavy-handed. I am not entirely convinced of the utility of the modeling formalism advocated in this paper for strategic classification problems. Within this formalism, the structural results neatly characterize when inference is possible in general, and the NP-hardness results stress the need to consider graphs with more structure. The example of partially ordered signal sets nicely shows the benefits of structure, though a broader characterization of what types of structures make classification possible would significantly strengthen this paper.

After rebuttal: Thanks to the authors for their thoughtful response. I still think the paper would benefit from a clearer comparison between modeling approaches to strategic classification. I also think the commentary on the revelation principle provided in the rebuttal should be expanded in Section 5. I am, however, raising my score because I was persuaded of the potential applicability of the model through the given examples, and I appreciate that the structural characterization in terms of new quantities like DTV and MaxSep allows one to reason clearly about this model.
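For concreteness, here is a minimal sketch of the Hoeffding argument alluded to above, under the assumption that the principal is handed a separating set $S$ on which the two types' signal distributions place masses differing by at least $\epsilon$ (the symbols here are illustrative, not the paper's notation):

```latex
Let $s_1,\dots,s_T$ be the observed signals and
$\hat{\mu} = \tfrac{1}{T}\sum_{t=1}^{T}\mathbf{1}[s_t \in S]$
the empirical mass on $S$. The two types' expectations of $\hat{\mu}$
differ by at least $\epsilon$, so Hoeffding's inequality gives, for either type,
\[
  \Pr\!\left[\,\bigl|\hat{\mu} - \mathbb{E}[\hat{\mu}]\bigr| \ge \epsilon/2\,\right]
  \;\le\; 2\exp\!\left(-T\epsilon^{2}/2\right).
\]
Thresholding $\hat{\mu}$ at the midpoint of the two means therefore errs with
probability at most $2\exp(-T\epsilon^{2}/2)$, so $T = O(\log(1/\delta)/\epsilon^{2})$
samples suffice for error probability $\delta$.
```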

Reviewer 2


The authors propose a setting where agents receive a sample (e.g. a research idea) and can transform it into one of multiple signals (e.g. a conference). Which samples can be converted into which signals is given by a bipartite graph. There are two types of agents, "good" and "bad", both of which try to appear good, while the goal of the paper is to come up with an approach (requiring as few samples as possible) to differentiate between the two.

The authors show that the sample complexity to differentiate between good and bad is mild and depends on the directed total variation distance (d_DTV) between the good and bad distributions. On the other hand, it is NP-hard to distinguish whether d_DTV is close to, or identical to, 0. Finally, the authors show that in a special case where signals are partially ordered, the sample complexity goes down and the hardness result disappears.

The paper proposes a nice model for studying learning from strategically manipulated samples (a toy sketch of the setting is given after this review). The results in this model are exciting and point at interesting potential follow-ups.

After author response: Since I had raised no serious concerns, the authors' feedback did not need to address my comments; hence my review and score remain the same.
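To make the setting concrete, here is a toy sketch in Python; the graph, distributions, and names below are illustrative inventions, not the paper's notation or algorithm:

```python
import random

# Hypothetical bipartite feasibility graph: each sample (left node) can be
# strategically reported as any of its adjacent signals (right nodes).
FEASIBLE = {
    "strong_idea": ["top_conference", "mid_conference"],
    "weak_idea": ["mid_conference", "workshop"],
}

# Made-up type-dependent distributions over samples.
GOOD_DIST = {"strong_idea": 0.7, "weak_idea": 0.3}
BAD_DIST = {"strong_idea": 0.2, "weak_idea": 0.8}

def draw_sample(dist):
    """Draw one sample from a type's distribution over samples."""
    return random.choices(list(dist), weights=list(dist.values()))[0]

def report(sample):
    """A strategic agent transforms its sample into one feasible signal; here,
    a bad agent mimics the good type by always picking the most prestigious
    feasible signal."""
    return FEASIBLE[sample][0]

# The principal only ever observes the T reported signals, never the samples.
observed = [report(draw_sample(BAD_DIST)) for _ in range(100)]
```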

Reviewer 3


This is a solid paper. The model studied in the paper is well-motivated and has many applications. I found the example of hiring ML researchers in the introduction very helpful; it provides a clear description and motivation of the model.

I like the equivalence between DTV and MaxSep. In particular, it implies that the principal's optimal policy has a simple form: binary-classify the signals into "good" and "bad", then count the number of "good" signals to decide whether the distribution is good (a toy sketch of such a policy follows the typo note below). Even though the principal could use much more complex policies, it is surprising that the optimal policy has such an intuitive form.

TYPOS

Line 477 on Page 13: ">= exp(-T*epsilon^2/2)" should be "<=".
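For illustration, a minimal sketch of the counting policy described above, assuming the principal already knows a separating set of "good" signals and a threshold; all names and values are hypothetical, not the paper's:

```python
def classify_agent(signals, good_signals, tau):
    """Binary-classify each signal, then threshold the fraction labeled "good".

    signals      -- the T signals observed from one agent
    good_signals -- a separating set of signals labeled "good" (assumed known)
    tau          -- threshold, e.g. the midpoint between the two types'
                    expected mass on good_signals
    """
    frac_good = sum(s in good_signals for s in signals) / len(signals)
    return "good" if frac_good >= tau else "bad"

# Example with illustrative values: an agent whose signals rarely land in the
# separating set is classified as "bad".
print(classify_agent(["workshop"] * 8 + ["top_conference"] * 2,
                     good_signals={"top_conference"}, tau=0.45))
```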