NeurIPS 2020

A Class of Algorithms for General Instrumental Variable Models


Meta Review

The work provides a method based on modern machine learning for bounding causal effects under the instrumental variable graph when both treatment and outcome variables are continuous. Overall, reviewers were positive about the paper, and I share the general assessment: this is a very nice and strong piece of work. Having said that, I will list some serious issues I found when reading the paper (the not-so-good part), which I expect the authors will take into account and reflect in the camera-ready version of the paper.

First, the paper's contribution is overstated, which is not needed given the high quality of the work (!). For instance, the authors say (lines 35-36): “In this work, we develop algorithms to compute these bounds on causal effects over all IV models compatible with the data in a general continuous setting.” This is misleading since the work doesn't consider the most general setting. In particular, assumptions are made about the latent space of the exogenous variables. These assumptions may be reasonable or unreasonable, depending on the context, but they do not solve the most general setting (more below). This issue is exacerbated given that the paper says: “One of the major obstacles to trustworthy causal effect estimation with observational data is the reliance on the strong, untestable assumption of no unobserved confounding.” That's somewhat ironic, since the parametrization and corresponding optimization procedure proposed in the paper are valid exactly *because* of such assumptions about the exogenous latent space (!). In other words, the narrative used in the introduction is inaccurate and needs to be improved; the real contribution of the paper needs to be stated more clearly.

Furthermore, the comparison with Balke & Pearl, 1994 (henceforth, BP), the canonical result in the field, is misleading, since the main strength of BP's approach is that it avoids imposing *any* parametric constraint over the latent space. In fact, BP is able to do so by constructing a partitioning of the latent space that is universal BUT only works when the endogenous variables are discrete (a minimal sketch of this construction appears at the end of this review). At first, I thought the paper solved this problem and would offer a counterpart construction for the continuous domain. This was not the case; the problem was solved by imposing constraints over the latents. There exists a recent attempt to bound causal effects when Y is continuous but X is still discrete, which, to the best of my knowledge, is also universal in the sense of BP's construction for the discrete case (Zhang and Bareinboim, 2020, Columbia CausalAI Laboratory, Technical Report R-61, https://causalai.net/r61.pdf). I recommend the authors check this result, understand the subtleties involved, and add a short comparison. Again, to the best of my knowledge, it's not known how to parametrize continuous models in full generality, à la BP, when both treatment and outcome are continuous. It's okay to add assumptions, but one needs to be as explicit and transparent as possible about them.

Last but not least, the actual causal effect in the simulations (Fig. 2, Row 2, Column 1) lies outside the derived bounds, completely off! This issue was attributed to finite samples in the response to Reviewer 5 during the rebuttal stage, but it's not at all clear that this is the case. Again, I wouldn't dismiss the possibility that the latent space's parametrization is entirely wrong.
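For concreteness, here is a minimal sketch of BP's construction in the fully binary case (instrument Z, treatment X, outcome Y), which is the benchmark the paper should be contrasted against. The latent confounder is partitioned, without loss of generality, into 16 response types (rx, ry); the observed conditionals P(x, y | z) are linear in the type probabilities q, so the ATE is bounded by two linear programs. This sketch is my own illustration, not the authors' method; the observed distribution p and the helper names (f_x, f_y, bp_bounds) are hypothetical.

    # Balke & Pearl (1994)-style bounds for binary Z, X, Y via
    # linear programming over 16 latent response types.
    import numpy as np
    from scipy.optimize import linprog

    def f_x(z, r_x):
        # Response functions Z -> X: never-taker, complier, defier, always-taker.
        return [0, z, 1 - z, 1][r_x]

    def f_y(x, r_y):
        # Response functions X -> Y, the same four shapes.
        return [0, x, 1 - x, 1][r_y]

    def bp_bounds(p):
        # p[z][x][y] = P(X=x, Y=y | Z=z), shape (2, 2, 2).
        types = [(rx, ry) for rx in range(4) for ry in range(4)]
        # The ATE is linear in q, with coefficient f_y(1,ry) - f_y(0,ry) per type.
        c = np.array([f_y(1, ry) - f_y(0, ry) for _, ry in types], float)
        # Equality constraints: match P(x, y | z) for each (z, x, y),
        # plus normalization sum(q) = 1.
        A_eq, b_eq = [], []
        for z in (0, 1):
            for x in (0, 1):
                for y in (0, 1):
                    row = [float(f_x(z, rx) == x and f_y(x, ry) == y)
                           for rx, ry in types]
                    A_eq.append(row)
                    b_eq.append(p[z][x][y])
        A_eq.append([1.0] * len(types))
        b_eq.append(1.0)
        lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
        hi = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
        return lo.fun, -hi.fun

    # Hypothetical observed distribution P(X, Y | Z):
    p = np.array([[[0.30, 0.20], [0.25, 0.25]],
                  [[0.10, 0.15], [0.30, 0.45]]])
    print(bp_bounds(p))  # (lower, upper) bounds on the ATE

The key point is that, beyond discreteness, no assumption whatsoever is placed on the latent space: every IV model over binary variables induces some q over these 16 types, so the interval is valid universally. This is exactly the guarantee that a parametrized continuous latent space does not provide.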
I have tried to avoid relying on my personal opinion, but I feel it's extremely dangerous to allow one to impose parametric constraints over the unobservables without any guidance or way of judging their plausibility. Naturally, this wouldn't happen if the partitioning were universal. Overall, this is a nice piece of work with applications in core causal inference and reinforcement learning; therefore, my recommendation is 'accept.'