NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 7134
Title: Provably robust boosted decision stumps and trees against adversarial attacks

Reviewer 1

--- Summary ---

Contributions: This paper studies the adversarial example problem for ensembles of decision trees and decision stumps (trees with a single internal node). As a main contribution, the authors derive an exact attack algorithm on ensembles of decision stumps for \ell_\infty perturbations. In contrast, this problem is known to be NP-hard for trees with at least 3 internal nodes, via previous work by Kantchelian et al., reference [22] in the present paper. The current work uses this attack to derive an optimal robust training method for stump ensembles. They also provide an approximate upper bound on the robust test error for tree ensembles, which improves upon the naive upper bounds while being efficiently computable.

Quality: My main concern about the present work is the lack of a detailed comparison with two very relevant prior works, references [7] and [22] in the present paper. For one, there is no stand-alone related work section, and comparisons to previous work are only made in passing. The bigger issue is that the experimental claims in this paper are not compared against strong enough baselines, even though previous work is relevant enough to warrant a direct comparison. Specifically, the robust decision trees of [7] should be evaluated side by side with the current algorithms, as they may have much better robust accuracy empirically. Similarly, the attack algorithm of [22] is quickly ruled out as not scaling to large datasets due to its MILP formulation. However, in the case of decision stumps, the MILP may very well be solvable quickly, especially on small datasets. In fact, the previous work [7] solves the MILP exactly to attack tree ensembles over three datasets that are also used in the present paper: diabetes, cod-rna, and breast-cancer. Therefore, it would be much more meaningful to establish the exact robust test accuracy, and to compare the present robust models to the approximate robust decision trees of the previous work.

Clarity: The paper is generally well written and structured clearly. The main theoretical results are clearly and soundly justified. The experimental set-up and results are also very clear and concise.

Significance: As mentioned above, it is difficult to judge the significance of the present work without a better comparison to previous work. On a different note, the focus on decision stumps is also only moderately motivated by their interpretability. The core reason that the new attack and the new robust training are tractable is that decision stumps are separable, since each stump depends on a single coordinate (and because the perturbations are in the \ell_\infty norm). Therefore, the theoretical results in the paper are not especially deep, even though they are certainly novel and interesting.

--- Review ---

Pros:
- The paper presents interesting ideas on how to construct provably robust decision stumps, which was previously unknown in the literature.
- This follows from a new exact attack algorithm for stump ensembles, which uses a simple but elegant observation about separability under \ell_\infty perturbations (a sketch of this per-coordinate attack appears after this review).
- The authors also present an approximate certificate on the robust accuracy of tree ensembles, which was also previously unknown; it utilizes the new result for stumps via a greedy, coordinate-based approach.
- The results are clearly described. The writing is well done.

Cons:
- Some more motivation for focusing on decision stumps would be nice. They are quite a restricted class of tree ensembles. Why is it important to understand adversarial robustness for stump ensembles? Alternatively, can the ideas in the present paper be extended to a richer class of boosted models?
- Many of the results should be explained in the context of the previous work [7], which achieves quite strong results empirically in the same setting as the current work. The work [7] is also on robust decision trees, and it appeared in ICML 2019. Therefore, an experimental comparison with [7] is warranted but missing from this submission.
- In particular, the present authors say: "While the approach of [7] leads to tree ensembles with improved empirical and certified robustness, they have no approximation guarantee on the robust loss or robust error during training time. In contrast we show how to derive an upper bound on the robust loss for tree ensembles based on our results for an ensemble of decision stumps and we show how that upper bound can be minimized during training." So the natural question is how the robust decision trees of the previous work [7] compare with the results of the present work. Is one or the other more robust/accurate on some datasets, i.e., is there a trade-off for certifiability?
- It should be possible to run the MILP of [22] for at least three datasets in the present work, which would lead to a better comparison of the exact versus approximate robust accuracy of tree ensembles (as is done in the previous work [7] for the same datasets).
- A standard approach to constructing robust classifiers is adversarial training. In particular, [22] shows that performing many rounds of adversarial training in the same norm as the attack may be an effective way to create robust tree ensembles (although [22] only evaluates the \ell_0 distance). The present work does not compare against any adversarial training approaches, which I believe is a serious shortcoming and should be addressed, as it may be a competitive baseline.

After author feedback: I will increase my score to a 6, given the thoroughness of the feedback. The new experiments and comparisons to [7] and [22] are enlightening and better showcase the improvements of the present work. Please include much more detail about the importance of decision stumps (e.g., interpretability) in the final version, as well as the experiments and discussion from the feedback.
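For concreteness, below is a minimal sketch (not the authors' code) of the per-coordinate exact attack described above. It assumes binary labels y in {-1, +1}, an ensemble with no bias term, and a hypothetical representation of each stump as a tuple (j, b, w_l, w_r), meaning "contribute w_l if x[j] <= b, else w_r". Because each stump depends on a single coordinate and the \ell_\infty ball constrains coordinates independently, the worst-case margin decomposes into independent per-coordinate minimizations over a piecewise-constant function.

```python
def stump_ensemble_margin(stumps, x):
    """Ensemble output F(x) for stumps given as tuples (j, b, w_l, w_r):
    each stump contributes w_l if x[j] <= b, else w_r."""
    return sum(w_l if x[j] <= b else w_r for (j, b, w_l, w_r) in stumps)


def exact_linf_worst_margin(stumps, x, y, eps):
    """Exact minimum of the margin y * F(x') over the ball ||x' - x||_inf <= eps.

    The minimization decouples over coordinates: for each coordinate j it
    suffices to evaluate the two ball endpoints and every threshold falling
    inside the ball, since the per-coordinate contribution is piecewise
    constant with breakpoints only at the thresholds.
    """
    worst = 0.0
    for j in {jj for (jj, _, _, _) in stumps}:
        stumps_j = [(b, w_l, w_r) for (j2, b, w_l, w_r) in stumps if j2 == j]
        lo, hi = x[j] - eps, x[j] + eps
        # one representative point per constant piece of the coordinate-wise sum
        candidates = [lo, hi] + [b for (b, _, _) in stumps_j if lo <= b <= hi]

        def contrib(v):
            return sum(w_l if v <= b else w_r for (b, w_l, w_r) in stumps_j)

        worst += min(y * contrib(v) for v in candidates)
    return worst  # a value <= 0 means an adversarial example exists in the ball
```

The same decoupling is what makes exact robust training of stump ensembles tractable, as discussed under Significance above.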

Reviewer 2

The authors propose a provable solution to adversarial training of decision stumps and tree ensembles, together with exact values and upper bounds of the "robust test error". This problem is hard in general for arbitrarily complicated classifiers, because the inner maximization in adversarial training is a non-concave problem in general. But the authors show that the inner maximization can be solved exactly in the case of boosted decision stumps by exploiting the fact that the objective function is piecewise constant, so one can perform the maximization by sorting the "boundary points" of the piecewise-constant function. For boosted decision trees, one can rely on upper bounds on the robust error, which are found to be tight empirically. Empirical results show that the model is indeed provably robust for reasonably sized perturbations (the l_inf norm is considered).

The paper overall reads pretty well and the idea sounds interesting. But I have some concerns regarding the evaluations:

1) It would make sense to compare the method against [7], at least for the datasets that are not too big. This could better highlight the advantages of using the proposed method over [7].

2) As another baseline, it would be insightful to compare the results against other classifiers that are adversarially trained (that use approximations in the inner optimization), for example using Projected Gradient Ascent (see the sketch below). This could highlight how solving the inner optimization accurately could lead to better robustness.

3) The training computational complexity is O(n^2 T log T), which means that O(n) passes over the entire dataset may be needed to train the classifier. This may limit the practical use of the proposed method for huge datasets, where O(n^2) could be prohibitive. Is there a way to bring the complexity down to less than O(n^2) running time?
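Regarding point 2, the approximate inner maximization used by such baselines could look roughly like the following sketch, assuming a differentiable surrogate model whose input gradient is exposed as a hypothetical grad_loss(x, y) function (all names and default values are illustrative, not from the paper):

```python
import numpy as np

def pgd_linf(grad_loss, x, y, eps, step_size=0.01, n_steps=40, seed=0):
    """Approximate inner maximization max_{||x' - x||_inf <= eps} loss(x', y)
    by projected gradient ascent with a random start inside the ball.

    grad_loss(x, y) is assumed to return the gradient of the loss w.r.t. x.
    """
    rng = np.random.default_rng(seed)
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)              # random start
    for _ in range(n_steps):
        x_adv = x_adv + step_size * np.sign(grad_loss(x_adv, y))  # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)                  # project onto the ball
    return x_adv
```

Unlike the exact maximization available for stump ensembles, this gives only a lower bound on the inner maximum, which is exactly the gap point 2 asks the authors to quantify.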

Reviewer 3

This paper seeks to bridge two areas: the use of boosting and tree-like models, and robustness in machine learning. The authors show that for such models it is possible to derive the optimal robust loss (for decision stumps) or an upper bound on the robust loss (for decision trees), and they also propose a training procedure for each type of model that shows empirical improvements in robust accuracy.

The mathematical analysis of the proposed loss computation algorithm and the related training algorithm is solid and seems correct. The authors do make some simplifications, such as assuming the loss function is the exponential loss, but claim (without proof) that the analysis also applies to other strictly monotonically decreasing loss functions such as the logistic loss. The math is explained well in a step-by-step manner. Still, I would have found it clearer if the paper had an “algorithm” section that lists out the steps of the algorithm described in Section 3.2 in bullet-point form. This would also make the runtime complexity analysis clearer.

On the empirical side, the results show an improvement in robust test accuracy by using the proposed algorithm. The improvement is clear and it is demonstrated over a range of different datasets. However, I would focus more on the interpretability of the resulting models. In particular, I liked the discussion in Figure 2 analyzing exactly how robust boosted stumps trained on MNIST 2-6 split differently from regular boosted stumps. Interpretability is one of the main advantages of these types of models, so I would be more interested in seeing even more analysis on this front. It is okay to place the analysis in the Appendix, but I would be particularly interested in seeing whether the robust and interpretable trees say anything interesting about the health-related datasets. Otherwise, if you ignore the interpretability of such models, the robustness result on MNIST is easily surpassed by deep neural network models.

Additionally, on the empirical side, I believe it is worth comparing results directly to [7] whenever possible, as the work of [7] seems quite similar. While there are differences in the approach taken to train such robust models and to obtain robust loss guarantees, both approaches do obtain robust loss guarantees in the end. Thus, having a side-by-side comparison of the results whenever a fair comparison is possible seems important to include (again, having this in the Appendix is okay too).

Finally, a small interesting note the authors mention is that a simple sampling method tends to result in a close approximation to the true robust test error for boosted trees on small datasets like the ones used in the paper. Thus, it may be worth having a training method based on sampling to compute the robust loss as another baseline algorithm to compare to (a sketch of such a sampled estimate appears after this review). This is akin to having a (possibly weak) adversarial training procedure as a baseline for provable robustness in deep neural networks.

Overall, the paper has room for improvement (especially on the empirical analysis side), but it addresses an interesting question in a mathematically sound way.

Small comments:
- Line 9 - “Up to our knowledge” -> “To the best of our knowledge”
- Line 71 - “and allows to see” -> “and allows us to see”
- Line 77 - what is N_t?
- Line 99-100 - “Our goal … are provable guarantees” -> “Our goal … is provable guarantees”
- Line 105 - “in original Adaboost” -> “in the original Adaboost”
- Line 105 - “successfully in object detection” -> “successfully used in object detection”
- Line 135 - It may be worth mentioning somewhere closer to the beginning of Section 3.2 that you first describe what to do for a specific coordinate j, and then choose which coordinates j to use later (you discuss this in lines 163-165). Again, this could also be improved with an “algorithm” section that has an algorithm in bullet points.
- Line 243 - “to judge about robustness” -> “to judge robustness”
- Line 264 - “which allows to” -> “which allows us to”

*** After Author Response ***

I am satisfied with the additional experimental results presented in the author response. Again, as discussed previously, I hope the authors do indeed have a longer discussion on interpretability, as that is probably one of the main advantages of using decision trees/stumps. I will keep my score as an accept.
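Regarding the sampling baseline mentioned in the review above, a minimal sketch of such a sampled estimate of the robust test error might look as follows. It assumes a black-box predict function, a test set X given as an array of points with labels y, and an l_inf ball of radius eps; all names are illustrative and not from the paper. Since random sampling can only find, never rule out, adversarial examples, the estimate is a lower bound on the true robust test error.

```python
import numpy as np

def sampled_robust_error(predict, X, y, eps, n_samples=1000, seed=0):
    """Sampled lower bound on the robust test error under l_inf perturbations:
    a point counts as non-robust if the clean point or any of n_samples
    uniform perturbations inside the eps-ball is misclassified.

    predict(x) is assumed to return a label comparable to the entries of y.
    """
    rng = np.random.default_rng(seed)
    errors = 0
    for x, label in zip(X, y):
        if predict(x) != label:          # already misclassified without perturbation
            errors += 1
            continue
        deltas = rng.uniform(-eps, eps, size=(n_samples, len(x)))
        if any(predict(x + d) != label for d in deltas):
            errors += 1
    return errors / len(X)
```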