__ Summary and Contributions__: The authors present a generalization of randomized smoothing to arbitrary parameterized transformations: rotations or translations in the image domain, and scaling in the audio domain. They provide three flavors of this: 1) a heuristic defense that yields no certificate, 2) a distributional guarantee, where certification is applied based on parameters inferred from the training set, and 3) an online defense, which can provide a heuristic defense and certification for individual test examples. Much care is taken to account for the interpolation error induced by image representations. Experiments on MNIST/CIFAR/ImageNet demonstrate that this technique is scalable and provides nontrivial bounds.

__ Strengths__: Robustness certification against adversarial examples typically considers the case where an adversary is only allowed to add L_p-bounded noise, and is typically not scalable, except for randomized-smoothing approaches. This work extends those results by considering adversaries that can perform any parameterized transformation and is, as expected, scalable to large images/networks. The consideration of interpolation error and the care taken to apply standard interval-analysis techniques to the inversion of rotations are novel and interesting.
The experimental section is very thorough and convincingly demonstrates the scalability/flexibility of the presented technique. The authors provide a fair and honest discussion of the drawbacks of their approach, including a broad characterization of the datasets on which their method will be weak.
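To make the smoothing-over-parameters idea concrete, here is my own minimal sketch of the general recipe, not the authors' implementation; `classify` and `transform` are hypothetical stand-ins for a base classifier and a parameterized transformation (e.g., rotation by an angle):

```python
import numpy as np
from collections import Counter

def smoothed_predict(classify, transform, x, sigma=10.0, n=1000, seed=0):
    """Majority vote over copies of x transformed with Gaussian parameters.

    classify:  base classifier mapping an input to a class label
    transform: parameterized transformation (x, theta) -> transformed x
    sigma:     std-dev of the Gaussian placed over the parameter theta
    """
    rng = np.random.default_rng(seed)
    votes = Counter(classify(transform(x, rng.normal(0.0, sigma)))
                    for _ in range(n))
    return votes.most_common(1)[0][0]
```

The certification step (not shown) would then lower-bound the top-class vote probability from these samples; this sketch only illustrates the smoothed prediction itself.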

__ Weaknesses__: The primary theoretical contribution, while novel, is a fairly incremental improvement over the original analysis presented by Cohen et al. While the authors claim that this method generalizes easily to arbitrary parameterized transformations, they restrict their attention to transformations that are already well studied, such as rotations and translations, omitting transformations such as spatial transformations (stAdv/Wasserstein AdvEx) that have high parameter dimension. Indeed, for such highly parameterized transformations it is unclear how the inverse may be calculated efficiently (i.e., could the interval-splitting approach on line 145 work?). Further, while the authors offer discussions of competing methods, they do not explicitly compare against these methods on CIFAR. Finally, it seems that many tricks were applied to make the method work well (e.g., vignetting, Gaussian blur); it would be interesting to see results without the highly optimized flavor of the proposed technique.
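On the interval-splitting question: as a generic one-dimensional illustration (my own sketch, unrelated to the authors' code) of how splitting refines a sound bound, consider certifying an upper bound on a Lipschitz function of a single parameter. The cost of this refinement grows exponentially with the parameter dimension, which is why it is unclear it would extend to stAdv-style parameterizations:

```python
import heapq
import math

def split_max(f, lip, lo, hi, tol=1e-3):
    """Certified upper bound on max f(t) over [lo, hi] for lip-Lipschitz f.

    On an interval [a, b], f(mid) + lip * (b - a) / 2 soundly bounds the
    maximum; repeatedly halving the interval with the largest bound
    tightens the over-approximation until it is within tol of a value
    actually achieved by f.
    """
    mid = (lo + hi) / 2
    best = f(mid)                      # best value found so far (lower bound)
    heap = [(-(best + lip * (hi - lo) / 2), lo, hi)]
    while True:
        neg_ub, a, b = heapq.heappop(heap)
        if -neg_ub <= best + tol:      # largest remaining bound is tight
            return -neg_ub
        m = (a + b) / 2
        for aa, bb in ((a, m), (m, b)):
            c = (aa + bb) / 2
            best = max(best, f(c))
            heapq.heappush(heap, (-(f(c) + lip * (bb - aa) / 2), aa, bb))
```

For example, `split_max(math.sin, 1.0, 0.0, math.pi)` returns a sound upper bound within `tol` of the true maximum 1.0. In d parameter dimensions the analogous boxes must be split along every axis, so the number of subproblems scales like 2^d per refinement level.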

__ Correctness__: The theoretical results seem correct and fairly standard. The experimental section is quite thorough and appears standard and well documented. Code was provided along with this submission.

__ Clarity__: I found the paper fairly hard to follow in general. Better signposting throughout would help organize the paper more clearly. For example, discussing the need for a pre-smoothed classifier in 4.1 more explicitly would have been helpful. My personal preference is for exposition using symbols/formulas rather than examples, as in Section 5.3. There are also several small typos, e.g.:
line 117: E >= eps(...) should be ||E|| >= eps(...)
line 156: RHS should have a max over i
line 285: bellow -> below

__ Relation to Prior Work__: The authors clearly describe previous techniques for robustifying/certifying neural networks against rotations/translations. As mentioned in the weaknesses section, it would also be helpful to have a direct comparison of their experimental results against previous techniques, using all the tricks (vignetting/Gaussian blur) they applied.

__ Reproducibility__: Yes

__ Additional Feedback__:

__ Summary and Contributions__: This paper introduces a generalization of randomized smoothing to derive a provable defense against parametrized image transformations.
It introduces robustness certification guarantees both at the distributional level (whole dataset) and individual level (per image).
The method achieves provable distributional robustness against rotation-based adversarial attacks.

__ Strengths__: The paper introduces a generalization of randomized smoothing to derive individual and distributional robustness certificates that can scale to the size of Imagenet.
In addition, they derive a novel mechanism to handle the interpolation errors resulting from image transformations.

__ Weaknesses__: The paper suffers from the same problems as randomized smoothing: the number of samples that must be drawn at inference time can be very large.
For CIFAR, errors due to interpolation tend to be high.
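For context on the sample-count issue, here is the generic Cohen-et-al-style certification step that smoothing-based methods inherit (my own sketch, not code from the paper): the Clopper-Pearson lower bound on the top-class probability, and hence the certified radius, tightens only slowly as the number of Monte Carlo samples n grows.

```python
from scipy.stats import beta, norm

def certified_radius(n_top, n, sigma, alpha=0.001):
    """L2 certified radius from n Monte Carlo smoothing samples.

    n_top: votes for the predicted class; sigma: smoothing noise level;
    alpha: allowed failure probability of the certificate.
    """
    # Clopper-Pearson lower confidence bound on p_A = P(top class under noise)
    p_lower = beta.ppf(alpha, n_top, n - n_top + 1)
    if p_lower <= 0.5:
        return 0.0                     # abstain: cannot certify
    return sigma * norm.ppf(p_lower)
```

With the same empirical vote fraction, ten times the samples enlarges the radius only modestly (compare `certified_radius(99, 100, 0.5)` with `certified_radius(990, 1000, 0.5)`), which is why strong certificates require so many forward passes per input.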

__ Correctness__: The theoretical claims and empirical methodology seem to be correct.

__ Clarity__: The paper is very well written and easy to follow.

__ Relation to Prior Work__: The work clearly discusses the prior work and the pros and cons of this work compared to previous works.

__ Reproducibility__: Yes

__ Additional Feedback__:

__ Summary and Contributions__: In this paper, the authors present a certified defense method against adversarial image transformations.
The presented method is based on randomized smoothing and can thus scale to large DNN models and datasets.

__ Strengths__: 1. This method can be applied to a wide range of application domains and image transformations.
2. This method can scale to large datasets and complex DNN models.
3. The authors present defense solutions for both distributional and individual settings.

__ Weaknesses__: 1. Most of the results lack a comprehensive comparison with previous methods (only some brief descriptions in comparison to other work). The authors should provide more comparisons between their method and previous certified defense methods against adversarial transformations.
2. Section 5.3 should be simplified.

__ Correctness__: The claims are correct.

__ Clarity__: The authors should improve the writing.

__ Relation to Prior Work__: It is clearly discussed.

__ Reproducibility__: Yes

__ Additional Feedback__: 1. One merit of this method is its scalability to large-scale datasets. Does this come from the use of randomized smoothing?
2. From the paper, it seems that the current method can only be applied to the case of a single adversarial transformation. Can it be extended to settings where multiple transformations exist?

__ Summary and Contributions__: In this work, the authors extend the probabilistic robustness certification argument of randomized smoothing (RS) to a few different domains, the most interesting of which, in my opinion, is that of parameterized transformations. Doing so comes with some technicalities in terms of rounding, which the authors explain clearly and deal with. They also make a similar argument in order to bound the error arising from any point in the distribution.

__ Strengths__: The paper extends the randomized-smoothing argument to parameterized functions that can handle transformations such as rotation and translation. Though other methods give guarantees for these transformations (in the form of linear propagation or other over-approximations), this paper is, to my knowledge, the first to extend the randomized-smoothing argument in this way. Given that it is always valuable to have an arsenal of models that properly capture the invariances of deep neural networks, I think this work has the potential to be impactful.

__ Weaknesses__: I am quite keen to see how figure 2 looks when applied to context-rich RGB images such as CIFAR and ImageNet. I took a quick glance at the appendix and did not find any examples there.
The authors make a similar argument to SPT but for the data distribution; however, this guarantee must hold over the entire, unknown data distribution, and so it seems quite laborious to compute. To their credit, I still think such a measure is informative even if the authors are unable to compute it with rigorous and decent statistical guarantees in practice.

__ Correctness__: The empirical methodology is sound; however, I was unable to check the proofs of this paper in their entirety, which I will do before the final decision.

__ Clarity__: The paper is very well written and I appreciate the way the authors break down the way that they deal with the inverse computation and rounding issues.

__ Relation to Prior Work__: The authors clearly survey the literature, and to the best of my knowledge cite the most prominent and recent papers in certifying geometric transformations.

__ Reproducibility__: Yes

__ Additional Feedback__: I would like to thank the authors for responding to my query about visualizing some of their perturbations. I think the paper makes an interesting and novel contribution, and I have given it a couple more reads in the meantime. I am convinced of the novelty and positioning of the work, yet I was still unable to find enough time to check the full details of the mathematical derivation for correctness; however, I am increasing my confidence score on the basis of fully understanding this paper's position in the literature.