NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 4016
Title: Deep Scale-spaces: Equivariance Over Scale

This paper prompted a detailed discussion among the reviewers. The authors apply the same techniques that have recently proved popular for building neural networks equivariant to translations, rotations, etc., to formulate a general theory of scale-equivariant neural networks. I believe the mathematical derivations are sound and that this is an interesting direction to pursue. However, at the end of the day the proposed algorithm is simple and similar to other "multi-scale" neural nets that have recently appeared in the literature. This makes the mathematical derivations feel a little too meticulous; the authors should instead have focused on conveying the intuition and providing more extensive and more convincing experimental results. The lack of the latter is particularly concerning given that comparable results published in the vision literature are much stronger (by the authors' admission, they did not have the time/resources to perform similar experiments). On the whole, I felt that this paper is very borderline. For the camera-ready version, I would strongly encourage the authors to bring the paper more in line with what has already been published in vision, as explained by Reviewer 2. That would greatly improve its potential impact.