NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 5592
Title: Dimensionality reduction: theoretical perspective on practical measures

Reviewer 1

Originality: As far as I can tell, the authors' claim that this is the first such work is correct. Previous work has described heuristics or empirical understandings of such behaviour, but this work is nonetheless original in providing a theoretical basis for it.

Quality: The authors' exposition of the problem and the solution is well thought out and expertly laid out in a logical and convincing form. However, the excellent technical contribution is somewhat lacking in discussion, particularly given the authors' aim to bridge the gap between theory and practice; claims such as "This new consequence may serve an important guide for practical considerations" warrant a standalone discussion section, which is not provided. Further, the results predicted in theory could have been compared against empirical experiments to show tightness in practice, and the phase transitions could have been demonstrated experimentally.

Clarity: The work is well written and the notation is easily understood. There are a moderate number of typographical errors; these are listed at the end of this review.

Significance: The authors correctly identify that metric dimensionality reduction is a crucial part of most modern machine learning pipelines. A better theoretical understanding of average-case performance is a highly significant contribution to a vast array of applications. Further, the authors' framework for collapsing existing distortion measures into just two generalized forms will facilitate further theoretical analysis of this ensemble of measures.

Detailed comments:

Intro: The laundry list of use cases might be better served by separating citations by subfield, so they are more useful to the reader.

Background: On page 3, the reference to Hriser (998a) should be (1998a). The Energy+REM definition should be display math, as it takes up a whole line anyway.

Section 2: "in what follows we will omit pi from the notation" - there is an extra space after the parenthesis. Really, this should be a separate sentence and not a parenthetical.

Section 3: The footnote citation should be citep, not citet.

Section 4: The JL citation should be citep, not citet. There is a missing space after the comma at the Rademacher entries matrix.

--------------------------------
Update after author response
--------------------------------

The authors' rebuttal was succinct and clear and provided ample empirical results supporting the theory presented in the original submission. These empirical results significantly strengthen the manuscript, and I argue for acceptance as a result.

Reviewer 2

This paper reads well, with clear mathematical notation, and has sufficient originality. It provides the first comprehensive analysis of metric dimensionality reduction under various distortion measurement criteria, filling the gap between the theoretical and practical viewpoints on metric dimensionality reduction. The paper presents a clean line of related work and highlights its contributions in a reasonable and fair way. As it states, the paper might provide useful practical guidance on metric dimensionality reduction.

Reviewer 3

The authors benchmark various average-case distortion criteria. This task is valuable in itself, as dimensionality reduction plays a central role in many key applications of machine learning. However, the authors do not present enough evidence to support this contribution. For example, on page 5 the authors state "is is easy to see all (adapted versions of) distortion criteria discussed obey all properties." This reader did not find it easy to see, and after re-reading the previous four pages found neither the adaptation in question well specified nor support for the claim that the criteria are fulfilled. If this support is in the appendix, the relevant sections should be clearly indicated to the reader.

This paper could benefit greatly from a more clearly delineated structure. The flow of the paper is difficult to follow, due in part to unintuitive names for sections and subsections. Referencing sections that do not exist (on page 3), combined with the lack of a conclusion, makes this work feel like one that is in progress rather than complete.

It is also hard at times to decouple the authors' contribution from previous work. For example, in Section 2 the motivated properties are largely the same as those proposed by Chennuru Vankadara and von Luxburg (2018); it is unclear exactly what has been generalized. In addition, one of the properties defined by Chennuru Vankadara and von Luxburg (2018), "translation invariance", is not discussed here, although the authors imply they benchmark against all the properties listed.

The authors do not benchmark their proposed new measure (the square root of the variance distortion proposed by Chennuru Vankadara and von Luxburg (2018)) in a simulation framework that would help evidence the utility of their contribution. The lack of empirical simulation supporting the theoretical discussion makes it difficult to place the value of this work within the literature.

Overall, the poor structure of the writing makes the authors' contribution hard to discern. This may be a symptom of the work being in progress, and of the authors not yet being able to appropriately clarify or support their claims to the reader.