NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 2243
Title: Regularized Weighted Low Rank Approximation

Reviewer 1

I found the results of the article useful but difficult to follow without expertise in previous results on weighted low-rank approximation. Additionally, I think the clarity of the paper could be greatly improved. For example, at the end of the Introduction section, the authors refer to a matrix A that has not been defined before, which makes the reading difficult. Similarly, in line 89 the matrices D_{W_{i,:}} and D_{W_{:,j}} are used but only defined later, in lines 93-94. Also, in line 121 the concept of non-negative rank is used but not defined.

The experimental results are rather weak. For example, the particular case of matrix completion, which corresponds to using a binary weight matrix, is not covered. The regularization parameter lambda was set to 1 without a clear justification.
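For readers without that background, a minimal sketch of the problem under discussion may help. This is the standard factored formulation of regularized weighted low rank approximation; the notation is assumed here, not quoted from the paper:

```latex
% Regularized weighted low rank approximation, factored form (assumed notation):
\min_{U \in \mathbb{R}^{n \times k},\; V \in \mathbb{R}^{k \times d}}
  \left\| W \circ (UV - A) \right\|_F^2 + \lambda \|U\|_F^2 + \lambda \|V\|_F^2
% Matrix completion is the special case W_{ij} \in \{0, 1\}:
% weight 1 on observed entries of A, weight 0 on missing ones.
```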

Reviewer 2

- Originality: The authors propose to study provable guarantees for regularized weighted low rank approximation, while related works have mainly studied the unregularized weighted low rank approximation problem; thus, the problem considered in this paper is new. However, the proposed algorithm and the techniques used to analyze its complexity are similar to those in [RSW16], which makes the contribution of this paper incremental. The related work is adequately cited.

- Quality: The authors provide theoretical analysis of their claims in the supplementary material. However, although they claim that the proposed algorithm can be significantly faster than other algorithms, they did not conduct experiments to support this claim. The authors may want to run experiments showing how much faster the proposed algorithm is than previous algorithms (e.g., SVD).

- Clarity: This paper is clearly written and well organized. It provides enough information to reproduce its results.

- Significance: The authors theoretically show that the regularized weighted low rank approximation problem can be solved with time complexity bounds stated in terms of the statistical dimension rather than the rank. In many real applications the statistical dimension is lower than the rank, so the derived complexity bounds ensure that regularized weighted low rank approximation problems can be solved fast. Thus, the result of this paper is important.

[RSW16] Ilya P. Razenshteyn, Zhao Song, and David P. Woodruff. Weighted low rank approximations with provable guarantees. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 250–263, 2016.
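For context, the statistical dimension referred to above is typically defined from the singular values sigma_i(A) of the input matrix; the definition below is the standard one and is assumed here rather than quoted from the paper:

```latex
% Statistical dimension of A at regularization level \lambda:
\mathrm{sd}_{\lambda}(A) \;=\; \sum_{i} \frac{\sigma_i(A)^2}{\sigma_i(A)^2 + \lambda}
% Each term lies in [0, 1) for \lambda > 0 and vanishes when \sigma_i(A) = 0,
% so \mathrm{sd}_{\lambda}(A) \le \mathrm{rank}(A); larger \lambda shrinks it further.
```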

Reviewer 3

Thank you for addressing my review comments. I am somewhat satisfied with the answers. I will increase my score provided that the authors add the experimental comparisons suggested in the rebuttal to the final version of the paper.

--------------------------------------------------

This paper discusses improvements to weighted low-rank matrix approximation and closely follows the methodology of [Razenshteyn et al. ’16]. Overall the paper is well written; however, although the theoretical analysis is strong enough to show an improvement over [Razenshteyn et al. ’16], the paper does not properly demonstrate the results through experiments. One of the main contributions the authors claim is an improvement in the running time of matrix approximation (lines 87-90), yet there is no empirical evidence in the paper ([Razenshteyn et al. ’16] reports running-time experiments). Can the authors demonstrate this speedup with experiments?

Experiments: Why is the regularization parameter \lambda set to 1? Why not cross-validate (one way to select \lambda on held-out entries is sketched below)? Is there any theoretical justification? I also feel that the dataset sizes are too small (dimensions of 1000) to demonstrate that the proposed method is useful in real-world applications. Can the authors provide at least one experiment with a large dataset? As mentioned earlier, it is desirable to see experiments with running times.
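As a concrete illustration of the cross-validation suggestion, here is a minimal sketch that selects \lambda by held-out reconstruction error. The solver is a plain alternating-least-squares baseline written for this sketch, not the paper's algorithm, and all function names are illustrative:

```python
import numpy as np

def als_rwlra(A, W, k, lam, iters=20, seed=0):
    """Plain alternating least squares for
    min ||W o (UV - A)||_F^2 + lam*(||U||_F^2 + ||V||_F^2).
    A simple baseline for illustration, NOT the paper's algorithm."""
    n, d = A.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((n, k)) / np.sqrt(k)
    V = rng.standard_normal((k, d)) / np.sqrt(k)
    R = lam * np.eye(k)
    for _ in range(iters):
        for i in range(n):            # row i of U: ridge regression on weighted V
            Vw = V * W[i]             # scale column j of V by W[i, j]
            U[i] = np.linalg.solve(Vw @ V.T + R, Vw @ A[i])
        for j in range(d):            # column j of V: ridge regression on weighted U
            Uw = U * W[:, j:j + 1]    # scale row i of U by W[i, j]
            V[:, j] = np.linalg.solve(Uw.T @ U + R, Uw.T @ A[:, j])
    return U, V

def pick_lambda(A, W, k, lams=(0.01, 0.1, 1.0, 10.0), frac=0.2, seed=0):
    """Hold out a fraction of the observed (nonzero-weight) entries,
    fit on the rest, and keep the lambda with the best held-out error."""
    rng = np.random.default_rng(seed)
    obs = np.argwhere(W > 0)
    held = obs[rng.random(len(obs)) < frac]
    W_train = W.astype(float)
    W_train[held[:, 0], held[:, 1]] = 0.0   # hide held-out entries from the fit
    best_lam, best_err = None, np.inf
    for lam in lams:
        U, V = als_rwlra(A, W_train, k, lam)
        err = np.mean((A - U @ V)[held[:, 0], held[:, 1]] ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam
```

Any solver for the regularized weighted objective could be substituted for als_rwlra; the selection logic only needs the fitted factors.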