NeurIPS 2020

SuperLoss: A Generic Loss for Robust Curriculum Learning


Meta Review

All reviewers note that, despite the idea being incremental, it is simple yet effective. The empirical evaluation covers a number of problems and is, in that sense, quite thorough. At the same time, some reviewers raised concerns about the experimental setup (R3, R4). R3 has two major concerns: limited novelty and improper empirical validation. On novelty, R3 says, "... results in the same trend of down-weighting hard samples but nothing more. So the potential contribution is very limited". On results, ".. On CIFAR100, the accuracy of SuperLoss quickly degrades below several baselines as the noisy level increases ..". Given that the proposed method applies to many different problems (as the experiments show), I am not concerned about novelty. In their rebuttal, the authors acknowledge that their method is on par with baselines in noisy scenarios; however, unlike previous methods that are specific to one particular loss function, their method can be applied to many different loss functions. This is reasonable. R4's main concern was that the statistical significance of the results is limited because no evaluation is made on ImageNet. The authors do provide object detection results on PASCAL VOC, so I am not concerned about this. In my opinion, the authors have addressed R4's main concern and mostly addressed R3's. At the same time, I would encourage the authors to report a thorough comparison with [47] in the main paper for the camera-ready version. With this, I believe this is a fine piece of work to present to the NeurIPS community.