NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019 at the Vancouver Convention Center
Paper ID: 4172
Title: The Parameterized Complexity of Cascading Portfolio Scheduling

Reviewer 1

Post-rebuttal: I am satisfied with the response. I hope the authors can include the QBF data in the final version of this paper.

--------------------------

ORIGINALITY
===========
The work's main originality is in introducing the study of CPS as a parameterized computational problem. The techniques used to establish fixed-parameter tractability or intractability are fairly standard in the area and cannot be considered new contributions in my opinion.

QUALITY & CLARITY
=================
The paper systematically analyzes the following parameterizations:
* number of algorithms
* number of tests
* length of the optimal schedule
* algorithm and test failure degree
* algorithm and test success degree
* failure cover number, and more generally, treewidth of the failure graph
* success cover number

The proof sketches in the main submission, and the full proofs in the supplement, are very clear and well written. The proofs follow standard techniques and are all correct (to the best of my judgement). Where the paper falls severely short, especially as a submission to NeurIPS, is in justifying that any of the above parameterizations are relevant to applications in machine learning and artificial intelligence. Can we expect the failure graph in QBF solving or in ImageNet classification models to have small treewidth or a small success cover number?

SIGNIFICANCE
============
I have mixed feelings about this submission. On the one hand, CPS captures a basic problem in software engineering, and the work establishes some fundamental results about the complexity of the problem. On the other hand, it is going to be of limited interest to the NeurIPS community because it's not clear how the parameterizations relate to actual instances. Also, the submission would be significantly strengthened if the paper looked at the hardness of the specific Scenarios 1, 2, and 3.

Reviewer 2

Originality: As far as I can tell, the paper uses fairly standard arguments to achieve its theoretical results. However, I would argue that the simplicity here is a virtue, not a fault. In a sense, the paper is a showcase of the powerful FPT toolkit being applied to the cascading portfolio scheduling question.

Quality: The paper thoroughly considers many possible parameterizations of a very general problem (see above). This problem captures many plausible scenarios of how one might want to try running a sequence of algorithms, and it encompasses algorithms with unreliable running time or output. Despite the generality of the problem, the paper still manages to establish positive results for a number of cases. I think a reasonable case for acceptance could be made even without the last set of results (failure/success cover number); differentiating between the tractable parameters and the intractable ones among (i) and (ii) is already an interesting contribution.

Clarity: The paper is well organized, presenting the parameters in the natural (simple to complex) order. It might be helpful to add a chart to the introduction indicating the map from parameter to tractable/intractable. Figure 1 was helpful in illustrating the parameters based on cover number, thanks. As one minor nit, the paper said that it would use a spade to mark statements with omitted proofs, but the statements were not actually marked with spades.

Significance: This is a nice foundation to build future work on. I think the biggest weakness of this paper is in addressing whether these parameters are actually small in any of the original motivators for the problem, i.e., whether these algorithms are useful for the original questions that led us here. Leaving this question unanswered is why I would argue that this paper is just an accept instead of a strong accept. Contrast these results with standard treewidth results, where we know that many graphs of interest (e.g., road networks, social networks) have small treewidth, making treewidth a meaningful characterization, not just a characterization that we can prove results for. Is there a good reason to expect failure treewidth to be small?

Rebuttal: I've read your rebuttal. Sounds good, I hope your QBF data works out!

Reviewer 3

[Originality] The authors present a parameterized complexity analysis of CPS, which seems to be motivated by the recent success on Matrix Completion. The results, which systematically explain the tractability and the hardness of CPS when parameterized by the number of algorithms, the number of tests, the schedule length, the algorithm failure/success degree, the test failure/success degree, and the failure/success cover number, are extensive and clearly original.

[Quality]
* This paper establishes not only the positive side (i.e., fixed-parameter tractability) but also the negative side (e.g., W[2]-hardness), which clearly reveals when (and why) an exact solution of CPS is hard or easy to find, and the intuitive statement that the success relation governs the computational complexity is convincing. Some of the proof techniques are technically involved, as far as I can tell from the main paper, though they are based on fundamental paradigms (such as dynamic programming).

[Clarity]
* I was confused by the proof of Theorem 7: in the step from Line 337 to Line 338 on page 7, the authors remove the "\log(n+m)" from the exponent of the "4" in the running time; however, the factor $4^{(2\mathrm{tw}+5)\log(n+m)}$ seems to be $\mathrm{poly}(n+m)^{2\mathrm{tw}+5}$ for some polynomial, and if so it is not FPT (but XP). Please clarify the step from Line 337 to Line 338 (see also the worked calculation sketched after this review).
* Also, including the concrete derivation from $O^*((\log N)^k)$ to $O(f(k)+n)$ for the relevant $k$ in Line 339 (instead of just referring to [19]) would make the paper more self-contained.

[Significance] Overall, I think this is a good paper devoted to a systematic fixed-parameter complexity analysis of the CPS problem, which is motivated by applications.

[Minor issues] In the proof of Proposition 1, $t$ is used to denote both a test and the destination of the DAG (e.g., Line 155 and Line 158); this should be revised.

==== UPDATE ====
Thank you for providing the feedback. The modification to the DP tables makes sense, and the fixed-parameter tractability of CPS[failure treewidth] sounds correct.
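
Regarding the Clarity point on Theorem 7 above, here is a minimal worked calculation of the step in question, assuming base-2 logarithms and reading the bound exactly as printed on Line 337 (this is my reading of the step, not necessarily the authors' intended derivation):

  $4^{(2\mathrm{tw}+5)\log(n+m)} = \big(2^{\log(n+m)}\big)^{2(2\mathrm{tw}+5)} = (n+m)^{2(2\mathrm{tw}+5)}$,

which is a bound of the form $(n+m)^{f(\mathrm{tw})}$, i.e., XP-type; an FPT bound would instead have the form $f(\mathrm{tw}) \cdot (n+m)^{O(1)}$.

By contrast, the $O^*((\log N)^k)$ expression on Line 339 can indeed be turned into an FPT bound via a standard folklore inequality; one possible version (not necessarily the one used in [19]) is

  $(\log N)^k \le (2k)^{2k} + N$ for all $N \ge 2$ and $k \ge 1$,

since either $\log N \le 4k^2$, in which case $(\log N)^k \le (4k^2)^k = (2k)^{2k}$, or $k < \tfrac{1}{2}\sqrt{\log N}$, in which case $(\log N)^k \le 2^{\frac{1}{2}\sqrt{\log N}\,\log\log N} \le 2^{\log N} = N$ (using $\log\log N \le 2\sqrt{\log N}$).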