NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 2754
Title: SpArSe: Sparse Architecture Search for CNNs on Resource-Constrained Microcontrollers

Reviewer 1

Originality: It is a novel combination of well-known techniques: novel, but not a radically new idea.

Quality: The paper is technically sound, and the claims are supported by experimental results. One aspect is missing, though: applications with MCUs normally also have throughput constraints. These frequently appear in NAS papers as a secondary objective beyond accuracy, but they are missing here, both as an optimization constraint and in the evaluations.

Clarity: In general, the paper is clear. Some minor items are missing, though: 1) What is CIFAR10-binary? 2) What microcontroller is used?

Significance: In general, the direction is interesting, and the paper will provide a new state-of-the-art comparison point. The lack of code, and thus of reproducibility, will strongly limit the significance of this work.

Reviewer 2

This paper combines architecture search and pruning to find better CNNs for memory-limited MCUs. The biggest obstacle to deploying CNNs on MCUs is the memory constraint, which this paper addresses through multi-objective optimization. The CNNs found by this method are more accurate and smaller, and can run on MCUs.

Originality: The MCU target is new for architecture search. However, the constrained search problem is not new to architecture search researchers: existing works have already addressed constraints on the number of parameters, FLOPs, or latency. The authors should cite these works and compare against them. The other methods used in this paper are all well-known techniques.

Quality: This submission is technically sound, and the proposed methods are thoroughly evaluated. The only missing component is a comparison with existing architecture search works.

Clarity: This paper is well-written and well-organized, with adequate background information.

Significance: The MCU is an interesting target for deploying CNNs, and this work definitely advances the state of the art for MCUs. However, I still have doubts about the use cases of CNNs on MCUs: can the latency and power consumption meet application requirements?

Questions: 1. What is the latency and throughput performance of the proposed CNNs compared to the baselines? These metrics are very important for applications, and it would be better if the paper added experiments measuring them.

Overall, the MCU target is interesting, but the novelty of this paper is limited.
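The memory constraint this review highlights amounts to a per-candidate feasibility check during search. As a minimal sketch (not the paper's actual implementation; the flash/RAM budget numbers are illustrative assumptions, since real budgets depend on the target MCU):

```python
def fits_mcu(param_bytes, peak_activation_bytes, flash_kb=1024, ram_kb=256):
    """Check a candidate CNN against hypothetical MCU memory budgets.

    Weights must fit in flash; the peak working set of activations
    must fit in RAM. Budgets here are illustrative, not from the paper.
    """
    return (param_bytes <= flash_kb * 1024
            and peak_activation_bytes <= ram_kb * 1024)

# A 500 KB-weight model with a 100 KB activation peak fits this budget;
# a 2 MB-weight model does not.
print(fits_mcu(500 * 1024, 100 * 1024))
print(fits_mcu(2048 * 1024, 100 * 1024))
```

A constrained NAS method would either reject infeasible candidates outright or fold these budgets into the objectives, which is what makes the multi-objective formulation natural here.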

Reviewer 3

• The quality of the writing and the explanation of the methodology are good.

• The background information about MCUs in the introduction helps build motivation for the problem; however, it may be worth sacrificing some of those details for more depth in the methodology sections (specifically Sections 3.1, 3.4, and 3.5).

• The design and description of the "multi-objective optimization" framework are also well done. Specifically, the way the authors encode the objectives in (1)-(3), and the desire for a Pareto-optimal configuration, are reasonable and seem extensible in future work. However, some non-trivial topics in this section could use more explanation, including the SpVD and BC pruning methods (S3.3), Thompson sampling to choose the next configurations (S3.4), and the coarse-to-fine search (S3.5).

• It seems that morphism is included primarily to minimize search time; how does the framework perform without morphism? Do the resulting configurations of a morphism search and a non-morphism search match? When using morphism, is there a strong bias toward the initial configuration?

• Line 195: how is f_k(omega) unknown in practice? The objectives are defined in (1)-(3) and seem directly measurable.

• Since SpArSe is a "framework" for CNN design, how would the method scale to larger problems and hardware, e.g., CIFAR-10 and/or ImageNet on mobile phones? Or is the future of this framework limited to MCU designs?

• Multi-objective Bayesian optimization is used as the search algorithm, and the authors do mention that existing approaches to NAS (e.g., reinforcement learning-based) are much less efficient. Still, could other optimization processes (genetic algorithms, etc.) be dropped into the SpArSe framework as well, or is MOBO an integral component of the proposed framework?
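The Pareto-optimality criterion referenced above is standard multi-objective machinery rather than anything specific to the paper; as a minimal illustration (the (error, model size) objective pairs below are made up for the example):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical candidates as (error rate, model size in KB):
candidates = [(0.10, 120), (0.08, 200), (0.12, 80), (0.08, 150), (0.15, 70)]
front = pareto_front(candidates)
# (0.08, 200) is dominated by (0.08, 150): same error, larger model.
print(front)
```

A search procedure such as MOBO (or, as the reviewer asks, a genetic algorithm) only needs candidates' objective vectors to maintain such a front, which is why the outer optimizer is in principle swappable.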