Part of Advances in Neural Information Processing Systems 4 (NIPS 1991)
Andrew Moore
A large class of motor control tasks requires that on each cycle the controller is told its current state and must choose an action to achieve a specified, state-dependent, goal behaviour. This paper argues that the optimization of learning rate (the number of experimental control decisions before adequate performance is obtained) and of robustness is of prime importance, if necessary at the expense of computation per control cycle and memory requirement. This is motivated by the observation that a robot which requires two thousand learning steps to achieve adequate performance, or a robot which occasionally gets stuck while learning, will always be undesirable, whereas moderate computational expense can be accommodated by increasingly powerful computer hardware. It is not unreasonable to assume the existence of inexpensive 100 Mflop controllers within a few years, and so even processes with control cycles in the low tens of milliseconds will have millions of machine instructions in which to make their decisions. This paper outlines a learning control scheme which aims to make effective use of such computational power.
1 MEMORY-BASED LEARNING

Memory-based learning is an approach applicable to both classification and function learning in which all experiences presented to the learning box are explicitly remembered. The memory, Mem, is a set of input-output pairs, Mem = {(x1, y1), (x2, y2), ..., (xk, yk)}. When a prediction is required of the output of a novel input xquery, the memory is searched to obtain experiences with inputs close to xquery. These local neighbours are used to determine a locally consistent output for the query. Three memory-based techniques, Nearest Neighbour, Kernel Regression, and Local Weighted Regression, are shown in the accompanying figure.
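The following Python sketch (not from the paper; the class and function names are illustrative) shows how the first two of these techniques can be realized on top of an explicit memory of (x, y) pairs. Local weighted regression would additionally fit a distance-weighted local linear model at each query and is omitted here for brevity.

```python
import numpy as np

# Minimal sketch of memory-based prediction. The memory explicitly stores every
# observed (x, y) pair; predictions for a novel query are formed from the stored
# neighbours closest to it.

class Memory:
    def __init__(self):
        self.xs = []   # remembered inputs
        self.ys = []   # remembered outputs

    def remember(self, x, y):
        """Store an experience explicitly, as memory-based learning requires."""
        self.xs.append(np.asarray(x, dtype=float))
        self.ys.append(np.asarray(y, dtype=float))

    def nearest_neighbour(self, x_query):
        """Return the output of the single closest remembered input."""
        distances = [np.linalg.norm(x - x_query) for x in self.xs]
        return self.ys[int(np.argmin(distances))]

    def kernel_regression(self, x_query, width=1.0):
        """Distance-weighted average of remembered outputs (Gaussian kernel)."""
        d2 = np.array([np.sum((x - x_query) ** 2) for x in self.xs])
        w = np.exp(-d2 / (2.0 * width ** 2))
        return (w[:, None] * np.array(self.ys)).sum(axis=0) / w.sum()


# Usage: remember a few samples of y = sin(x), then query between them.
mem = Memory()
for x in np.linspace(0.0, 3.0, 7):
    mem.remember([x], [np.sin(x)])
print(mem.nearest_neighbour(np.array([1.3])))
print(mem.kernel_regression(np.array([1.3]), width=0.5))
```

Both predictors cost nothing at learning time beyond storing the experience; all the work is deferred to query time, which is consistent with the paper's willingness to spend computation per control cycle in exchange for fast, robust learning.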