Learning Saccadic Eye Movements Using Multiscale Spatial Filters

Part of Advances in Neural Information Processing Systems 7 (NIPS 1994)


Authors

Rajesh Rao, Dana Ballard

Abstract

We describe a framework for learning saccadic eye movements using a photometric representation of target points in natural scenes. The representation takes the form of a high-dimensional vector comprised of the responses of spatial filters at different orientations and scales. We first demonstrate the use of this response vector in the task of locating previously foveated points in a scene and subsequently use this property in a multisaccade strategy to derive an adaptive motor map for delivering accurate saccades.
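The abstract does not specify the filter bank in detail; the sketch below is only an illustration of the general idea of a multiscale, multi-orientation response vector used to relocate a previously foveated point. The even-symmetric Gabor-like kernels, the dyadic scale progression, the Euclidean matching of response vectors, and the names `gabor_kernel`, `response_maps`, and `response_vector` are assumptions for this example, not the paper's exact filters or method.

```python
import numpy as np
from scipy.ndimage import convolve


def gabor_kernel(size, sigma, theta, wavelength):
    """Real-valued, zero-mean Gabor-like kernel at orientation theta and scale sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    kernel = envelope * carrier
    return kernel - kernel.mean()  # zero mean so uniform regions give ~0 response


def response_maps(image, n_orientations=4, n_scales=3):
    """Filter the image with every (scale, orientation) kernel.

    Returns an (H, W, n_scales * n_orientations) array: the response vector
    at pixel (r, c) is maps[r, c, :].
    """
    maps = []
    for s in range(n_scales):
        sigma = 1.5 * (2 ** s)            # dyadic scale progression (assumption)
        size = int(6 * sigma) | 1         # odd kernel size covering ~±3 sigma
        for o in range(n_orientations):
            theta = o * np.pi / n_orientations
            k = gabor_kernel(size, sigma, theta, wavelength=4.0 * sigma)
            maps.append(convolve(image, k, mode="nearest"))
    return np.stack(maps, axis=-1)


def response_vector(maps, point):
    """High-dimensional photometric description of a single image location."""
    r, c = point
    return maps[r, c, :]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.random((128, 128))
    maps = response_maps(scene)

    # Memorize the response vector at a "foveated" target point...
    target = (64, 40)
    v_target = response_vector(maps, target)

    # ...then relocate it by finding the pixel whose response vector is closest.
    distances = np.linalg.norm(maps - v_target, axis=-1)
    best = np.unravel_index(np.argmin(distances), distances.shape)
    print("target:", target, "best match:", best)
```

On a real natural image the matching step would typically be restricted to candidate fixation points rather than every pixel; the brute-force search here simply shows that the stored response vector carries enough photometric information to identify the original location.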