Gaussian Processes in Reinforcement Learning

Part of Advances in Neural Information Processing Systems 16 (NIPS 2003)

Authors

Malte Kuss, Carl Rasmussen

Abstract

We exploit some useful properties of Gaussian process (GP) regression models for reinforcement learning in continuous state spaces and discrete time. We demonstrate how the GP model allows evaluation of the value function in closed form. The resulting policy iteration algorithm is demonstrated on a simple problem with a two-dimensional state space. Further, we speculate that the intrinsic ability of GP models to characterise distributions of functions would allow the method to capture entire distributions over future values instead of merely their expectation, which has traditionally been the focus of much of reinforcement learning.
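To make the general idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of policy evaluation with a GP-smoothed value function on an assumed toy one-dimensional problem: value estimates at a set of support states are fitted with standard GP regression, and Bellman backups are iterated to a fixed point. The kernel hyperparameters, the dynamics, and the reward are hypothetical choices for illustration; the paper's actual contribution includes computing the relevant expectations in closed form.

    # A minimal sketch of GP-based policy evaluation on a toy 1-D problem.
    # All modelling choices here (kernel hyperparameters, dynamics, reward)
    # are illustrative assumptions, not the paper's exact algorithm.
    import numpy as np

    def sq_exp_kernel(A, B, lengthscale=0.5, signal_var=1.0):
        """Squared-exponential (RBF) covariance between two sets of states."""
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

    def gp_posterior_mean(X, y, Xs, noise_var=1e-4):
        """Standard GP regression posterior mean at test points Xs."""
        K = sq_exp_kernel(X, X) + noise_var * np.eye(len(X))
        Ks = sq_exp_kernel(Xs, X)
        return Ks @ np.linalg.solve(K, y)

    def step(s):
        """Assumed policy-induced transition: deterministic drift toward 0.8."""
        return np.clip(s + 0.1 * (0.8 - s), 0.0, 1.0)

    def reward(s):
        """Assumed reward, peaked near s = 0.8."""
        return np.exp(-20.0 * (s - 0.8)**2)

    gamma = 0.9
    X = np.linspace(0.0, 1.0, 25)[:, None]   # support states for the GP
    V = np.zeros(len(X))                     # initial value estimates
    for _ in range(100):                     # iterate Bellman backups
        X_next = step(X)
        V_next = gp_posterior_mean(X, V, X_next)  # GP generalises V to successors
        V = reward(X[:, 0]) + gamma * V_next      # backup targets at support states

    print(V.round(3))

The GP here plays the role the abstract describes: it turns a finite set of value estimates into a function over the whole continuous state space, so the backup can be evaluated at arbitrary successor states. In the paper's setting, where the kernel is Gaussian and the transition distribution is Gaussian, the expectation over successor states can additionally be computed analytically rather than by this simple fixed-point iteration.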