Gradient Descent for General Reinforcement Learning

Part of Advances in Neural Information Processing Systems 11 (NIPS 1998)


Authors

Leemon Baird, Andrew Moore

Abstract

A simple learning rule is derived, the VAPS algorithm, which can be instantiated to generate a wide range of new reinforcement-learning algorithms. These algorithms solve a number of open problems, define several new approaches to reinforcement learning, and unify different approaches to reinforcement learning under a single theory. These algorithms all have guaranteed convergence, and include modifications of several existing algorithms that were known to fail to converge on simple MDPs. These include Q-learning, SARSA, and advantage learning. In addition to these value-based algorithms it also generates pure policy-search reinforcement-learning algorithms, which learn optimal policies without learning a value function. In addition, it allows policy-search and value-based algorithms to be combined, thus unifying two very different approaches to reinforcement learning into a single Value and Policy Search (VAPS) algorithm. And these algorithms converge for POMDPs without requiring a proper belief state. Simulation results are given, and several areas for future research are discussed.

1 CONVERGENCE OF GREEDY EXPLORATION

Many reinforcement-learning algorithms are known that use a parameterized function approximator to represent a value function, and adjust the weights incrementally during learning. Examples include Q-learning, SARSA, and advantage learning. There are simple MDPs where the original form of these algorithms fails to converge, as summarized in Table 1. For the cases marked with a check (✓), the algorithms are guaranteed to converge under reasonable assumptions such as
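To make the setting concrete, the following is a minimal sketch (not the paper's VAPS rule) of the kind of incremental weight update these algorithms perform: Q-learning with a linear function approximator Q(s, a) = w · φ(s, a). The feature map, learning rate, and discount factor are illustrative assumptions, not values from the paper.

```python
import numpy as np

def q_learning_update(w, phi, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One incremental weight update for approximate Q-learning.

    w       -- weight vector of the function approximator
    phi     -- feature function: phi(state, action) -> np.ndarray (assumed)
    s, a    -- current state and action
    r       -- observed reward
    s_next  -- next state
    actions -- iterable of actions available in s_next
    """
    q_sa = np.dot(w, phi(s, a))                                # current estimate Q(s, a)
    q_next = max(np.dot(w, phi(s_next, b)) for b in actions)   # greedy bootstrap target
    td_error = r + gamma * q_next - q_sa                       # temporal-difference error
    return w + alpha * td_error * phi(s, a)                    # incremental weight step
```

It is exactly this style of update whose convergence (or failure to converge) is summarized in Table 1.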