We describe a new approximation algorithm for solving partially observable MDPs. Our bounded policy iteration approach searches through the space of bounded-size, stochastic finite state controllers, combining several advantages of gradient ascent (efficiency, search through restricted controller space) and policy iteration (less vulnerability to local optima).