Multi-Armed Bandits with Bounded Arm-Memory: Near-Optimal Guarantees for Best-Arm Identification and Regret Minimization

Part of Advances in Neural Information Processing Systems 34 (NeurIPS 2021)

Authors

Arnab Maiti, Vishakha Patil, Arindam Khan

Abstract

We study the Stochastic Multi-Armed Bandit problem under bounded arm-memory. In this setting, the arms arrive in a stream, and the number of arms that can be stored in memory at any time is bounded. The decision-maker can only pull arms that are present in memory. We address the problem from the perspective of two standard objectives: 1) regret minimization, and 2) best-arm identification. For regret minimization, we settle an important open question by showing an almost tight guarantee. We show $\Omega(T^{2/3})$ cumulative regret in expectation for single-pass algorithms with arm-memory size of $(n-1)$, where $n$ is the number of arms. For best-arm identification, we provide an $(\varepsilon, \delta)$-PAC algorithm with arm-memory size of $O(\log n)$ and optimal sample complexity $O(\frac{n}{\varepsilon^2} \log \frac{1}{\delta})$.
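To make the bounded arm-memory streaming model concrete, here is a minimal toy sketch of the setting (not the paper's algorithm): arms arrive one at a time, at most a fixed number can be stored, and only stored arms may be pulled. The class name, the Bernoulli rewards, and the oldest-first eviction rule are illustrative assumptions, not details from the paper.

```python
import random

class StreamingBandit:
    """Toy simulation of the bounded arm-memory streaming model:
    arms arrive in a stream, at most `memory_size` arms may be stored,
    and only arms currently in memory can be pulled."""

    def __init__(self, arm_means, memory_size):
        self.arm_means = arm_means      # true Bernoulli means (hidden from the learner)
        self.memory_size = memory_size  # bound on arms kept in memory
        self.memory = []                # indices of currently stored arms

    def arrive(self, arm_index):
        """A new arm arrives; storing it may force evicting a stored arm.
        Evictions are irrevocable in a single pass over the stream.
        (Oldest-first eviction here is just a placeholder policy.)"""
        if len(self.memory) >= self.memory_size:
            self.memory.pop(0)          # evict the oldest stored arm
        self.memory.append(arm_index)

    def pull(self, arm_index):
        """Pull a stored arm; returns a Bernoulli reward."""
        assert arm_index in self.memory, "can only pull arms in memory"
        return 1 if random.random() < self.arm_means[arm_index] else 0

# Usage: n = 5 arms streaming past a memory of size 2.
random.seed(0)
bandit = StreamingBandit(arm_means=[0.1, 0.9, 0.5, 0.3, 0.7], memory_size=2)
for i in range(5):
    bandit.arrive(i)
    rewards = [bandit.pull(a) for a in bandit.memory]
print(bandit.memory)  # only the last two arms remain pullable
```

The sketch highlights the core constraint the paper works under: once an arm is evicted, a single-pass algorithm can never sample it again, which is what makes both regret minimization and best-arm identification harder than in the classical setting.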