Prof. Emo Welzl and Prof. Bernd Gärtner
Mittagsseminar Talk Information
Date and Time: Thursday, May 30, 2013, 12:15 pm
Duration: 30 minutes
Location: CAB G51
Speaker: Sebastian Stich
A multi-armed bandit problem is a sequential resource allocation problem defined by a set of actions. In every round, a unit resource is allocated to one action and some observable payoff is obtained. The goal of the player is to maximize her total payoff obtained in a sequence of rounds. In order to achieve this goal, the player must find the optimal trade-off between playing actions that did well in the past and exploring unknown actions that might give higher payoffs in the future.
In this talk, we will focus on the stochastic version of this problem, where the actions are given by a set of probability distributions. Auer, Cesa-Bianchi, and Fischer (2002) presented an elegant algorithm (UCB) that tackles the exploration/exploitation trade-off by estimating probabilistic upper bounds on the future performance of each action. These upper confidence bounds (UCB) follow from the Chernoff-Hoeffding inequality.
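As a rough illustration of the idea, here is a minimal sketch of the UCB1 index policy from that paper: each action is played once, and afterwards the action maximizing "empirical mean plus exploration bonus" is chosen. The Bernoulli arms and their success probabilities below are hypothetical, chosen only for demonstration.

```python
import math
import random

def ucb1(arms, rounds, k):
    """Play `rounds` rounds of UCB1 over `arms`, a list of k payoff
    functions each returning a stochastic reward in [0, 1]."""
    counts = [0] * k      # number of times each arm was played
    sums = [0.0] * k      # cumulative payoff per arm
    total = 0.0
    for t in range(1, rounds + 1):
        if t <= k:
            i = t - 1     # initialise: play every arm once
        else:
            # upper confidence bound: empirical mean + sqrt(2 ln t / n_i)
            i = max(range(k), key=lambda j: sums[j] / counts[j]
                    + math.sqrt(2.0 * math.log(t) / counts[j]))
        r = arms[i]()
        counts[i] += 1
        sums[i] += r
        total += r
    return counts, total

rng = random.Random(0)
# hypothetical Bernoulli arms with mean payoffs 0.2, 0.5, and 0.8
probs = (0.2, 0.5, 0.8)
arms = [lambda p=p: 1.0 if rng.random() < p else 0.0 for p in probs]
counts, total = ucb1(arms, 5000, len(arms))
```

With enough rounds, the exploration bonus shrinks for frequently played arms, so the policy concentrates its plays on the arm with the highest mean while still occasionally re-checking the others.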
If time permits, we will also discuss some variations and applications of this problem.