Prof. Emo Welzl and Prof. Bernd Gärtner
Mittagsseminar Talk Information
Date and Time: Tuesday, July 16, 2013, 12:15 pm
Duration: 30 minutes
Location: CAB G51
Speaker: Hemant Tyagi
In the multi-armed bandit problem, an online algorithm must choose from a given set of strategies S in a sequence of n trials in order to maximize the total cumulative reward. The reward functions r_t: S -> R can change over time, and the aim of the algorithm is to minimize the "regret" of not having constantly played the strategy that yields the highest cumulative reward. In this talk, we will focus on the continuum-armed bandit problem, where S is a compact subset of R^d. For d > 1, it is well known that any algorithm will incur a worst-case regret of Omega(2^d) provided only classical smoothness assumptions (Hölder continuity, differentiability, etc.) are made on the reward functions (curse of dimensionality). We will consider the problem where the reward functions depend on an unknown, fixed subset of k coordinate variables, and derive upper bounds on the regret in this setting. Joint work with Bernd Gärtner.
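To illustrate the regret notion from the abstract, here is a minimal, hypothetical sketch: a continuum-armed bandit with S = [0, 1], played via epsilon-greedy over a uniform discretization. The reward function, noise model, and all parameter values are illustrative assumptions, not part of the talk; the talk's setting also allows r_t to change over time, while this sketch uses a fixed reward for simplicity.

```python
import random

def run_bandit(n=2000, n_arms=20, eps=0.1, seed=0):
    """Epsilon-greedy over a uniform discretization of S = [0, 1].

    Illustrative only: the reward function and noise level are
    assumptions chosen for this sketch, not from the talk.
    """
    rng = random.Random(seed)
    reward = lambda s: 1.0 - (s - 0.3) ** 2            # fixed reward, peak near s = 0.3
    arms = [i / (n_arms - 1) for i in range(n_arms)]   # discretized strategy set
    counts = [0] * n_arms
    means = [0.0] * n_arms
    total = 0.0
    for _ in range(n):
        if rng.random() < eps:
            a = rng.randrange(n_arms)                  # explore a random arm
        else:
            a = max(range(n_arms), key=lambda i: means[i])  # exploit best estimate
        r = reward(arms[a]) + rng.gauss(0, 0.1)        # noisy observed reward
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]         # running mean update
        total += r
    # Regret: best fixed strategy's cumulative reward minus what we earned.
    best = max(reward(s) for s in arms)
    return n * best - total

print(run_bandit())
```

The regret here grows sublinearly in n because exploration concentrates on the near-optimal arms; the talk concerns how such guarantees degrade (or can be rescued) as the dimension d of S grows.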