
Greedy Strategies

The first strategy that comes to mind is to always choose the action with the highest estimated payoff. The flaw is that early unlucky sampling might indicate that the best action's reward is less than the reward obtained from a suboptimal action. The suboptimal action will then always be picked, leaving the truly optimal action starved of data, so its superiority is never discovered. An agent must explore to ameliorate this outcome.
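As a minimal sketch of the problem, consider a k-armed bandit with Bernoulli rewards (the arm means below are purely illustrative, not from the text). Purely greedy selection locks onto whichever arm happened to do best in the first few pulls and never revisits the others:

  import random

  def pure_greedy_bandit(true_means, horizon=1000, seed=0):
      """Pure greedy selection on a k-armed Bernoulli bandit: try each
      arm once, then always pull the arm with the highest empirical mean."""
      rng = random.Random(seed)
      k = len(true_means)
      counts = [0] * k          # number of pulls per arm
      estimates = [0.0] * k     # empirical mean payoff per arm

      # Sample every arm once to initialize the estimates.
      for a in range(k):
          r = 1.0 if rng.random() < true_means[a] else 0.0
          counts[a] = 1
          estimates[a] = r

      for _ in range(horizon - k):
          a = max(range(k), key=lambda i: estimates[i])   # greedy choice
          r = 1.0 if rng.random() < true_means[a] else 0.0
          counts[a] += 1
          estimates[a] += (r - estimates[a]) / counts[a]  # incremental mean

      return estimates, counts

  # Depending on the random draws, the better arm (mean 0.6) may return 0
  # on its single initial pull while the worse arm returns 1; the greedy
  # agent then pulls the worse arm for the rest of the horizon.
  print(pure_greedy_bandit([0.6, 0.4]))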

A useful heuristic is optimism in the face of uncertainty, in which actions are selected greedily, but strongly optimistic prior beliefs are put on their payoffs so that strong negative evidence is needed to eliminate an action from consideration. This still has a measurable danger of starving an optimal but unlucky action, but the risk can be made arbitrarily small. Techniques like this have been used in several reinforcement learning algorithms, including the interval exploration method [52] (described shortly), the exploration bonus in Dyna [116], curiosity-driven exploration [102], and the exploration mechanism in prioritized sweeping [83].
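A minimal sketch of this idea, under assumed parameters not taken from the text: each arm's estimate starts at an optimistic value that is treated as if it had already been observed several times (a hypothetical prior_weight), so a single bad outcome is not enough to rule an arm out, yet action selection remains purely greedy:

  import random

  def optimistic_greedy_bandit(true_means, optimistic_value=1.0,
                               prior_weight=5, horizon=1000, seed=0):
      """Greedy selection with optimistic initial estimates: every arm
      starts at `optimistic_value`, weighted as if observed
      `prior_weight` times, so several low rewards are needed before
      an arm drops out of contention."""
      rng = random.Random(seed)
      k = len(true_means)
      counts = [prior_weight] * k          # phantom optimistic pulls
      estimates = [optimistic_value] * k   # optimistic prior mean

      for _ in range(horizon):
          a = max(range(k), key=lambda i: estimates[i])   # still greedy
          r = 1.0 if rng.random() < true_means[a] else 0.0
          counts[a] += 1
          estimates[a] += (r - estimates[a]) / counts[a]  # incremental mean

      return estimates, counts

  # Every arm must be sampled enough times to be "disproved" before the
  # agent settles on the empirically best one.
  print(optimistic_greedy_bandit([0.6, 0.4]))

Larger values of the optimistic prior (or its weight) force more exploration of every action before it is abandoned, which is how the risk of starving an unlucky optimal action can be made arbitrarily small, at the cost of more initial sampling of poor actions.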


