
Introduction

Many plans that we use in our everyday lives specify ways of coping with various problems that might arise during their execution. In other words, they incorporate contingency plans. The contingencies involved in a plan are often made explicit when the plan is communicated to another agent, e.g., ``try taking Western Avenue, but if it's blocked use Ashland,'' or ``crank the lawnmower once or twice, and if it still doesn't start jiggle the spark plug.'' So-called classical planners cannot construct plans of this sort, due primarily to their reliance on three perfect knowledge assumptions:

  1. The planner has full knowledge of the initial conditions in which the plan will be executed, e.g., whether Western Avenue will be blocked;
  2. All actions have fully predictable outcomes, e.g., the planner knows in advance whether cranking the lawnmower will start it;
  3. All change in the world occurs through actions performed by the planner, e.g., nobody else will use the car and empty its gas tank.
Under these assumptions the world is totally predictable; there is no need for contingency plans.
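
To make the idea of a contingency plan concrete, here is a minimal sketch, not taken from the paper and not Cassandra's actual representation, of how a plan with an observation-dependent branch (the Western Avenue example above) might be written down and executed. The plan structure, step names, and the observe() callback are hypothetical illustrations only.

    def execute(plan, observe):
        """Execute a sequence of steps; a ('branch', test, if_true, if_false)
        step defers the choice of sub-plan until `test` is observed at run time."""
        for step in plan:
            if step[0] == "branch":
                _, test, if_true, if_false = step
                execute(if_true if observe(test) else if_false, observe)
            else:
                print("executing:", step[0])

    # ``Try taking Western Avenue, but if it's blocked use Ashland.''
    route_plan = [
        ("drive to Western Avenue",),
        ("branch", "Western Avenue is clear",
         [("take Western Avenue",)],
         [("take Ashland",)]),
        ("arrive at destination",),
    ]

    # Simulate execution in a world where Western Avenue turns out to be blocked.
    execute(route_plan, observe=lambda test: False)

The point of the sketch is that the branch is resolved at execution time, from an observation the planner could not predict; a classical planner working under the assumptions above has no need for such a step.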

The perfect knowledge assumptions are an idealization of the planning context that is intended to simplify the planning process. They allow the development of planning algorithms with provable properties such as completeness and correctness. Unfortunately, there are few domains in which they are realistic: in most, the world is to some extent unpredictable. Relying on the perfect knowledge assumptions in an unpredictable world may prove cost-effective if the planner's uncertainty about the domain is small, or if the cost of recovering from a failure is low. In general, however, these assumptions may lead the planner to forgo options that would have been available had potential problems been anticipated in advance. For example, on the assumption that the weather will be sunny, as forecast, you may neglect to take along an umbrella; if the forecast later turns out to be erroneous, it is too late to use the umbrella to stay dry. When the cost of recovering from failure is high, failing to prepare for possible problems in advance can be an expensive mistake. To avoid mistakes of this sort, an autonomous agent in a complex domain must be able to make and execute contingency plans.

Recently, we and a number of other researchers have begun investigating the possibility of relaxing the perfect knowledge assumptions while staying close to the framework of classical planning [Etzioni, Hanks, Weld, Draper, Lesh and Williamson 1992; Peot and Smith 1992; Pryor and Collins 1993; Draper, Hanks and Weld 1994a; Goldman and Boddy 1994a]. Our work is embodied in Cassandra, a contingency planner whose plans have the following features:




