Patrick F. Riley's Publications


Coaching: Learning and Using Environment and Agent Models for Advice

Patrick Riley. Coaching: Learning and Using Environment and Agent Models for Advice. Ph.D. Thesis, Computer Science Dept., Carnegie Mellon University, 2005. CMU-CS-05-100

Download

PDF: http://www.cs.cmu.edu/~pfr/thesis/pfr_thesis.pdf
Gzipped PostScript: http://www.cs.cmu.edu/~pfr/thesis/pfr_thesis.ps.gz

Abstract

Coaching is a relationship in which one agent provides advice to another about how to act. This thesis explores a range of problems faced by an automated coach agent in providing advice to one or more automated advice-receiving agents. The coach's job is to help the agents perform as well as possible in their environment. We identify and address a set of technical challenges: How can the coach learn and use models of the environment? How should advice be adapted to the peculiarities of the advice receivers? How can opponents be modeled, and how can those models be used? How should advice be represented to be used effectively by a team? This thesis serves both to define the coaching problem and to explore solutions to the challenges posed. This thesis is inspired by a simulated robot soccer environment with a coach agent that can provide advice to a team in a standard language. The author, in collaboration with others, developed this coach environment and standard language as the thesis progressed. The experiments in this thesis represent the largest known empirical study in the simulated robot soccer environment. A predator-prey domain and a moving maze environment are used for additional experimentation. All algorithms are implemented in at least one of these environments, and empirical validation is performed. In addition to the coach problem formulation and decompositions, the thesis makes several main technical contributions: (i) Several opponent model representations with associated learning algorithms, whose effectiveness in the robot soccer domain is demonstrated. (ii) A study of the effects of and need for coach learning under various limitations of the advice receiver and communication bandwidth. (iii) The Multi-Agent Simple Temporal Network, a multi-agent plan representation that is a refinement of a Simple Temporal Network, with an associated distributed plan execution algorithm. (iv) Algorithms for learning an abstract Markov Decision Process from external observations, a given state abstraction, and partial abstract action templates. The use of the learned MDP for advice is explored in various scenarios.
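Contribution (iv) centers on learning an abstract MDP from external observations through a given state abstraction. As an illustration only (not the thesis's actual algorithm), a minimal maximum-likelihood sketch in Python, assuming observations have already been mapped into hypothetical (abstract state, abstract action, next abstract state) triples:

```python
from collections import defaultdict

def estimate_transition_model(observations):
    """Maximum-likelihood estimate of an abstract MDP's transition
    probabilities from (state, action, next_state) triples.

    `observations` is an iterable of (s, a, s2) tuples, where states
    and actions are hashable abstract labels. The labels used below
    are hypothetical examples, not the thesis's representation.
    """
    # Count how often each successor state follows each (state, action).
    counts = defaultdict(lambda: defaultdict(int))
    for s, a, s2 in observations:
        counts[(s, a)][s2] += 1

    # Normalize counts into conditional probabilities P(s2 | s, a).
    model = {}
    for (s, a), successors in counts.items():
        total = sum(successors.values())
        model[(s, a)] = {s2: n / total for s2, n in successors.items()}
    return model

# Hypothetical soccer-like abstraction: from "midfield", a "pass"
# reaches "attack" in 2 of 3 observed transitions.
obs = [
    ("midfield", "pass", "attack"),
    ("midfield", "pass", "attack"),
    ("midfield", "pass", "midfield"),
]
model = estimate_transition_model(obs)
print(model[("midfield", "pass")])  # roughly {'attack': 0.67, 'midfield': 0.33}
```

A coach could, in principle, query such a learned model to rank abstract actions before phrasing them as advice; the thesis explores far richer settings (partial abstract action templates, advice under bandwidth limits) than this sketch covers.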

BibTeX

@PhdThesis{riley05:thesis,
  author =       {Patrick Riley},
  title =        {Coaching: Learning and Using Environment and Agent
                  Models for Advice},
  school =       {Computer Science Dept., Carnegie Mellon University},
  year =         2005,
  note =         {CMU-CS-05-100},
  bib2html_dl_pdf = {http://www.cs.cmu.edu/~pfr/thesis/pfr_thesis.pdf},
  bib2html_dl_psgz = {http://www.cs.cmu.edu/~pfr/thesis/pfr_thesis.ps.gz},
  abstract =     {Coaching is a relationship where one agent provides
                  advice to another about how to act. This thesis
                  explores a range of problems faced by an automated
                  coach agent in providing advice to one or more
                  automated advice-receiving agents. The coach's job
                  is to help the agents perform as well as possible in
                  their environment. We identify and address a set of
                  technical challenges: How can the coach learn and
                  use models of the environment? How should advice be
                  adapted to the peculiarities of the advice
                  receivers? How can opponents be modeled, and how can
                  those models be used? How should advice be
                  represented to be effectively used by a team? This
                  thesis serves both to define the coaching problem
                  and to explore solutions to the challenges posed. This
                  thesis is inspired by a simulated robot soccer
                  environment with a coach agent that can provide
                  advice to a team in a standard language. The author,
                  in collaboration with others, developed this coach
                  environment and standard language as the thesis
                  progressed. The experiments in this thesis represent
                  the largest known empirical study in the simulated
                  robot soccer environment. A predator-prey domain and
                  a moving maze environment are used for
                  additional experimentation. All algorithms are
                  implemented in at least one of these environments
                  and empirical validation is performed. In addition
                  to the coach problem formulation and decompositions,
                  the thesis makes several main technical
                  contributions: (i) Several opponent model
                  representations with associated learning algorithms,
                  whose effectiveness in the robot soccer domain is
                  demonstrated. (ii) A study of the effects of and
                  need for coach learning under various limitations of
                  the advice receiver and communication bandwidth. (iii)
                  The Multi-Agent Simple Temporal Network, a
                  multi-agent plan representation that is a refinement
                  of a Simple Temporal Network, with an associated
                  distributed plan execution algorithm. (iv)
                  Algorithms for learning an abstract Markov Decision
                  Process from external observations, a given state
                  abstraction, and partial abstract action
                  templates. The use of the learned MDP for advice is
                  explored in various scenarios.}
}

Generated by bib2html.pl (written by Patrick Riley) on Thu Mar 31, 2005 16:21:00