
Further Learning Opportunities

Throughout the above investigation of issues and techniques in the heterogeneous non-communicating multiagent scenario, many learning approaches have been described. A few of the most obvious remaining future ML applications to this scenario are described here and summarized in Table 6.

One challenge for system builders who use evolving agents is dealing with the credit/blame problem. When several different agents are evolving at the same time, changes in an agent's fitness could be due to its own behavior or due to the behavior of others. Yet if agents are to evolve effectively, they must have a reasonable idea of whether a given change in behavior is beneficial or detrimental. Methods of objective fitness measurement are also needed for testing various evolution techniques. In competitive (especially zero-sum) situations, it is difficult to provide adequate performance measurements over time: even if all agents improve drastically, the observed results could remain the same as long as they all improve by the same amount. One possible way around this problem is to test agents against past agents in order to measure improvement. However, this solution is not ideal: the current agent may have adapted to its current opponent rather than to past opponents, so its score against past opponents may not reflect its true progress. A reliable measurement method would be a valuable contribution to ML in MAS.
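To make the idea of measuring improvement against past agents concrete, the following minimal sketch shows one way such an evaluation might be set up. It is purely illustrative and not taken from any of the surveyed systems: the Agent class, the play function, and the toy game it implements are hypothetical stand-ins for whatever competitive task the agents actually face.

import random

class Agent:
    """A toy agent parameterized by a single strategy value."""
    def __init__(self, strategy):
        self.strategy = strategy

def play(agent_a, agent_b):
    """Toy zero-sum game: +1 if agent_a wins the match, -1 otherwise.
    A stand-in for whatever competitive task the agents actually face."""
    edge = agent_a.strategy - agent_b.strategy
    return 1 if random.random() < 0.5 + 0.1 * edge else -1

def evaluate_against_pool(agent, past_pool, games_per_opponent=100):
    """Average score of `agent` against a frozen pool of past opponents."""
    total = 0
    for opponent in past_pool:
        total += sum(play(agent, opponent) for _ in range(games_per_opponent))
    return total / (len(past_pool) * games_per_opponent)

# Usage: snapshot opponents from earlier generations, then track progress.
past_pool = [Agent(strategy=s) for s in (0.0, 0.5, 1.0)]   # frozen snapshots
current = Agent(strategy=1.5)                               # latest generation
print("score vs. past pool:", evaluate_against_pool(current, past_pool))

Because the opponent pool is frozen, a rising score over generations reflects genuine progress rather than mutual drift between two co-evolving agents, though, as noted above, it says nothing about performance against opponents outside the pool.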

In cooperative situations, agents ideally learn to behave in such a way that they can help each other. Unfortunately, most existing ML techniques focus on exploring behaviors that are likely to help an agent with its own ``personal'' deficiencies. An interesting contribution would be a method for introducing into the learning space a bias towards behaviors that are likely to blend well with the behaviors of other agents.
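One simple way such a bias could be introduced, sketched below under strong assumptions, is to evaluate candidate behaviors with a fitness that mixes the agent's individual score with a measure of how well the behavior complements its teammates. The scoring functions, the blend parameter, and the representation of behaviors as single numbers are all hypothetical, invented here for illustration.

def individual_score(behavior):
    """Toy measure of how well a behavior addresses the agent's own deficiencies."""
    return -abs(behavior - 0.7)          # best on its own at 0.7

def team_score(behavior, teammate_behaviors):
    """Toy measure of how well a behavior complements the teammates:
    here, coverage is best when behaviors are spread apart."""
    return min(abs(behavior - t) for t in teammate_behaviors)

def blended_fitness(behavior, teammate_behaviors, blend=0.5):
    """Fitness used during learning: `blend` controls how strongly the
    search is biased toward behaviors that fit in with the rest of the team."""
    return ((1 - blend) * individual_score(behavior)
            + blend * team_score(behavior, teammate_behaviors))

# Usage: rank candidate behaviors with the team-oriented bias included.
teammates = [0.6, 0.8]
candidates = [0.2, 0.7, 0.9]
print(sorted(candidates, key=lambda b: blended_fitness(b, teammates), reverse=True))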

Many of the techniques described in this section pertain to modeling other agents in the heterogeneous non-communicating scenario. However, the true end is not just knowledge of another agent's current situation, but rather the ability to predict its future actions. For example, the reason it is useful to deduce another mobile robot's goal location is that its path to the goal can then be predicted and collisions avoided. There is still much room for improvement of existing techniques, and for new techniques that allow agents to predict each other's future actions.
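As a concrete illustration of the mobile-robot example, the sketch below predicts another robot's future positions once its goal has been deduced and checks them against the agent's own plan. It is a hypothetical example rather than a surveyed technique: straight-line, constant-speed motion is assumed for both robots, and the goal-inference step itself is not shown.

def predict_positions(start, goal, speed, steps, dt=1.0):
    """Predicted positions along a straight line from start toward goal."""
    sx, sy = start
    gx, gy = goal
    dist = ((gx - sx) ** 2 + (gy - sy) ** 2) ** 0.5 or 1e-9
    ux, uy = (gx - sx) / dist, (gy - sy) / dist
    return [(sx + ux * min(speed * dt * t, dist),
             sy + uy * min(speed * dt * t, dist)) for t in range(steps)]

def collision_predicted(my_plan, other_plan, safety_radius=1.0):
    """True if, at any future time step, the two robots come too close."""
    for (mx, my), (ox, oy) in zip(my_plan, other_plan):
        if ((mx - ox) ** 2 + (my - oy) ** 2) ** 0.5 < safety_radius:
            return True
    return False

# Usage: the other robot's goal was deduced by an opponent model (not shown).
other_plan = predict_positions(start=(0, 0), goal=(10, 10), speed=1.0, steps=15)
my_plan = predict_positions(start=(10, 0), goal=(0, 10), speed=1.0, steps=15)
if collision_predicted(my_plan, other_plan):
    print("replan: predicted paths intersect")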

In the context of teams of agents, it has been mentioned that agents might be suited to different roles in different situations. In a dynamic environment, such flexible agents are more effective if they can switch roles dynamically. For example, if an agent finds itself in a position to easily perform a useful action that is not usually considered part of its current role, it may switch roles and leave its old role available for another agent. A challenging possible approach to this problem is to enable the agents to learn which roles they should assume in which situations. Dynamic role assumption is a particularly good opportunity for ML researchers in MAS.
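One way such learning might be cast, sketched below, is as a simple value-learning problem over (situation, role) pairs. Everything in the sketch is an assumption introduced for illustration: the situations, roles, reward signal, and the RoleLearner class are invented, and a real system would need a much richer situation description.

import random
from collections import defaultdict

class RoleLearner:
    def __init__(self, roles, learning_rate=0.1, exploration=0.1):
        self.roles = roles
        self.q = defaultdict(float)          # (situation, role) -> estimated value
        self.lr = learning_rate
        self.eps = exploration

    def choose_role(self, situation):
        """Mostly pick the best-known role for this situation, sometimes explore."""
        if random.random() < self.eps:
            return random.choice(self.roles)
        return max(self.roles, key=lambda r: self.q[(situation, r)])

    def update(self, situation, role, reward):
        """Move the value estimate toward the reward observed after acting in the role."""
        key = (situation, role)
        self.q[key] += self.lr * (reward - self.q[key])

# Usage with made-up situations and rewards:
learner = RoleLearner(roles=["defender", "attacker"])
for _ in range(1000):
    situation = random.choice(["near_own_goal", "near_opponent_goal"])
    role = learner.choose_role(situation)
    # Hypothetical reward: attacking pays off near the opponent goal, defending near our own.
    reward = 1.0 if (situation, role) in {("near_own_goal", "defender"),
                                          ("near_opponent_goal", "attacker")} else 0.0
    learner.update(situation, role, reward)
print(learner.choose_role("near_own_goal"))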


