AAMAS 2006 Workshop on

Adaptation and Learning in Autonomous Agents and Multiagent Systems


To Be Held
This workshop will be held in conjunction with the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2006), at Future University in Hakodate, Japan, on 8 May 2006, prior to the main technical program of the conference.

About the Workshop
Research on machine learning and adaptive systems has traditionally been concerned with learning and adapting from past experience in the environment. While most of this research has focused on techniques for the acquisition and effective use of problem-solving knowledge from the viewpoint of a single autonomous agent, recent work has also opened the possibility of applying some of these techniques in multiagent settings.
The goal of this workshop is to increase awareness of and interest in adaptive agent research, to encourage collaboration between machine learning experts and agent system experts, and to give a representative overview of current research in the area of adaptive agents. The workshop will serve as an inclusive forum for the discussion of ongoing or completed work on both theoretical and practical issues. Of particular relevance will be work that recognizes the applicability and limitations of current machine learning research when applied to situated agents, as well as the extensions these methods require to deal with the "multi" in multiagent systems.

Tentative Schedule
  9:00 -   9:30   Learning Agents for Distributed and Robust Spacecraft Power Management
                            (by Stephane Airiau, Kagan Tumer, and Adrian Agogino)
  9:30 - 10:00   Reward Design for Emerging Cooperative Behavior in Continuous Task Domains
                            (by Nobuyuki Tanaka, Sachiyo Arai, and Seiichi Koakutsu)
10:00 - 10:30   Improving Individual Learning Capabilities in Multi Agent Systems
                            (by Eloi Puertas and Eva Armengol)

10:30 - 11:00   Coffee Break

11:00 - 11:30   Scalable Potential-Field Multi-Agent Coordination in Resource Distribution Tasks
                            (by Steven de Jong and Karl Tuyls)
11:30 - 12:00   A Reinforcement Learning Approach to Tactical Simulation in Multi-Agent Systems
                            (by Aydano Machado, Yann Chevaleyre, Jean-Daniel Zucker, and Geber Ramalho)
12:00 - 12:30   Personalized Text Categorization Using a MultiAgent Architecture
                            (by Andrea Addis, Giuliano Armano, Giancarlo Cherchi and Eloisa Vargiu)

12:30 - 14:00   Lunch Break

14:00 - 15:00   Invited Speaker: Dr. Enric Plaza
15:00 - 15:20   Convergence to Pareto Optimality in General Sum Games via Learning Opponent's Preference
                            (by Dipyaman Banerjee and Sandip Sen)
15:20 - 15:40   Learning to Commit in Repeated Games
                            (by Stephane Airiau and Sandip Sen)

15:40 - 16:00   Coffee Break

16:00 - 16:30   Learning Coaching Advice to Improve Playing Skills in RoboCup
                            (by Eva Bou, Enric Plaza, and Juan A. Rodriguez-Aguilar)
16:30 - 17:00   Context Detection: Dealing with Non-stationarity in Reinforcement Learning
                            (by Bruno C. da Silva, Eduardo W. Basso, Ana L.C. Bazzan, and Paulo M. Engel)
17:00 - 17:30   Self-Organization of Agents to Solve Machine Sequencing Problems
                            (by Paulo R. Ferreira Jr. and Ana L.C. Bazzan)

Organizing Committee
Liviu Panait
        Department of Computer Science, George Mason University (USA)
Sandip Sen
        Department of Mathematical & Computer Sciences, The University of Tulsa (USA)
Eduardo Alonso
        Department of Computing, School of Informatics, City University (UK)

Program Committee
Karl Tuyls
Sean Luke
Edwin de Jong
Jeff Rosenschein
Michael Rovatsos
Enric Plaza
Ann Nowe
Pieter Jan 't Hoen
Kagan Tumer
Amy Greenwald

Invited Reviewers
Shivaram Kalyanakrishnan
Keith Sullivan
Gabriel Catalin Balan
Stephane Airiau