Paper   IPM / Cognitive Sciences / 7513
School of Cognitive Sciences
  Title:   Cooperative Q-Learning in a Team of Specialized Agents
  Author(s): 
1.  S. Mastour Eshgh
2.  B. Nadjar Araabi
3.  M. Nili Ahmadabadi
  Status:   In Proceedings
  Proceeding: IROS-03 Workshop on Learning and Evolution in Multi-Agent Systems
  Year:  2003
  Supported by:  IPM
  Abstract:
In distributed AI, several agents cooperate to achieve a common goal or accomplish a shared task. Learning, on the other hand, is an essential part of intelligent agents: through learning, an agent changes its behavior based on its previous experiences. Cooperative learning can be realized in a multi-agent system if agents are capable of learning from both their own experiences and other agents' knowledge and expertise. Because it draws on more knowledge and information-acquisition resources, cooperative learning results in higher efficiency and faster learning compared with individual learning. In the real world, however, implementation of cooperative learning is not a straightforward task, due to possible differences in areas of expertise. In this paper, agents are considered in an environment with multiple goals or tasks. As a result, they can become expert in different domains with different amounts of expertness. By emphasizing differences in areas of expertise in cooperative learning, we essentially would like to know "what kind of knowledge from which agent can be used to better improve the overall performance?" In this study, we focus on cooperative reinforcement learning in a communicating homogeneous multi-agent system, where each agent uses the one-step Q-learning algorithm. Different methods are introduced for cooperative learning when agents have different areas of expertise. Two crucial questions are addressed in this paper: "How can the area of expertise of an agent be extracted?" and "How can the agents improve their performance in cooperative learning by knowing their areas of expertise?" An algorithm is developed to extract the area of expertise based on state transitions. Three new methods for cooperative learning through combination of Q-tables are developed, and examined for overall performance after cooperation.
The performance of the developed methods is compared with that of individual learning, strategy sharing, and weighted strategy sharing as well. The obtained results show the superior performance of the area-of-expertise-based methods as compared with existing cooperation methods, which do not use the notion of area of expertise. These results are very encouraging, in support of the idea that cooperation based on the area of expertise performs better than general cooperative learning methods.
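The two building blocks named in the abstract — each agent's one-step Q-learning update, and combining Q-tables by weighted strategy sharing — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tabular layout, learning parameters, and the expertness-derived weights are all assumptions for the example.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One-step Q-learning update applied by an individual agent.

    Q is a (num_states x num_actions) table; alpha and gamma are
    illustrative values, not taken from the paper.
    """
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    return Q

def weighted_strategy_sharing(q_tables, weights):
    """Combine agents' Q-tables as a weighted average.

    In weighted strategy sharing the weights reflect each agent's
    expertness; the weights passed here are hypothetical inputs,
    not the paper's expertness measure.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the combined table stays in range
    return sum(wi * Qi for wi, Qi in zip(w, q_tables))
```

The paper's area-of-expertise methods go further, weighting contributions per state region rather than per whole table; the global average above is only the baseline they are compared against.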

