Paper   IPM / Cognitive / 13831
School of Cognitive Sciences
  Title:   Context transfer in reinforcement learning using Action-Value functions
  Author(s): 
1.  A. Mousavi
2.  B. Araabi
3.  M. Nili Ahmadabadi
  Status:   Published
  Journal: Computational Intelligence and Neuroscience
  Article ID:  428567
  Year:  2014
  Pages:   1-10
  Supported by:  IPM
  Abstract:
This paper discusses the notion of context transfer in reinforcement learning tasks. Context transfer, as defined in this paper, implies knowledge transfer between source and target tasks that share the same environment dynamics and reward function but have different state or action spaces. In other words, the agents learn the same task while using different sensors and actuators. This requires the existence of an underlying common Markov decision process (MDP) to which all the agents' MDPs can be mapped. This is formulated in terms of the notion of MDP homomorphism. The learning framework is Q-learning. To transfer the knowledge between these tasks, the feature space is used as a translator and is expressed as a partial mapping between the state-action spaces of different tasks. The Q-values learned during the learning process of the source tasks are mapped to the sets of Q-values for the target task. These transferred Q-values are merged together and used to initialize the learning process of the target task. An interval-based approach is used to represent and merge the knowledge of the source tasks. Empirical results show that the transferred initialization can be beneficial to the learning process of the target task.
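The transfer scheme described in the abstract can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the authors' implementation: it assumes two source Q-tables, a partial mapping from each target (state, action) pair to a source pair, and an interval-based merge in which each target entry is initialized to the midpoint of the interval spanned by the mapped source values. All names, table sizes, and the toy mappings are invented for the example.

```python
import numpy as np

# Hypothetical setup: two source tasks with learned Q-tables over their
# own state-action spaces (values here are random stand-ins for learned Q-values).
rng = np.random.default_rng(0)
n_src_states, n_src_actions = 4, 2
q_sources = [rng.uniform(0.0, 1.0, size=(n_src_states, n_src_actions))
             for _ in range(2)]

n_tgt_states, n_tgt_actions = 3, 2
# Partial mappings: target (s, a) -> source (s', a'); entries may be missing,
# since the mapping between state-action spaces is only partial.
mappings = [
    {(s, a): (s % n_src_states, a)
     for s in range(n_tgt_states) for a in range(n_tgt_actions)},
    {(s, a): ((s + 1) % n_src_states, a)
     for s in range(n_tgt_states) for a in range(n_tgt_actions)},
]

def transfer_init(q_sources, mappings, shape):
    """Initialize a target Q-table from mapped source Q-values.

    For each target (s, a), collect the source Q-values reachable through
    the partial mappings, represent them as an interval [lo, hi], and
    initialize the target entry to the interval midpoint. Unmapped
    entries stay at zero.
    """
    q_target = np.zeros(shape)
    for s in range(shape[0]):
        for a in range(shape[1]):
            vals = [q[m[(s, a)]]
                    for q, m in zip(q_sources, mappings) if (s, a) in m]
            if vals:
                lo, hi = min(vals), max(vals)
                q_target[s, a] = (lo + hi) / 2.0
    return q_target

# The resulting table would seed ordinary Q-learning on the target task.
q0 = transfer_init(q_sources, mappings, (n_tgt_states, n_tgt_actions))
print(q0.shape)  # (3, 2)
```

After this initialization, standard Q-learning updates on the target task proceed unchanged; the transferred values only bias the starting point of learning.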
