%0 Journal Article
%T An Improved Reinforcement Learning System Using Affective Factors
%A Takashi Kuremoto
%A Tetsuya Tsurusaki
%A Kunikazu Kobayashi
%A Shingo Mabu
%A Masanao Obayashi
%J Robotics
%D 2013
%I MDPI AG
%R 10.3390/robotics2030149
%X As a powerful and intelligent machine learning method, reinforcement learning (RL) has been widely used in many fields, such as game theory, adaptive control, multi-agent systems, and nonlinear forecasting. The main contribution of this technique is its exploration and exploitation approach to finding optimal or near-optimal solutions to goal-directed problems. However, when RL is applied to multi-agent systems (MASs), problems such as the "curse of dimensionality", the "perceptual aliasing problem", and the uncertainty of the environment pose serious obstacles. Meanwhile, although RL is inspired by behavioral psychology and uses reward/punishment signals from the environment, higher mental factors such as affect, emotion, and motivation are rarely incorporated into its learning procedure. In this paper, to improve agent learning in MASs, we propose a computational motivation function that adopts the two principal affective factors, "Arousal" and "Pleasure", of Russell's circumplex model of affect to improve the learning performance of a conventional RL algorithm, Q-learning (QL). Computer simulations of pursuit problems with static and dynamic prey were carried out, and the results showed that, compared with conventional QL, the proposed method gives agents faster and more stable learning performance.
%K multi-agent system (MAS)
%K computational motivation function
%K circumplex model of affect
%K pursuit problem
%K reinforcement learning (RL)
%U http://www.mdpi.com/2218-6581/2/3/149
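
Note: the following Python sketch is not from the paper; it is a minimal illustration, under stated assumptions, of the idea described in the abstract: an affect-derived motivation value, built from "Arousal" and "Pleasure", modulating the reward in a conventional tabular Q-learning update. The function name motivation(), the weights W_AROUSAL and W_PLEASURE, and the reward-shaping form are all hypothetical; the authors' actual motivation function may differ.

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
    W_AROUSAL, W_PLEASURE = 0.5, 0.5        # assumed weights for the two affective factors

    Q = defaultdict(float)                   # Q[(state, action)] -> estimated value, default 0.0

    def motivation(arousal, pleasure):
        """Hypothetical motivation function: a weighted blend of the two
        affective factors, each assumed to lie in [-1, 1]."""
        return W_AROUSAL * arousal + W_PLEASURE * pleasure

    def choose_action(state, actions):
        """Epsilon-greedy action selection over the Q-table."""
        if random.random() < EPSILON:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state, actions, arousal, pleasure):
        """One Q-learning step with the reward scaled by the motivation value
        (an assumed shaping scheme, not the paper's exact formulation)."""
        shaped = reward * (1.0 + motivation(arousal, pleasure))
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += ALPHA * (shaped + GAMMA * best_next - Q[(state, action)])

Under this sketch, a positive motivation value amplifies environmental reward and a negative one dampens it, which is one plausible way affective state could speed up or stabilize learning relative to plain QL.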