A Novel Method for Training an Echo State Network with Feedback-Error Learning

DOI: 10.1155/2013/891501

Abstract:

Echo state networks are a relatively new type of recurrent neural network that has shown great potential for solving non-linear, temporal problems. The basic idea is to transform the low-dimensional temporal input into a higher-dimensional state and then train the output connection weights to make the system output the target information. Because only the output weights are altered, training is typically quick and computationally efficient compared to training of other recurrent neural networks. This paper investigates using an echo state network to learn the inverse kinematics model of a robot simulator with feedback-error-learning. In this scheme teacher forcing is not perfect, and joint constraints on the simulator make the feedback error inaccurate. A novel training method that is less influenced by the noise in the training data is proposed and compared to the traditional ESN training method.

1. Introduction

A recurrent neural network (RNN) is a neural network with feedback connections. Mathematically, RNNs implement dynamical systems, and in theory they can approximate arbitrary dynamical systems with arbitrary precision [1]. This makes them “in principle promising” as solutions for difficult temporal tasks, but in practice, supervised training of RNNs is difficult and computationally expensive. Echo state networks (ESNs) were proposed as a cheap and fast architectural and supervised learning scheme and have therefore been suggested as useful for solving real problems [2]. The basic idea is to transform the low-dimensional temporal input into a higher-dimensional echo state and then train the output connection weights to make the system output the desired information. The idea was independently developed by Maass [3] and Jaeger [4] as the liquid state machine (LSM) and the echo state machine (ESM), respectively. LSMs and ESMs, together with the more recently explored Backpropagation-Decorrelation learning rule for RNNs [5], are given the generic term reservoir computing [6]. Typically, large, complex RNNs are used as reservoirs, and their function resembles a tank of liquid. One can think of the input as stones thrown into the liquid, creating unique ripples that propagate, interact, and eventually fade away. After learning how to read the water’s surface, one can extract a lot of information about recent events, without having to do the complex input integration. Real water has successfully been used as a reservoir [7]. Because only the output weights are altered, training is typically quick and computationally efficient compared to training of other recurrent neural networks.
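To make the reservoir-computing idea concrete, the following is a minimal Python sketch of an ESN in which the input and reservoir weights stay fixed and only the linear readout is trained. The network sizes, weight scaling, spectral radius, and ridge parameter are illustrative assumptions, and the placeholder data and ridge-regression readout (in the spirit of [18]) stand in for, rather than reproduce, the feedback-error-learning setup and the training method proposed in the paper.

import numpy as np

# Hypothetical sizes for a small ESN; the paper's actual network is not reproduced here.
n_in, n_res, n_out = 3, 200, 2
rng = np.random.default_rng(0)
W_in = rng.uniform(-0.1, 0.1, size=(n_res, n_in))   # input weights (fixed, random)
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))     # reservoir weights (fixed, random)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # scale spectral radius below 1 (echo state property)

def collect_states(inputs):
    # Drive the reservoir with the input sequence and record the echo states.
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Placeholder training data; in the paper the teacher signal would come from feedback-error-learning.
inputs = rng.standard_normal((500, n_in))
targets = rng.standard_normal((500, n_out))

# Only the output (readout) weights are fitted, here with ridge regression (cf. [18]).
X = collect_states(inputs)                          # (T, n_res) state matrix
beta = 1e-6                                         # regularization strength (assumed)
W_out = np.linalg.solve(X.T @ X + beta * np.eye(n_res), X.T @ targets).T

# The trained network's output is a linear readout of the current echo state.
predictions = X @ W_out.T

Because the fit reduces to a single linear least-squares problem over the collected states, this is what makes ESN training quick compared to gradient-based training of the full recurrent network.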

References

[1]  K. Doya, “Universality of fully connected recurrent neural networks,” Tech. Rep., University of California, San Diego, Calif, USA, 1993, Submitted to: IEEE Transactions on Neural Networks.
[2]  M. Lukoševičius and H. Jaeger, “Reservoir computing approaches to recurrent neural network training,” Computer Science Review, vol. 3, no. 3, pp. 127–149, 2009.
[3]  T. Natschläger, W. Maass, and H. Markram, “The ‘liquid computer’: a novel strategy for real-time computing on time series,” Special Issue on Foundations of Information Processing of TELEMATIK, vol. 8, no. 1, pp. 39–43, 2002.
[4]  H. Jaeger, “A tutorial on training recurrent neural networks, covering BPPT, RTRL, and the echo state network approach,” Tech. Rep., Fraunhofer Institute for Autonomous Intelligent Systems, Sankt Augustin, Germany, 2002.
[5]  J. J. Steil, “Backpropagation-decorrelation: online recurrent learning with O(N) complexity,” in Proceedings of IEEE International Joint Conference on Neural Networks (IJCNN '04), pp. 843–848, July 2004.
[6]  B. Schrauwen, D. Verstraeten, and J. van Campenhout, “An overview of reservoir computing: theory, applications and implementations,” in Proceedings of the 15th European Symposium on Artificial Neural Networks, vol. 4, pp. 471–482, 2007.
[7]  C. Fernando and S. Sojakka, “Pattern recognition in a bucket,” in Advances in Artificial Life, Lecture Notes in Computer Science, pp. 588–597, Springer, Berlin, Germany, 2003.
[8]  D. Nguyen-Tuong and J. Peters, “Model learning for robot control: a survey,” Cognitive Processing, vol. 12, no. 4, pp. 319–340, 2011.
[9]  M. Oubbati, M. Schanz, and P. Levi, “Kinematic and dynamic adaptive control of a nonholonomic mobile robot using a RNN,” in Proceedings of IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA '05), pp. 27–33, June 2005.
[10]  M. Kawato, “Feedback-error-learning neural network for supervised motor learning,” in Advanced Neural Computers, R. Eckmiller, Ed., pp. 365–372, Elsevier, Amsterdam, The Netherlands, 1990.
[11]  M. I. Jordan and D. E. Rumelhart, “Forward models: supervised learning with a distal teacher,” Cognitive Science, vol. 16, no. 3, pp. 307–354, 1992.
[12]  M. Kawato, “Internal models for motor control and trajectory planning,” Current Opinion in Neurobiology, vol. 9, no. 6, pp. 718–727, 1999.
[13]  R. A. Løvlid, “Learning to imitate YMCA with an ESN,” in Proceedings of the 22nd International Conference on Artificial Neural Networks and Machine Learning (ICANN '12), Lecture Notes in Computer Science, pp. 507–514, Springer, 2012.
[14]  R. A. Løvlid, “Learning motor control by dancing YMCA,” IFIP Advances in Information and Communication Technology, vol. 331, pp. 79–88, 2010.
[15]  A. Tidemann and P. Öztürk, “Self-organizing multiple models for imitation: teaching a robot to dance the YMCA,” in IEA/AIE, vol. 4570 of Lecture Notes in Computer Science, pp. 291–302, Springer, Berlin, Germany, 2007.
[16]  H. Jaeger, et al., “Simple toolbox for ESNs,” 2009, http://reservoir-computing.org/software.
[17]  F. Toutounian and A. Ataei, “A new method for computing Moore-Penrose inverse matrices,” Journal of Computational and Applied Mathematics, vol. 228, no. 1, pp. 412–417, 2009.
[18]  F. Wyffels, B. Schrauwen, and D. Stroobandt, “Stable output feedback in reservoir computing using ridge regression,” in Proceedings of the 18th International Conference on Artificial Neural Networks, Part I (ICANN '08), pp. 808–817, Springer, 2008.
[19]  H. Jaeger, “The echo state approach to analysing and training recurrent neural networks,” Tech. Rep., GMD, 2001.
