OALib Journal
ISSN: 2333-9721

A Radial Basis Function Spike Model for Indirect Learning via Integrate-and-Fire Sampling and Reconstruction Techniques

DOI: 10.1155/2012/713581


Abstract:

This paper presents a deterministic and adaptive spike model derived from radial basis functions and a leaky integrate-and-fire sampler, developed for training spiking neural networks without direct weight manipulation. Several algorithms have been proposed for training spiking neural networks through biologically plausible learning mechanisms, such as spike-timing-dependent synaptic plasticity and Hebbian plasticity. These algorithms typically rely on the ability to update the synaptic strengths, or weights, directly, through a weight-update rule in which the weight increment can be decided and implemented based on the training equations. However, in several potential applications of adaptive spiking neural networks, including neuroprosthetic devices and CMOS/memristor nanoscale neuromorphic chips, the weights cannot be manipulated directly and, instead, tend to change over time by virtue of the pre- and postsynaptic neural activity. This paper presents an indirect learning method that induces changes in the synaptic weights by modulating spike-timing-dependent plasticity by means of controlled input spike trains. In place of the weights, the algorithm manipulates the input spike trains used to stimulate the input neurons by determining a sequence of spike timings that minimizes a desired objective function and, indirectly, induces the desired synaptic plasticity in the network.

1. Introduction

This paper presents a deterministic and adaptive spike model obtained from radial basis functions (RBFs) and a leaky integrate-and-fire (LIF) sampler for the purpose of training spiking neural networks (SNNs) without directly manipulating the synaptic weights. Spiking neural networks are computational models of biological neurons, composed of systems of differential equations that can reproduce some of the spike patterns and dynamics observed in real neuronal networks [1, 2].
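As a rough illustration of the sampling-and-reconstruction idea, the sketch below encodes a continuous signal into spike times with a leaky integrate-and-fire sampler and then superposes Gaussian radial basis functions at the spike times to form a crude reconstruction. All parameter names and values (`tau`, `threshold`, the RBF `width`) are illustrative assumptions, not the model derived in the paper.

```python
import numpy as np

def lif_sample(u, dt, tau=0.5, threshold=0.3):
    """Leaky integrate-and-fire sampler: integrate u(t) with leak rate
    1/tau and record a spike time whenever the membrane state crosses
    `threshold`, resetting to zero after each spike."""
    v = 0.0
    spike_times = []
    for k, u_k in enumerate(u):
        v += dt * (-v / tau + u_k)  # forward-Euler step of dv/dt = -v/tau + u(t)
        if v >= threshold:
            spike_times.append(k * dt)
            v = 0.0  # reset after firing
    return np.array(spike_times)

def rbf_reconstruct(spike_times, t, width=0.05):
    """Crude reconstruction: a superposition of Gaussian radial basis
    functions centered at the spike times."""
    out = np.zeros_like(t)
    for s in spike_times:
        out += np.exp(-((t - s) ** 2) / (2.0 * width ** 2))
    return out

# Encode a slowly varying positive signal and reconstruct it.
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
u = 2.0 + np.sin(2.0 * np.pi * 3.0 * t)
spikes = lif_sample(u, dt)
u_hat = rbf_reconstruct(spikes, t)
```

A larger input drives the membrane state to threshold sooner, so the local spike density tracks the signal amplitude; this is the property the reconstruction step relies on.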
Recently, SNNs have also been shown capable of simulating sigmoidal artificial neural networks (ANNs) and of solving small-dimensional nonlinear function approximation problems through reinforcement learning [3–5]. Like all ANN learning techniques, existing SNN training algorithms rely on the direct manipulation of the synaptic weights [4–9]. In other words, the learning algorithms typically include a weight-update rule by which the synaptic weights are updated over several iterations, based on the reinforcement signal or network performance. In many potential SNN applications, including neuroprosthetic devices, light-sensitive neuronal networks grown in vitro, and CMOS/memristor nanoscale
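The plasticity mechanism that indirect learning exploits, namely weights that change as a function of pre- and postsynaptic spike timing, is commonly modeled with a pair-based STDP rule, sketched below. The constants are typical textbook values in the spirit of Song et al. [21], not parameters from this paper.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.010, a_minus=0.012,
            tau_plus=0.020, tau_minus=0.020):
    """Pair-based STDP increment for one pre/post spike pair (times in
    seconds): causal pairs (pre before post) potentiate the synapse,
    anti-causal pairs depress it, with exponential decay in the timing gap."""
    dt = t_post - t_pre
    if dt >= 0.0:
        return a_plus * math.exp(-dt / tau_plus)   # potentiation (LTP)
    return -a_minus * math.exp(dt / tau_minus)     # depression (LTD)

# A presynaptic spike 5 ms before the postsynaptic spike strengthens
# the synapse; reversing the order weakens it.
ltp = stdp_dw(t_pre=0.000, t_post=0.005)
ltd = stdp_dw(t_pre=0.005, t_post=0.000)
```

Because the weight increment depends only on spike timings, choosing the input spike trains controls the sign and magnitude of each update, which is the lever the indirect method pulls.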

References

[1]  J. J. B. Jack, D. Noble, and R. Tsien, Electric Current Flow in Excitable Cells, Oxford University Press, Oxford, UK, 1st edition, 1975.
[2]  A. L. Hodgkin and A. F. Huxley, “A quantitative description of membrane current and its application to conduction and excitation in nerve,” The Journal of Physiology, vol. 117, no. 4, pp. 500–544, 1952.
[3]  W. Maass, “Noisy spiking neurons with temporal coding have more computational power than sigmoidal neurons,” Advances in Neural Information Processing Systems, vol. 9, pp. 211–217, 1997.
[4]  C. M. A. Pennartz, “Reinforcement learning by Hebbian synapses with adaptive thresholds,” Neuroscience, vol. 81, no. 2, pp. 303–319, 1997.
[5]  S. Ferrari, B. Mehta, G. Di Muro, A. M. J. VanDongen, and C. Henriquez, “Biologically realizable reward-modulated hebbian training for spiking neural networks,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '08), pp. 1780–1786, Hong Kong, June 2008.
[6]  R. Legenstein, C. Naeger, and W. Maass, “What can a neuron learn with spike-timing-dependent plasticity?” Neural Computation, vol. 17, no. 11, pp. 2337–2382, 2005.
[7]  J. P. Pfister, T. Toyoizumi, D. Barber, and W. Gerstner, “Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning,” Neural Computation, vol. 18, no. 6, pp. 1318–1348, 2006.
[8]  R. V. Florian, “Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity,” Neural Computation, vol. 19, no. 6, pp. 1468–1502, 2007.
[9]  S. G. Wysoski, L. Benuskova, and N. Kasabov, “Adaptive learning procedure for a network of spiking neurons and visual pattern recognition,” in Proceedings of the 8th International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS '06), vol. 4179 of Lecture Notes in Computer Science, pp. 1133–1142, Antwerp, Belgium, September 2006.
[10]  S. H. Jo, T. Chang, I. Ebong, B. B. Bhadviya, P. Mazumder, and W. Lu, “Nanoscale memristor device as synapse in neuromorphic systems,” Nano Letters, vol. 10, no. 4, pp. 1297–1301, 2010.
[11]  A. M. VanDongen, “Vandongen laboratory,” http://www.vandongen-lab.com/.
[12]  T. J. Van De Ven, H. M. A. VanDongen, and A. M. J. VanDongen, “The nonkinase phorbol ester receptor α1-chimerin binds the NMDA receptor NR2A subunit and regulates dendritic spine density,” Journal of Neuroscience, vol. 25, no. 41, pp. 9488–9496, 2005.
[13]  P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, MIT Press, Cambridge, Mass, USA, 2001.
[14]  W. Gerstner and W. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity, Cambridge University Press, Cambridge, UK, 2006.
[15]  G. Foderaro, C. Henriquez, and S. Ferrari, “Indirect training of a spiking neural network for flight control via spike-timing-dependent synaptic plasticity,” in Proceedings of the 49th IEEE Conference on Decision and Control (CDC '10), pp. 911–917, Atlanta, Ga, USA, December 2010.
[16]  A. Aldroubi and K. Gröchenig, “Nonuniform sampling and reconstruction in shift-invariant spaces,” SIAM Review, vol. 43, no. 4, pp. 585–620, 2001.
[17]  A. A. Lazar and L. T. Toth, “A Toeplitz formulation of a real-time algorithm for time decoding machines,” in Proceedings of the Telecommunication Systems, Modeling and Analysis Conference, 2003.
[18]  E. M. Izhikevich, “Which model to use for cortical spiking neurons?” IEEE Transactions on Neural Networks, vol. 15, no. 5, pp. 1063–1070, 2004.
[19]  E. M. Izhikevich, “Simple model of spiking neurons,” IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569–1572, 2003.
[20]  A. N. Burkitt, “A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input,” Biological Cybernetics, vol. 95, no. 1, pp. 1–19, 2006.
[21]  S. Song, K. D. Miller, and L. F. Abbott, “Competitive Hebbian learning through spike-timing-dependent synaptic plasticity,” Nature Neuroscience, vol. 3, no. 9, pp. 919–926, 2000.
[22]  P. J. Sjöström, G. Turrigiano, and S. Nelson, “Rate, timing, and cooperativity jointly determine cortical synaptic plasticity,” Neuron, vol. 32, no. 6, pp. 1149–1164, 2001.
[23]  S. Ferrari and R. Stengel, “Model-based adaptive critic designs,” in Learning and Approximate Dynamic Programming, J. Si, A. Barto, and W. Powell, Eds., John Wiley & Sons, 2004.
[24]  R. E. Bellman, Dynamic Programming, Princeton University Press, Princeton, NJ, USA, 1957.
[25]  R. Howard, Dynamic Programming and Markov Processes, MIT Press, Cambridge, Mass, USA, 1960.
[26]  A. M. J. VanDongen, J. Codina, J. Olate et al., “Newly identified brain potassium channels gated by the guanine nucleotide binding protein G(o),” Science, vol. 242, no. 4884, pp. 1433–1437, 1988.
[27]  J. E. Dennis and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, SIAM, Englewood Cliffs, NJ, USA, 1996.
[28]  H. G. Feichtinger, J. C. Príncipe, J. L. Romero, A. Singh Alvarado, and G. A. Velasco, “Approximate reconstruction of bandlimited functions for the integrate and fire sampler,” Advances in Computational Mathematics, vol. 36, no. 1, pp. 67–78, 2012.
[29]  R. F. Stengel, Optimal Control and Estimation, Dover Publications, Inc., 1986.

