Quantitative Tools for Examining the Vocalizations of Juvenile Songbirds

DOI: 10.1155/2012/261010


Abstract:

The singing of juvenile songbirds is highly variable and not well stereotyped, a feature that makes it difficult to analyze with existing computational techniques. We present here a method suitable for analyzing such vocalizations, windowed spectral pattern recognition (WSPR). Rather than performing pairwise sample comparisons, WSPR measures the typicality of a sample against a large sample set. We also illustrate how WSPR can be used to perform a variety of tasks, such as sample classification, song ontogeny measurement, and song variability measurement. Finally, we present a novel measure, based on WSPR, for quantifying the apparent complexity of a bird’s singing.

1. Introduction

A bird’s song can be a powerful marker of identity, used by other birds—and humans—to identify the singer’s species or even to identify a single individual. In many species this song is innate, but for the Oscine songbirds, every bird must acquire its own song [1, 2]. With one such bird, the zebra finch (Taeniopygia guttata), it is the males that sing, and juvenile males learn their song from nearby adults such as their father [3]. The learning process has two overlapping but distinct parts: in the first, the animal hears the songs of other birds and somehow commits to memory a model of the song it will sing; in the second, the animal learns how to produce a version of this memorised song through practice [1].

As adults, zebra finches sing in bouts during which they perform their single song motif a variable number of times. The song motif of a zebra finch is on the order of one second long and is composed of multiple syllables, elements separated by silence or a sharp drop in amplitude. Syllables can often be broken down further into notes, segments of distinct sound quality. These notes may demonstrate pronounced frequency modulation and complex harmonics. Adult zebra finches typically exhibit a very high degree of stereotypy in their song, with one performance of the song’s motif being very similar to any other. Two typical examples are shown in Figure 1.

Figure 1: (a) Spectrogram of a bout of singing from an adult zebra finch. Noted in the figure are the following song parts: introductory notes, underlined in red; syllables, underlined in green; the silent interval between syllables, underlined in yellow. The blue lines mark the repetitions of the bird’s motif. Note that each performance of the motif appears much like the others, except for the truncated final motif. (b) Spectrogram of a bout of singing from a different zebra finch. Although its song is also highly
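The abstract describes WSPR only at a high level: build a model from a large sample set and then score how typical a new vocalization is of that set, rather than comparing samples pairwise. The sketch below is one plausible reading of that idea, combining a short-time Fourier transform front end with vector quantization (cf. [28, 29]) and a mean log-probability typicality score. All function names, parameters, and the scoring rule are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.signal import stft
from scipy.cluster.vq import kmeans2, vq


def spectral_windows(audio, fs, nperseg=256, win_len=8):
    """Slice a log-magnitude spectrogram into short overlapping windows (feature vectors)."""
    _, _, spec = stft(audio, fs=fs, nperseg=nperseg)
    logmag = np.log1p(np.abs(spec))            # (freq_bins, time_frames)
    frames = logmag.T                          # one row per time frame
    windows = [frames[i:i + win_len].ravel()   # stack win_len consecutive frames
               for i in range(len(frames) - win_len + 1)]
    return np.array(windows)


def train_codebook(training_clips, fs, k=64):
    """Vector-quantize windows from many training clips into k codewords and
    record how often each codeword occurs in the training set."""
    all_windows = np.vstack([spectral_windows(clip, fs) for clip in training_clips])
    codebook, labels = kmeans2(all_windows, k, minit='++', seed=0)
    counts = np.bincount(labels, minlength=k).astype(float)
    probs = (counts + 1.0) / (counts.sum() + k)  # smoothed codeword frequencies
    return codebook, probs


def typicality(clip, fs, codebook, probs):
    """Score a clip by the mean log-probability of its quantized windows:
    higher values mean the clip looks more like the training set."""
    windows = spectral_windows(clip, fs)
    labels, _ = vq(windows, codebook)
    return float(np.mean(np.log(probs[labels])))

Under these assumptions, the classification task mentioned in the abstract could be approximated by training one codebook per bird (or per age group) and assigning a new clip to whichever codebook gives it the highest typicality score; the paper's actual classifier and complexity measure may be defined differently.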

References

[1]  P. Marler, “Song learning: the interface between behaviour and neuroethology,” Philosophical Transactions of the Royal Society of London Series B, vol. 329, no. 1253, pp. 109–114, 1990.
[2]  F. Nottebohm, “The origins of vocal learning,” American Naturalist, vol. 106, pp. 116–140, 1972.
[3]  J. Böhner, “Song learning in the zebra finch (Taeniopygia guttata): selectivity in the choice of a tutor and accuracy of song copies,” Animal Behaviour, vol. 31, no. 1, pp. 231–237, 1983.
[4]  A. J. Doupe and P. K. Kuhl, “Birdsong and human speech: common themes and mechanisms,” Annual Review of Neuroscience, vol. 22, pp. 567–631, 1999.
[5]  K. Immelmann, N. W. Cayley, and A. H. Chisholm, Australian Finches in Bush and Aviary, Angus and Robertson, Sydney, Australia, 1967.
[6]  R. Specht, Avisoft-SASLab Pro, 2004.
[7]  The Cornell Lab of Ornithology, Raven: Interactive sound analysis software, 2010.
[8]  H. Sakoe and S. Chiba, “Dynamic programming algorithm optimization for spoken word recognition,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 26, no. 1, pp. 43–49, 1978.
[9]  J. A. Kogan and D. Margoliash, “Automated recognition of bird song elements from continuous recordings using dynamic time warping and hidden Markov models: a comparative study,” Journal of the Acoustical Society of America, vol. 103, no. 4, pp. 2185–2196, 1998.
[10]  O. Tchernichovski, F. Nottebohm, C. E. Ho, B. Pesaran, and P. P. Mitra, “A procedure for an automated measurement of song similarity,” Animal Behaviour, vol. 59, no. 6, pp. 1167–1176, 2000.
[11]  L. Ranjard, M. G. Anderson, M. J. Rayner et al., “Bioacoustic distances between the begging calls of brood parasites and their host species: a comparison of metrics and techniques,” Behavioral Ecology and Sociobiology, vol. 64, no. 11, pp. 1915–1926, 2010.
[12]  S. Saar and P. P. Mitra, “A technique for characterizing the development of rhythms in bird song,” PLoS One, vol. 3, no. 1, Article ID e1461, 2008.
[13]  O. Feher, H. Wang, S. Saar, P. P. Mitra, and O. Tchernichovski, “De novo establishment of wild-type song culture in the zebra finch,” Nature, vol. 459, no. 7246, pp. 564–568, 2009.
[14]  J. Coalson, FLAC—free lossless audio codec, 2007.
[15]  Audacity Development Team, Audacity: free audio editor and recorder, 2010.
[16]  P. J. Rousseeuw, “Silhouettes: a graphical aid to the interpretation and validation of cluster analysis,” Journal of Computational and Applied Mathematics, vol. 20, pp. 53–65, 1987.
[17]  M. Maechler, P. J. Rousseeuw, A. Struyf, and M. Hubert, Cluster analysis basics and extensions, 2005.
[18]  R Development Core Team, R: a language and environment for statistical computing, 2010.
[19]  B. W. Matthews, “Comparison of the predicted and observed secondary structure of T4 phage lysozyme,” Biochimica et Biophysica Acta, vol. 405, no. 2, pp. 442–451, 1975.
[20]  J. A. Hartigan and M. A. Wong, “Algorithm AS 136: a K-means clustering algorithm,” Journal of the Royal Statistical Society Series C, vol. 28, pp. 100–108, 1979.
[21]  D. C. Airey and T. J. DeVoogd, “Greater song complexity is associated with augmented song system anatomy in zebra finches,” NeuroReport, vol. 11, no. 8, pp. 1749–1754, 2000.
[22]  G. Tononi, O. Sporns, and G. M. Edelman, “A measure for brain complexity: relating functional segregation and integration in the nervous system,” Proceedings of the National Academy of Sciences of the United States of America, vol. 91, no. 11, pp. 5033–5037, 1994.
[23]  J. P. Crutchfield and K. Young, “Inferring statistical complexity,” Physical Review Letters, vol. 63, no. 2, pp. 105–108, 1989.
[24]  P. Grassberger, “Toward a quantitative theory of self-generated complexity,” International Journal of Theoretical Physics, vol. 25, no. 9, pp. 907–938, 1986.
[25]  W. Bialek, I. Nemenman, and N. Tishby, “Predictability, complexity, and learning,” Neural Computation, vol. 13, no. 11, pp. 2409–2463, 2001.
[26]  T. M. Cover and J. A. Thomas, Elements of Information Theory, John Wiley & Sons, New York, NY, USA, 1991.
[27]  S. Kullback and R. A. Leibler, “On information and sufficiency,” The Annals of Mathematical Statistics, vol. 22, pp. 79–86, 1951.
[28]  R. M. Gray, “Vector quantization,” IEEE ASSP Magazine, vol. 1, no. 2, pp. 4–29, 1984.
[29]  S. H. Nawab and T. F. Quatieri, “Short-time Fourier transform,” in Advanced Topics in Signal Processing, J. Lim and A. Oppenheim, Eds., pp. 289–337, Prentice Hall, Upper Saddle River, NJ, USA, 1987.
