OALib Journal
ISSN: 2333-9721
Increased Signal Complexity Improves the Breadth of Generalization in Auditory Perceptual Learning

DOI: 10.1155/2013/879047


Abstract:

Perceptual learning can be specific to a trained stimulus or can generalize to novel stimuli, and the breadth of generalization is critical for how we structure perceptual training programs. Adapting an established auditory interval discrimination paradigm to utilise complex signals, we trained human adults on a standard interval for either 2, 4, or 10 days. We then tested the standard, alternate-frequency, alternate-interval, and stereo input conditions to evaluate the rapidity of specific learning and the breadth of generalization over the time course. In comparison with previous research using simple stimuli, perceptual learning was more rapid and generalization was greater in magnitude, including novel generalization to an alternate temporal interval within stimulus type. We also investigated the long-term maintenance of learning and found that specific and generalized learning was maintained over 3 and 6 months. We discuss these findings with regard to stimulus complexity in perceptual learning and how they can inform the development of effective training protocols.

1. Introduction

Animals improve in the extraction and encoding of sensory information from the environment through perceptual learning. Psychophysical studies have established that practicing a task leads to specific improvements that are often restricted to the stimuli used during training [1, 2]. While these paradigms typically utilise simple unisensory stimuli, the reverse hierarchy theory of perceptual learning is consistent with evidence that the “default” setting in perception is one of higher-order complex objects. For example, it is ecologically unusual to encounter simple pure tones in isolation; far more common are the complex frequency changes present in vocal communication, such as birdsong and human speech [3–5].
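To make the interval discrimination paradigm concrete, the following is a minimal sketch of how an "empty" interval (two brief tone markers bounding a silent gap) and a "filled" interval (a continuous tone of the same duration) might be synthesized. All parameter values here (44.1 kHz sample rate, 15 ms markers, 5 ms onset/offset ramps, 1 kHz carrier) are illustrative assumptions, not the stimulus parameters used in this study.

```python
import numpy as np

SR = 44100  # sample rate in Hz; assumed for illustration


def tone(freq_hz, dur_s, sr=SR):
    """Pure-tone marker with simple 5 ms linear onset/offset ramps."""
    t = np.arange(int(sr * dur_s)) / sr
    sig = np.sin(2 * np.pi * freq_hz * t)
    ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.005)
    return sig * ramp


def empty_interval(freq_hz, interval_s, marker_s=0.015, sr=SR):
    """Two brief tone markers separated by a silent gap: the 'empty'
    interval whose duration the listener must discriminate."""
    gap = np.zeros(int(sr * interval_s))
    marker = tone(freq_hz, marker_s, sr)
    return np.concatenate([marker, gap, marker])


def filled_interval(freq_hz, interval_s, sr=SR):
    """A single continuous tone spanning the target duration ('filled')."""
    return tone(freq_hz, interval_s, sr)


# A hypothetical 100 ms standard versus a longer comparison at 1 kHz
standard = empty_interval(1000, 0.100)
comparison = empty_interval(1000, 0.130)
```

In a two-interval forced-choice trial, the listener would hear the standard and a comparison in random order and report which contained the longer interval; the empty/filled distinction above is the stimulus-type boundary across which Lapid and colleagues [14] reported generalization.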
Auditory research shows that while specific learning is found in most tasks, generalization to novel stimuli is generally restricted to spectral features of the stimuli [6–11]. In contrast, generalization to temporal stimulus features appears to be very limited, although it has been found for transfer from interval to duration discrimination within the same stimulus length, and for onset/offset asynchrony [12, 13]. With regard to generalization to new intervals/durations, although Lapid and colleagues reported such generalization [14], this contrasts with the majority of studies, in which no such transfer of learning is found [11, 12, 15]; notably, Lapid's study demonstrated generalization across stimulus types (empty to filled). This limitation of

References

[1]  A. Fiorentini and N. Berardi, “Perceptual learning specific for orientation and spatial frequency,” Nature, vol. 287, no. 5777, pp. 43–44, 1980.
[2]  S. P. McKee and G. Westheimer, “Improvement in vernier acuity with practice,” Perception and Psychophysics, vol. 24, no. 3, pp. 258–262, 1979.
[3]  G. F. Ball and S. H. Hulse, “Birdsong,” The American Psychologist, vol. 53, no. 1, pp. 37–58, 1998.
[4]  A. J. Doupe and P. K. Kuhl, “Birdsong and human speech: common themes and mechanisms,” Annual Review of Neuroscience, vol. 22, pp. 567–631, 1999.
[5]  R. H. Fitch, S. Miller, and P. Tallal, “Neurobiology of speech perception,” Annual Review of Neuroscience, vol. 20, pp. 331–353, 1997.
[6]  M. B. Fitzgerald and B. A. Wright, “A perceptual learning investigation of the pitch elicited by amplitude-modulated noise,” Journal of the Acoustical Society of America, vol. 118, no. 6, pp. 3794–3803, 2005.
[7]  S. Amitay, D. J. C. Hawkey, and D. R. Moore, “Auditory frequency discrimination learning is affected by stimulus variability,” Perception and Psychophysics, vol. 67, no. 4, pp. 691–698, 2005.
[8]  L. Demany and C. Semal, “Learning to perceive pitch differences,” Journal of the Acoustical Society of America, vol. 111, no. 3, pp. 1377–1388, 2002.
[9]  D. R. F. Irvine, R. L. Martin, E. Klimkeit, and R. Smith, “Specificity of perceptual learning in a frequency discrimination task,” Journal of the Acoustical Society of America, vol. 108, no. 6, pp. 2964–2968, 2000.
[10]  C. Micheyl, J. G. W. Bernstein, and A. J. Oxenham, “Detection and F0 discrimination of harmonic complex tones in the presence of competing tones or noise,” Journal of the Acoustical Society of America, vol. 120, no. 3, pp. 1493–1505, 2006.
[11]  B. A. Wright, R. M. Wilson, and A. T. Sabin, “Generalization lags behind learning on an auditory perceptual task,” Journal of Neuroscience, vol. 30, no. 35, pp. 11635–11639, 2010.
[12]  U. R. Karmarkar and D. V. Buonomano, “Temporal specificity of perceptual learning in an auditory discrimination task,” Learning and Memory, vol. 10, no. 2, pp. 141–147, 2003.
[13]  J. A. Mossbridge, B. N. Scissors, and B. A. Wright, “Learning and generalization on asynchrony and order tasks at sound offset: implications for underlying neural circuitry,” Learning and Memory, vol. 15, no. 1, pp. 13–20, 2008.
[14]  E. Lapid, R. Ulrich, and T. Rammsayer, “Perceptual learning in auditory temporal discrimination: no evidence for a cross-modal transfer to the visual modality,” Psychonomic Bulletin and Review, vol. 16, no. 2, pp. 382–389, 2009.
[15]  B. A. Wright, D. V. Buonomano, H. W. Mahncke, and M. M. Merzenich, “Learning and generalization of auditory temporal-interval discrimination in humans,” Journal of Neuroscience, vol. 17, no. 10, pp. 3956–3963, 1997.
[16]  B. A. Wright and A. T. Sabin, “Perceptual learning: how much daily training is enough?” Experimental Brain Research, vol. 180, no. 4, pp. 727–736, 2007.
[17]  M. Ahissar, “Perceptual training: a tool for both modifying the brain and exploring it,” Proceedings of the National Academy of Sciences of the United States of America, vol. 98, no. 21, pp. 11842–11843, 2001.
[18]  M. Ahissar and S. Hochstein, “The reverse hierarchy theory of visual perceptual learning,” Trends in Cognitive Sciences, vol. 8, no. 10, pp. 457–464, 2004.
[19]  M. Ahissar, M. Nahum, I. Nelken, and S. Hochstein, “Reverse hierarchies and sensory learning,” Philosophical Transactions of the Royal Society B, vol. 364, no. 1515, pp. 285–299, 2009.
[20]  P. B. L. Meijer, “An experimental system for auditory image representations,” IEEE Transactions on Biomedical Engineering, vol. 39, no. 2, pp. 112–121, 1992.
[21]  A. Amedi, W. M. Stern, J. A. Camprodon et al., “Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex,” Nature Neuroscience, vol. 10, no. 6, pp. 687–689, 2007.
[22]  D. Brown, T. Macpherson, and J. Ward, “Seeing with sound? Exploring different characteristics of a visual-to-auditory sensory substitution device,” Perception, vol. 40, no. 9, pp. 1120–1135, 2011.
[23]  M. J. Proulx, P. Stoerig, E. Ludowig, and I. Knoll, “Seeing “where” through the ears: effects of learning-by-doing and long-term sensory deprivation on localization based on image-to-sound substitution,” PLoS ONE, vol. 3, no. 3, Article ID e1840, 2008.
[24]  T. H. Rammsayer and D. Leutner, “Temporal discrimination as a function of marker duration,” Perception and Psychophysics, vol. 58, no. 8, pp. 1213–1223, 1996.
[25]  A. R. Bradlow, R. Akahane-Yamada, D. B. Pisoni, and Y. Tohkura, “Training Japanese listeners to identify English /r/ and /l/: long-term retention of learning in perception and production,” Perception and Psychophysics, vol. 61, no. 5, pp. 977–985, 1999.
[26]  J. A. Mossbridge, M. B. Fitzgerald, E. S. O'Connor, and B. A. Wright, “Perceptual-learning evidence for separate processing of asynchrony and order tasks,” Journal of Neuroscience, vol. 26, no. 49, pp. 12708–12716, 2006.
[27]  D. H. Brainard, “The psychophysics toolbox,” Spatial Vision, vol. 10, no. 4, pp. 433–436, 1997.
[28]  M. Kleiner, D. Brainard, and D. Pelli, “What's new in psychtoolbox-3?” Perception, vol. 36, 2007, ECVP Abstract Supplement.
[29]  D. G. Pelli, “The VideoToolbox software for visual psychophysics: transforming numbers into movies,” Spatial Vision, vol. 10, no. 4, pp. 437–442, 1997.
[30]  J. I. Skipper, V. van Wassenhove, H. C. Nusbaum, and S. L. Small, “Hearing lips and seeing voices: how cortical areas supporting speech production mediate audiovisual speech perception,” Cerebral Cortex, vol. 17, no. 10, pp. 2387–2399, 2007.
[31]  V. van Wassenhove, K. W. Grant, and D. Poeppel, “Visual speech speeds up the neural processing of auditory speech,” Proceedings of the National Academy of Sciences of the United States of America, vol. 102, no. 4, pp. 1181–1186, 2005.
[32]  R. J. Zatorre, M. Bouffard, P. Ahad, and P. Belin, “Where is “where” in the human auditory cortex?” Nature Neuroscience, vol. 5, no. 9, pp. 905–909, 2002.
[33]  L. Shams and A. R. Seitz, “Benefits of multisensory learning,” Trends in Cognitive Sciences, vol. 12, no. 11, pp. 411–417, 2008.
[34]  M. J. Proulx, D. J. Brown, A. Pasqualotto, and P. Meijer, “Multisensory perceptual learning and sensory substitution,” Neuroscience and Biobehavioral Reviews, 2012.
[35]  J. Ward and P. Meijer, “Visual experiences in the blind induced by an auditory sensory substitution device,” Consciousness and Cognition, vol. 19, no. 1, pp. 492–500, 2010.
