Sound-Environment Monitoring Method Based on Computational Auditory Scene Analysis

DOI: 10.4236/jsip.2017.82005, pp. 65-77

Keywords: Sound-Environment Visualization, Environmental Sounds, Monitoring, Painted Sound Patterns, Synesthesia


Abstract:

Monitoring techniques are key to assessing conditions in various scenarios, e.g., structural health, weather, and disasters. Understanding such scenarios requires extracting appropriate features from the observed data. This paper proposes a monitoring method that expresses a sound environment as a sound pattern. To this end, the concept of synesthesia is exploited: the keys, tones, and pitches of the monitored sound are expressed by the three elements of color, namely hue, saturation, and brightness, respectively. Based on a previous synesthesia experiment, it is assumed that hue, saturation, and brightness can be derived from the chromagram, the sonogram, and the sound spectrogram, respectively. The sound pattern can then be drawn in color, yielding a "painted sound map." The usefulness of the proposed monitoring technique is verified using environmental sound data observed at a galleria.
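
The sketch below illustrates one way the chromagram/sonogram/spectrogram-to-HSV mapping described in the abstract could be realized. It is not the authors' implementation: the library choice (librosa, NumPy, Matplotlib), the input file name, the use of a mel spectrogram as a stand-in for the sonogram, and the per-frame normalization are all assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's method) of a "painted sound map":
# per time frame, hue is taken from the chromagram, saturation from a
# loudness-like mel "sonogram", and brightness from the spectrogram,
# and the resulting HSV sequence is rendered as a colored strip.
import numpy as np
import librosa
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

# Hypothetical input file; replace with any environmental-sound recording.
y, sr = librosa.load("galleria_recording.wav", sr=None)

n_fft, hop = 2048, 512
S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))        # magnitude spectrogram
chroma = librosa.feature.chroma_stft(S=S**2, sr=sr)             # 12 x T chromagram
mel = librosa.feature.melspectrogram(S=S**2, sr=sr)             # mel spectrogram as a sonogram proxy

# Hue: dominant pitch class per frame, mapped onto the color circle (0..1).
hue = np.argmax(chroma, axis=0) / 12.0

# Saturation: per-frame loudness from the mel sonogram, min-max normalized to 0..1.
loud = librosa.power_to_db(mel.mean(axis=0), ref=np.max)
sat = (loud - loud.min()) / (loud.max() - loud.min() + 1e-12)

# Brightness (value): per-frame spectral energy from the spectrogram, normalized to 0..1.
energy = librosa.amplitude_to_db(S.mean(axis=0), ref=np.max)
val = (energy - energy.min()) / (energy.max() - energy.min() + 1e-12)

# Render one colored column per frame: a simple painted-sound-map strip.
hsv = np.stack([hue, sat, val], axis=-1)[np.newaxis, :, :]      # shape (1, T, 3)
rgb = hsv_to_rgb(hsv)
plt.figure(figsize=(10, 1.5))
plt.imshow(rgb, aspect="auto", extent=[0, len(y) / sr, 0, 1])
plt.yticks([])
plt.xlabel("Time (s)")
plt.title("Painted sound map (illustrative sketch)")
plt.tight_layout()
plt.show()
```

In this sketch the three color channels are normalized independently per recording; any practical monitoring setup would instead fix the mapping (e.g., calibrated loudness and energy ranges) so that maps from different observation periods remain comparable.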

