OALib Journal (ISSN: 2333-9721)

Simulate Human Saccadic Scan-Paths in Target Searching

DOI: 10.4236/ijis.2016.61001, PP. 1-9

Keywords: Saccadic Scan-Paths, Eye Movement, Fixation Locations, Dynamic Scan-Paths


Abstract:

Human saccadic eye movement is a dynamic process of information pursuit. Many existing methods model human saccadic scan-paths using either global context cues or local context cues. In contrast, this paper introduces a model for gaze movement control that combines both global and local cues. To evaluate the model, an experiment was conducted to collect human eye movement data with an SMI iVIEW X Hi-Speed eye tracker sampling at 1250 Hz. The experiment used a two-by-four mixed design crossing two target locations with four initial fixation positions. We compare the saccadic scan-paths generated by the proposed model against human eye movement data on a face benchmark dataset. Experimental results demonstrate that the scan-paths simulated by the proposed model are similar to human saccades in terms of fixation order, Hausdorff distance, and prediction accuracy, for both static fixation locations and dynamic scan-paths.
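One of the comparison metrics named above, the Hausdorff distance, treats each scan-path as a set of (x, y) fixation points and measures the largest gap between the two sets: the farthest any fixation in one path lies from its nearest neighbor in the other. Below is a minimal sketch of that computation; the fixation coordinates are hypothetical illustrations, not data from the paper.

```python
import math

def directed_hausdorff(path_a, path_b):
    # For each fixation in path_a, find its nearest fixation in path_b,
    # then take the worst (largest) of those nearest-neighbor distances.
    return max(
        min(math.dist(a, b) for b in path_b)
        for a in path_a
    )

def hausdorff_distance(path_a, path_b):
    # Symmetric Hausdorff distance: the larger of the two directed distances.
    return max(directed_hausdorff(path_a, path_b),
               directed_hausdorff(path_b, path_a))

# Hypothetical example: a human scan-path and a simulated scan-path,
# each a sequence of (x, y) fixation coordinates in image pixels.
human = [(100, 120), (220, 140), (310, 200)]
model = [(105, 118), (230, 150), (300, 210)]
print(hausdorff_distance(human, model))
```

Note that the Hausdorff distance is order-insensitive, which is presumably why the paper also evaluates fixation order separately for dynamic scan-paths.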

