OALib Journal
ISSN: 2333-9721

Human Action Description Algorithm Based on Dense Spatio-Temporal Interest Points in Depth Data

DOI: 10.16451/j.cnki.issn1003-6059.201510009, pp. 939-945

Keywords: depth data, dense spatio-temporal interest points, human action description, trajectory tracking


Abstract:

Action recognition based on depth data has recently attracted considerable attention, yet no robust and discriminative action description algorithm for depth data has emerged so far. To address this problem, this paper proposes a human action description algorithm based on dense spatio-temporal interest points in depth data. The algorithm selects multi-scale dense spatio-temporal interest points from the depth features, tracks these interest points and stores the corresponding trajectories, and then describes actions from the trajectory information. Evaluations on the DHA, MSRAction3D, and UTKinect depth action datasets show that the proposed algorithm outperforms several representative algorithms.
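The pipeline the abstract outlines (dense sampling of interest points, tracking them across frames, and describing the action from trajectory shape) can be sketched roughly as follows. This is a hypothetical illustration in the general spirit of dense-trajectory methods, not the paper's implementation; the function names, the abstract flow model, and the descriptor normalization are all assumptions.

```python
# Hypothetical sketch (NOT the authors' code): densely sample points on a
# grid, track them through per-frame displacement fields, and describe each
# trajectory by its magnitude-normalized displacement vectors.
from math import hypot

def sample_grid(width, height, step):
    """Densely sample interest points on a regular grid."""
    return [(x, y) for y in range(0, height, step)
                   for x in range(0, width, step)]

def track(points, flows):
    """Follow each point through a list of flow functions.

    Each flow maps (x, y) -> (dx, dy) for one frame transition;
    in a real system this would come from dense optical flow."""
    trajectories = [[p] for p in points]
    for flow in flows:
        for traj in trajectories:
            x, y = traj[-1]
            dx, dy = flow(x, y)
            traj.append((x + dx, y + dy))
    return trajectories

def describe(traj):
    """Trajectory-shape descriptor: per-frame displacement vectors,
    normalized by the total displacement magnitude."""
    disps = [(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(traj, traj[1:])]
    norm = sum(hypot(dx, dy) for dx, dy in disps) or 1.0
    return [c / norm for d in disps for c in d]

# Toy example: 4 grid points under constant rightward motion of 2 px/frame.
pts = sample_grid(8, 8, 4)
flows = [lambda x, y: (2.0, 0.0)] * 5       # 5 frame transitions
trajs = track(pts, flows)
desc = describe(trajs[0])                   # 5 normalized (dx, dy) pairs
```

In a full system the resulting descriptors would typically be quantized into a bag-of-features representation before classification; the sketch stops at the per-trajectory descriptor.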


