
OALib Journal
ISSN: 2333-9721


Few-Shot Learning Method Based on Meta-Learning and Location Information

DOI: 10.12677/JISP.2023.122020, PP. 200-209

Keywords: Few-Shot Learning, Meta-Learning, Location Information Attention, Long-Term Dependencies, Nearest Neighbor Classifier


Abstract:

With the development of few-shot learning, meta-learning has become a popular few-shot learning framework; its role is to develop models for few-shot classification tasks that adapt quickly to limited data at low computational cost. Recent studies on attention have shown that channel attention improves feature extraction to a certain extent, but it ignores location information, which is important for learning effectively from limited data in few-shot tasks. Based on this observation, this paper proposes a new method that effectively combines location information with extracted features: a classifier augmented with location-information attention is first pre-trained on all base classes, and meta-learning is then performed with a nearest-centroid few-shot classification algorithm. In experiments on two standard datasets, compared with current mainstream few-shot image classification methods, the proposed method improves accuracy by 1.23% and 1.02% on the 1-shot and 5-shot tasks of the Mini-ImageNet dataset, and by 0.85% and 0.78% on the Tiered-ImageNet dataset. The experiments show that the method effectively exploits location information and can improve the accuracy of few-shot image classification.
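The nearest-centroid classification stage described above — assigning each query image to the class whose support-set mean embedding is closest — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name is invented, the embeddings are assumed to come from a pre-trained feature extractor (with location-information attention) that is not shown, and cosine similarity is one common choice of metric.

```python
import numpy as np

def nearest_centroid_classify(support_feats, support_labels, query_feats):
    """Sketch of nearest-centroid few-shot classification.

    support_feats: (N, D) embeddings of the support set
    support_labels: (N,) integer class labels for the support set
    query_feats:   (Q, D) embeddings of the query set
    Returns the predicted class label for each query, by cosine
    similarity to the per-class centroids (prototypes).
    """
    classes = np.unique(support_labels)
    # Each class prototype is the mean embedding of its support examples.
    centroids = np.stack([support_feats[support_labels == c].mean(axis=0)
                          for c in classes])
    # L2-normalise so the dot product equals cosine similarity.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    sims = q @ c.T                      # (Q, num_classes) similarity matrix
    return classes[sims.argmax(axis=1)]  # most similar centroid per query
```

For example, in a 2-way 2-shot episode with 2-dimensional toy embeddings, a query close to the first class's support examples is assigned label 0.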


