ISRN Robotics, 2013

Classification of Clothing Using Midlevel Layers

DOI: 10.5402/2013/630579


Abstract:

We present a multilayer approach to classifying articles of clothing within a pile of laundry. The classification features are composed of color, texture, shape, and edge information from 2D and 3D data, within both a local and a global perspective. The contribution of this paper is a novel classification approach termed L-M-H (more specifically, L-C-S-H for clothing classification). The multilayer approach compartmentalizes the problem into a high (H) layer, multiple midlevel layers (characteristics (C) and selection masks (S)), and a low (L) layer, producing “local” solutions to solve the global classification problem. Experiments demonstrate the ability of the system to efficiently classify each article of clothing into one of seven categories (pants, shorts, shirts, socks, dresses, cloths, or jackets). The results presented in this paper show that, on average, the classification rates improve over the baseline SVM system (Chang and Lin, 2001) by +27.47% for three categories (Willimon et al., 2011), +17.90% for four categories, and +10.35% for seven categories.

1. Introduction

Sorting laundry is a common routine that involves classifying and labeling each piece of clothing, yet the task is not close to becoming an automated procedure. The laundry process consists of several steps: handling, washing, drying, separating/isolating, classifying, unfolding/flattening, folding, and putting the clothes away into a predetermined drawer or storage unit. Figure 1 gives a high-level flow chart of these steps. In the past, several bodies of work have attempted to solve the tasks of handling [1–8], separating/isolating [8–12], classifying [6, 9, 11–15], unfolding/flattening [14, 16], and folding [17] clothes.

Figure 1: Overview of the laundry process, adapted from [10]. Green areas represent parts of the process that have already been explored in previous work, while the red area represents the part that is the focus of this paper.

A robotic classification system is designed to accurately sort a pile of clothes into predefined categories, before and after the washing/drying process. Laundry is normally sorted by individual, then by category. Our procedure allows clothing to be classified/sorted by category, age, gender, color (i.e., whites, colors, darks), or season of use. The problem that we address in this paper is grouping isolated articles of clothing into a specified category (e.g., shirts, pants, shorts, cloths, socks, dresses, jackets) using midlevel layers (i.e., physical characteristics and selection masks).
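The layered pipeline described above can be summarized in a short sketch. The code below is a minimal, hypothetical illustration, not the authors' implementation: low-level 2D/3D feature vectors feed one binary SVM per clothing characteristic, and the characteristic confidences form a midlevel feature vector for a high-layer multiclass SVM over the seven categories. The selection-mask (S) layer is collapsed for brevity, the characteristic names are invented placeholders, and scikit-learn's SVC is used because it wraps LIBSVM, the SVM library cited in the paper (Chang and Lin, 2001).

```python
# Hypothetical sketch of the layered (L, C/S, H) classifier described above;
# names and structure are illustrative assumptions, not the authors' code.
import numpy as np
from sklearn.svm import SVC

CHARACTERISTICS = ["collar", "sleeves", "buttons", "pockets"]  # invented midlevel cues
CATEGORIES = ["pants", "shorts", "shirts", "socks", "dresses", "cloths", "jackets"]

class LayeredClothingClassifier:
    """Low (L) layer: color/texture/shape/edge feature vectors from 2D/3D data.
    Midlevel (C) layer: one binary SVM per clothing characteristic.
    High (H) layer: a multiclass SVM over the midlevel confidences."""

    def __init__(self):
        self.char_svms = {c: SVC(kernel="rbf", probability=True)
                          for c in CHARACTERISTICS}
        self.high_svm = SVC(kernel="linear")  # libsvm handles multiclass one-vs-one

    def _midlevel(self, X):
        # Stack each characteristic SVM's confidence (P of label 1) into a vector.
        return np.column_stack([self.char_svms[c].predict_proba(X)[:, 1]
                                for c in CHARACTERISTICS])

    def fit(self, X, char_labels, category_labels):
        # X: (n_samples, n_features) low-level features; char_labels[c]: 0/1 labels.
        for c in CHARACTERISTICS:
            self.char_svms[c].fit(X, char_labels[c])
        self.high_svm.fit(self._midlevel(X), category_labels)

    def predict(self, X):
        return self.high_svm.predict(self._midlevel(X))
```

The high layer never sees the raw pixels or point clouds; it reasons only over the midlevel outputs, which is the sense in which the approach builds “local” solutions to the global classification problem.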

References

[1]  S. Hata, T. Hiroyasu, J. Hayash, H. Hojoh, and T. Hamada, “Flexible handling robot system for cloth,” in Proceedings of the IEEE International Conference on Mechatronics and Automation (ICMA '09), pp. 49–54, August 2009.
[2]  Y. Yoshida, J. Hayashi, S. Hata, H. Hojoh, and T. Hamada, “Status estimation of cloth handling robot using force sensor,” in Proceedings of the IEEE International Symposium on Industrial Electronics (ISIE '09), pp. 339–343, July 2009.
[3]  K. Salleh, H. Seki, Y. Kamiya, and M. Hikizu, “Tracing manipulation in clothes spreading by robot arms,” Journal of Robotics and Mechatronics, vol. 18, no. 5, pp. 564–571, 2006.
[4]  Y. Kita and N. Kita, “A model-driven method of estimating the state of clothes for manipulating it,” in Proceedings of the 6th Workshop on Applications of Computer Vision, pp. 63–69, 2002.
[5]  Y. Kita, F. Saito, and N. Kita, “A deformable model driven visual method for handling clothes,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3889–3895, May 2004.
[6]  Y. Kita, T. Ueshiba, E. Neo, and N. Kita, “A method for handling a specific part of clothing by dual arms,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '09), pp. 3403–3408, 2009.
[7]  P. Gibbons, P. Culverhouse, and G. Bugmann, “Visual identification of grasp locations on clothing for a personal robot,” in Proceedings of the Conference Towards Autonomous Robotics Systems (TAROS '09), pp. 78–81, August 2009.
[8]  H. Kobayashi, S. Hata, H. Hojoh, T. Hamada, and H. Kawai, “A study on handling system for cloth using 3-D vision sensor,” in Proceedings of the 34th Annual Conference of the IEEE Industrial Electronics Society (IECON '08), pp. 3403–3408, November 2008.
[9]  B. Willimon, S. Birchfield, and I. Walker, “Classification of clothing using interactive perception,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 1862–1868, 2011.
[10]  M. Kaneko and M. Kakikura, “Planning strategy for unfolding task of clothes—isolation of clothes from washed mass,” in Proceedings of the 13th Annual Conference of the Robotics Society of Japan (RSJ '96), pp. 455–456, 1996.
[11]  M. Kaneko and M. Kakikura, “Planning strategy for putting away laundry—isolating and unfolding task,” in Proceedings of the IEEE International Symposium on Assembly and Task Planning (ISATP '01), pp. 429–434, May 2001.
[12]  F. Osawa, H. Seki, and Y. Kamiya, “Unfolding of massive laundry and classification types by dual manipulator,” Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 11, no. 5, pp. 457–463, 2007.
[13]  Y. Kita, T. Ueshiba, E. S. Neo, and N. Kita, “Clothes state recognition using 3D observed data,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 1220–1225, May 2009.
[14]  M. Cusumano-Towner, A. Singh, S. Miller, J. F. O’Brien, and P. Abbeel, “Bringing clothing into desired configurations with limited perception,” in Proceedings of the International Conference on Robotics and Automation, May 2011.
[15]  B. Willimon, I. Walker, and S. Birchfield, “A new approach to clothing classification using mid-level layers,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '13), 2013.
[16]  B. Willimon, S. Birchfield, and I. Walker, “Model for unfolding laundry using interactive perception,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), 2011.
[17]  S. Miller, M. Fritz, T. Darrell, and P. Abbeel, “Parametrized shape models for clothing,” in Proceedings of the International Conference on Robotics and Automation, pp. 4861–4868, May 2011.
[18]  D. Katz and O. Brock, “Manipulating articulated objects with interactive perception,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '08), pp. 272–277, May 2008.
[19]  J. Kenney, T. Buckley, and O. Brock, “Interactive segmentation for manipulation in unstructured environments,” in Proceedings of the International Conference on Robotics and Automation (ICRA '09), pp. 1377–1382, 2009.
[20]  P. Fitzpatrick, “First contact: an active vision approach to segmentation,” in Proceedings of the International Conference on Intelligent Robots and Systems (IROS '03), pp. 2161–2166, 2003.
[21]  B. Willimon, S. Birchfield, and I. Walker, “Rigid and non-rigid classification using interactive perception,” in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 1728–1733, October 2010.
[22]  R. B. Willimon, Interactive perception for cluttered environments [M.S. thesis], Clemson University, 2009.
[23]  C. C. Chang and C. J. Lin, LIBSVM: A Library for Support Vector Machines, 2001.
[24]  H. M. Wallach, “Topic modeling: beyond bag-of-words,” in Proceedings of the 23rd International Conference on Machine Learning (ICML '06), pp. 977–984, June 2006.
[25]  A. Ramisa, G. Alenyà, F. Moreno-Noguer, and C. Torras, “Using depth and appearance features for informed robot grasping of highly wrinkled clothes,” in Proceedings of the International Conference on Robotics and Automation, pp. 1703–1708, 2012.
[26]  P. F. Felzenszwalb and D. P. Huttenlocher, “Efficient graph-based image segmentation,” International Journal of Computer Vision, vol. 59, no. 2, pp. 167–181, 2004.
[27]  J. F. Canny, “A computational approach to edge detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.
[28]  K. Kimura, S. Kikuchi, and S. Yamasaki, “Accurate root length measurement by image analysis,” Plant and Soil, vol. 216, no. 1, pp. 117–127, 1999.
[29]  R. B. Rusu, G. Bradski, R. Thibaux, and J. Hsu, “Fast 3D recognition and pose using the viewpoint feature histogram,” in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 2155–2162, October 2010.
[30]  D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[31]  R. Rusu, N. Blodow, and M. Beetz, “Fast point feature histograms (FPFH) for 3D registration,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), Intelligent Autonomous Systems IAS, May 2009.
[32]  A. Gidudu, G. Hulley, and T. Marwala, “Image classification using SVMs: one-against-one vs one-against-all,” in Proceedings of the 28th Asian Conference on Remote Sensing, 2007.
[33]  J. Milgram, M. Cheriet, and R. Sabourin, “One against one or one against all: which one is better for handwriting recognition with SVMs?” in Proceedings of the 10th International Workshop on Frontiers in Handwriting Recognition, October 2006.
[34]  R. Rifkin and A. Klautau, “In defense of one-vs-all classification,” Journal of Machine Learning Research, vol. 5, pp. 101–141, 2004.
[35]  K.-B. Duan and S. S. Keerthi, “Which is the best multiclass SVM method? An empirical study,” in Proceedings of the 6th International Workshop on Multiple Classifier Systems, pp. 278–285, 2005.
[36]  P. A. Devijver and J. Kittler, Pattern Recognition: A Statistical Approach, Prentice Hall, London, UK, 1982.
