Palm Recognition System in Human-Robot Interaction as Commands for a Mobile Robot

  • Panji Setyo Suharso, Institut Teknologi Sepuluh Nopember, Indonesia
Keywords: Palm Recognition System, Human-Robot Interaction, Mobile Robot, Image Processing, Non-Verbal Commands

Abstract

This research aims to develop a palm recognition system for human-robot interaction that allows a mobile robot to receive commands and instructions non-verbally from humans. Using image recognition and data processing technology, the system is intended to create intuitive interaction between humans and robots, enabling the robot to recognize hand gestures and understand the commands given. The research comprises several stages of development. First, image processing and analysis are carried out to identify and understand the shape and movement of the human palm. The extracted data are then mapped to the appropriate commands and communicated to the mobile robot. A training process improves the accuracy of the palm recognition system and reduces the likelihood of errors in command interpretation. The results of this study are expected to have a positive impact on the use of mobile robots, especially in complicated or dangerous environments where verbal communication may be limited or impossible; more natural and easier human-robot interaction should improve both the efficiency and the safety of mobile robot operation.
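
The abstract outlines a pipeline (detect the palm, classify the gesture, map it to a robot command) without giving implementation details. Purely as an illustrative sketch, the Python snippet below shows one common way to realize such a loop, assuming MediaPipe Hands for hand-landmark detection and a simple finger-counting heuristic as a stand-in classifier; the gesture-to-command table and the robot link are hypothetical, not taken from the paper.

import cv2
import mediapipe as mp

# Hypothetical gesture-to-command table; the paper does not publish its mapping.
COMMANDS = {0: "STOP", 1: "FORWARD", 2: "BACKWARD", 3: "TURN_LEFT", 4: "TURN_RIGHT"}

FINGER_TIPS = [8, 12, 16, 20]   # index/middle/ring/pinky fingertip landmark ids
FINGER_PIPS = [6, 10, 14, 18]   # matching PIP joints (thumb ignored for simplicity)

def count_extended_fingers(landmarks):
    # A finger counts as extended when its tip lies above its PIP joint
    # (image y grows downward) -- a crude stand-in for a trained classifier.
    return sum(1 for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)
               if landmarks[tip].y < landmarks[pip].y)

def main():
    cap = cv2.VideoCapture(0)
    with mp.solutions.hands.Hands(max_num_hands=1,
                                  min_detection_confidence=0.7) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV delivers BGR.
            result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_hand_landmarks:
                lm = result.multi_hand_landmarks[0].landmark
                command = COMMANDS.get(count_extended_fingers(lm), "STOP")
                print(command)  # stand-in for sending the command to the robot
            cv2.imshow("palm command", frame)
            if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
                break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()

In a deployed system the print call would be replaced by a transport to the mobile robot (a serial link, a Wi-Fi socket, or a ROS topic), and the finger-counting heuristic by the trained palm recognition model the abstract describes.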

Published
2023-07-24