IMPROVING HAND POSTURE RECOGNITION PERFORMANCE USING MULTI-MODALITIES

  • Doan Huong Giang
Keywords: Electronic Home Appliances, Deep Learning, Machine Learning, Hand Posture/Gesture Recognition, Human Machine Interaction, Multi-modalities, Late Fusion, Early Fusion.

Abstract

Hand gesture recognition has been researched for a long time. However, the performance of such methods in practical applications still faces many challenges due to variations in hand pose, hand shape, viewpoint, complex backgrounds, illumination, and subject style. In this work, we thoroughly investigate hand representations produced by various feature extractors on independent modalities (RGB images and Depth images). To this end, we concatenate features from different modalities to obtain highly competitive accuracy. To evaluate the robustness of the method, two datasets are used: the first is a self-captured dataset comprising six hand gestures in an indoor environment with complex backgrounds; the second is a published dataset with 10 hand gestures. Experiments with RGB and/or Depth images on both datasets show that combining information flows has a strong impact on recognition results. Additionally, CNN-based performance is generally improved by multi-feature combination, and the results are compared with those of hand-crafted feature extractors. The proposed method offers a feasible and robust solution to technical issues in developing HCI applications based on hand posture recognition.
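To illustrate the fusion idea described in the abstract, the following is a minimal sketch of early fusion by feature concatenation over RGB and Depth inputs. It assumes pretrained ResNet-18 backbones as the per-modality CNN extractors and dummy input tensors; the actual extractors, input sizes, and classifier used in the paper may differ.

```python
import torch
import torch.nn as nn
from torchvision import models

class EarlyFusionExtractor(nn.Module):
    """Extracts CNN features from RGB and Depth frames and concatenates them."""
    def __init__(self):
        super().__init__()
        rgb_backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        depth_backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        # Drop the classification head; keep the globally pooled 512-d features.
        self.rgb_features = nn.Sequential(*list(rgb_backbone.children())[:-1])
        self.depth_features = nn.Sequential(*list(depth_backbone.children())[:-1])

    def forward(self, rgb, depth):
        f_rgb = self.rgb_features(rgb).flatten(1)        # (N, 512) RGB descriptor
        f_depth = self.depth_features(depth).flatten(1)  # (N, 512) Depth descriptor
        return torch.cat([f_rgb, f_depth], dim=1)        # (N, 1024) fused descriptor

# Hypothetical usage: the fused descriptor can be passed to any classifier
# (e.g. an SVM or a small fully connected layer) for hand posture recognition.
extractor = EarlyFusionExtractor().eval()
rgb = torch.randn(1, 3, 224, 224)    # dummy RGB frame
depth = torch.randn(1, 3, 224, 224)  # dummy depth map replicated to 3 channels
with torch.no_grad():
    descriptor = extractor(rgb, depth)
print(descriptor.shape)  # torch.Size([1, 1024])
```

Late fusion, by contrast, would train a separate classifier per modality and combine their scores; the early-fusion variant above simply merges the modality descriptors before classification.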

Published
2021-11-12
Section
Articles