Project: General Hand Gesture Recognition
Overview
This project aims to create a unified, semi-supervised contrastive-learning framework for hand gesture recognition. The framework is designed to adapt efficiently to various downstream tasks, such as human-computer interaction and sign language recognition, with minimal retraining or fine-tuning.
Scope and Applications
[!NOTE] This section is a summary generated from the report by Grok. The contents have been double-checked by the author.
This section alone summarizes the main content of the report; the remaining sections describe how to set up the project and the purpose of specific scripts within the repository.
Key Areas Explored
Static-Pose Representation Learning
- Objective: Map hand landmark inputs into fixed-size feature embeddings.
- Approach: Compared three encoder architectures:
- Multi-layer Perceptron (MLP)
- Graph Convolutional Network (GCN)
- Graph Attention Network (GAT)
- Hypotheses Tested:
- Graph-based models (GCN and GAT), which leverage edge information, outperform MLP in accuracy and convergence speed. This was evaluated using supervised contrastive loss on the Lexset dataset.
- Incorporating a large unlabelled dataset (synthetic MANO data) with curriculum-based augmentations enhances model generalization.
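To make the setup above concrete, the sketch below shows one possible form of a graph-based encoder and the supervised contrastive (SupCon) loss applied to its embeddings. This is an illustrative assumption, not the project's actual code: the 5-landmark finger-chain graph, all dimensions, and the single-layer GCN are toy choices.

```python
import numpy as np

def normalized_adjacency(edges, n_nodes):
    """Symmetrically normalized adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, as in the standard GCN formulation."""
    A = np.eye(n_nodes)                          # self-loops
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0                  # undirected skeleton edges
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_encode(X, A_hat, W):
    """One graph-convolution step, ReLU(A_hat @ X @ W),
    then mean pooling over nodes -> a graph-level embedding."""
    H = np.maximum(A_hat @ X @ W, 0.0)
    return H.mean(axis=0)

def supcon_loss(Z, labels, temperature=0.1):
    """Supervised contrastive loss over L2-normalized embeddings."""
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    logits = Z @ Z.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numeric stability
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    exp = np.exp(logits)
    exp[self_mask] = 0.0                         # exclude each anchor itself
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))
    labels = np.asarray(labels)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # mean log-likelihood of positives per anchor, averaged over anchors
    return -((log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)).mean()

# Toy batch: 6 "hand poses" of 5 landmarks each (a single finger chain),
# 3 gesture classes with 2 samples per class.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
A_hat = normalized_adjacency(edges, n_nodes=5)
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))                     # (x, y, z) -> 16-dim node features
poses = rng.normal(size=(6, 5, 3))               # (batch, landmarks, coords)
Z = np.stack([gcn_encode(X, A_hat, W) for X in poses])
labels = [0, 0, 1, 1, 2, 2]
loss = supcon_loss(Z, labels)
print(Z.shape)                                   # (6, 16)
```

A GAT variant would replace the fixed normalized adjacency with learned attention weights over the same edges, which is the main architectural difference being compared.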
Extension to Dynamic Gesture Recognition
- Objective: Extend the contrastive learning approach to recognize dynamic gestures.
- Approach: Utilize sequential architectures like Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) units to model temporal dependencies in gesture sequences.
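As a hedged illustration of the sequential approach, the sketch below runs a single-layer LSTM forward pass over a sequence of per-frame landmark features. The dimensions (30 frames, 63 = 21 landmarks × 3 coordinates, 32 hidden units) are assumptions for the example, not the project's settings.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(X, W, U, b, hidden):
    """Run one LSTM layer over a (T, input_dim) sequence and
    return the final hidden state as the sequence embedding."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x_t in X:
        z = W @ x_t + U @ h + b            # all four gate pre-activations at once
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)         # gated cell-state update
        h = o * np.tanh(c)                 # emit hidden state for this frame
    return h

rng = np.random.default_rng(0)
T, d, hidden = 30, 63, 32                  # assumed: 30 frames, 21 landmarks x 3 coords
X = rng.normal(size=(T, d)) * 0.1          # one gesture sequence
W = rng.normal(size=(4 * hidden, d)) * 0.1
U = rng.normal(size=(4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)

h_final = lstm_forward(X, W, U, b, hidden)
print(h_final.shape)                       # (32,)
```

The final hidden state plays the same role for a gesture sequence that the pooled graph embedding plays for a static pose, so the same contrastive objective can be applied on top of it.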