
Project: General Hand Gesture Recognition

· 3 min read

View Project Report

Tags: PyTorch · Computer Vision · Contrastive Learning

Overview

This project aims to create a unified, semi-supervised contrastive-learning framework for hand gesture recognition. The framework is designed to adapt efficiently to various downstream tasks, such as human-computer interaction and sign language recognition, with minimal retraining or fine-tuning.

Scope and Applications

[!NOTE] This section is a summary generated from the report by Grok. The contents have been double-checked by the author.

This section alone covers the main content of the report; the remaining sections describe how to set up the project and the purpose of specific scripts within the repository.

Key Areas Explored

Static-Pose Representation Learning

  • Objective: Map hand landmark inputs (shape 21 × 3) into feature embeddings (size 128).
  • Approach: Compared three encoder architectures:
    • Multi-layer Perceptron (MLP)
    • Graph Convolutional Network (GCN)
    • Graph Attention Network (GAT)
  • Hypotheses Tested:
    1. Graph-based models (GCN and GAT), which leverage edge information, outperform MLP in accuracy and convergence speed. This was evaluated using supervised contrastive loss on the Lexset dataset.
    2. Incorporating a large unlabelled dataset (synthetic MANO data) with curriculum-based augmentations enhances model generalization.
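To make the setup concrete, here is a rough sketch (not the project's actual code) of the simplest of the three encoders, an MLP that maps 21 × 3 hand landmarks to a unit-norm 128-d embedding, trained with a simplified supervised contrastive loss. All layer sizes and the temperature value are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPEncoder(nn.Module):
    """Maps 21x3 hand landmarks to a 128-d unit-norm embedding (illustrative sizes)."""
    def __init__(self, in_dim=21 * 3, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x):
        # x: (batch, 21, 3) -> flatten to (batch, 63)
        z = self.net(x.flatten(1))
        return F.normalize(z, dim=1)  # project embeddings onto the unit sphere

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """Simplified supervised contrastive loss: pull same-label embeddings together."""
    sim = z @ z.T / temperature                        # (B, B) pairwise similarities
    pos_mask = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos_mask.fill_diagonal_(False)                     # exclude self-pairs
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability
    exp = torch.exp(logits)
    denom = exp.sum(dim=1) - torch.exp(torch.diagonal(logits))   # all non-self pairs
    log_prob = logits - torch.log(denom).unsqueeze(1)
    pos_per_row = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_per_row
    return loss.mean()

encoder = MLPEncoder()
x = torch.randn(8, 21, 3)                    # a batch of 8 hand poses
labels = torch.tensor([0, 0, 1, 1, 0, 1, 2, 2])
loss = supervised_contrastive_loss(encoder(x), labels)
```

The GCN and GAT variants differ only in the encoder: instead of flattening the landmarks, they treat the 21 keypoints as graph nodes connected by the skeletal edges of the hand, which is exactly the edge information the first hypothesis credits for their advantage.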

Extension to Dynamic Gesture Recognition

  • Objective: Extend the contrastive learning approach to recognize dynamic gestures.
  • Approach: Utilize sequential architectures like Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) units to model temporal dependencies in gesture sequences.
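A minimal sketch of how such a temporal head could sit on top of the per-frame pose embeddings (class count, hidden size, and names are illustrative assumptions, not taken from the report):

```python
import torch
import torch.nn as nn

class GestureLSTM(nn.Module):
    """Classifies a sequence of per-frame 128-d pose embeddings (illustrative sketch)."""
    def __init__(self, embed_dim=128, hidden_dim=256, num_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, seq):
        # seq: (batch, frames, embed_dim) — one embedding per video frame
        _, (h_n, _) = self.lstm(seq)
        return self.head(h_n[-1])  # logits from the final hidden state

model = GestureLSTM()
logits = model(torch.randn(4, 30, 128))  # 4 clips, 30 frames each
```

Feeding the LSTM frozen contrastive embeddings rather than raw landmarks is one plausible way to reuse the static-pose encoder across dynamic-gesture tasks with minimal retraining, in line with the framework's stated goal.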