Unified learning approach for egocentric hand gesture recognition and fingertip detection

Overview

Unified Gesture Recognition and Fingertip Detection

A unified convolutional neural network (CNN) algorithm for simultaneous hand gesture recognition and fingertip detection. The proposed algorithm uses a single network to predict both finger class probabilities for classification and fingertip positions for regression in one evaluation. The gesture is recognized from the finger class probabilities, and the fingertips are localized using both pieces of information. Instead of directly regressing the fingertip positions from the fully connected (FC) layer of the CNN, we regress an ensemble of fingertip positions from a fully convolutional network (FCN) and subsequently take the ensemble average to obtain the final fingertip positions.
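
A minimal post-processing sketch of this idea, assuming five finger classes, a 0.5 threshold, and a flat (x1, y1, ..., x5, y5) positional output; these specifics are illustrative assumptions, not taken from the repository code:

import numpy as np

def decode(probability, position, threshold=0.5):
    # probability: shape (5,), per-finger class probabilities
    # position: shape (10,), ensemble-averaged (x, y) for the five fingertips
    fingers_up = probability > threshold   # boolean gesture code, e.g. [1, 1, 0, 0, 0]
    xy = position.reshape(5, 2)
    fingertips = xy[fingers_up]            # keep fingertips of raised fingers only
    return fingers_up, fingertips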

Update

Robust real-time hand detection using YOLO has been added for smoother performance in the first stage of the detection system, and most of the code has been cleaned and restructured for ease of use. To get the previous versions, please visit the release section.

Requirements

  • TensorFlow-GPU==2.2.0
  • OpenCV==4.2.0
  • ImgAug==0.2.6
  • Weights: Download the pre-trained weights files of the unified gesture recognition and fingertip detection model and put the weights folder in the working directory.
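
A minimal install sketch for the listed requirements; the exact opencv-python patch release (4.2.0.34) is an assumption, chosen to match OpenCV 4.2.0:

directory > pip install tensorflow-gpu==2.2.0 opencv-python==4.2.0.34 imgaug==0.2.6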

The weights folder contains three weights files. The fingertip.h5 file is for unified gesture recognition and fingertip detection. The yolo.h5 and solo.h5 files are for the YOLO and SOLO (single object localization) methods of hand detection, respectively.

Paper

To get more information about the proposed method and experiments, please go through the paper. Cite the paper as:

@article{alam2021unified,
  title={Unified learning approach for egocentric hand gesture recognition and fingertip detection},
  author={Alam, Mohammad Mahmudul and Islam, Mohammad Tariqul and Rahman, SM Mahbubur},
  journal={Pattern Recognition},
  volume={121},
  pages={108200},
  year={2021},
  publisher={Elsevier}
}

Dataset

The proposed gesture recognition and fingertip detection model is trained on the Scut-Ego-Gesture Dataset, which comprises eleven different single-hand gesture datasets. Among these eleven gesture datasets, eight are considered for experimentation. A detailed explanation of the dataset partition, along with the lists of images used in the training, validation, and test sets, is provided in the dataset/ folder.

Network Architecture

To implement the algorithm, the following network architecture is proposed, where a single CNN is utilized for both hand gesture recognition and fingertip detection.
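
A hedged Keras sketch of this two-headed design, assuming a 128x128 input, illustrative layer widths, and illustrative output names; the actual architecture is defined in net/network.py:

import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(128, 128, 3))                       # assumed input size
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
features = layers.MaxPooling2D()(x)                              # shared backbone features

# Classification head: per-finger class probabilities (gesture recognition)
probability = layers.Dense(5, activation='sigmoid',
                           name='probabilistic_output')(layers.Flatten()(features))

# Regression head: a fully convolutional ensemble of fingertip positions,
# averaged over the ensemble to produce the final positional output
ensemble = layers.Conv2D(10, 1, activation='linear')(features)   # 5 fingertips x (x, y)
ensemble = layers.Reshape((-1, 10))(ensemble)                    # one ensemble member per spatial cell
position = layers.Lambda(lambda t: tf.reduce_mean(t, axis=1),
                         name='positional_output')(ensemble)

model = Model(inputs=inputs, outputs=[probability, position])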

Prediction

To get the prediction on a single image, run the predict.py file. It will run the prediction on the sample image stored in the data/ folder. Here is the output for the sample.jpg image.
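
A usage example for the single-image prediction described above (sample.jpg is the file named in the text):

directory > python predict.py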

Real-Time!

To run in real time, simply clone the repository, download the weights files, and then run the real-time.py file.

directory > python real-time.py

In real-time execution, there are two stages. In the first stage, the hand is detected using either the You Only Look Once (YOLO) or the Single Object Localization (SOLO) algorithm; YOLO is used by default. The detected hand portion is then cropped and fed to the second stage for gesture recognition and fingertip detection.
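
A hedged sketch of this two-stage loop; the hand-detector interface, the 128x128 crop size, and the normalization are assumptions, not the repository's exact API:

import cv2
import numpy as np

def run_frame(frame, hand_detector, fingertip_model):
    # First stage: hand detection (YOLO by default); detect() returning the
    # top-left and bottom-right corners of the hand box is an assumed interface.
    tl, br = hand_detector.detect(frame)
    if tl is None or br is None:
        return None
    # Second stage: crop the hand and run gesture recognition + fingertip detection
    crop = frame[tl[1]:br[1], tl[0]:br[0]]
    crop = cv2.resize(crop, (128, 128)).astype(np.float32) / 255.0   # assumed input size
    probability, position = fingertip_model.predict(np.expand_dims(crop, axis=0))
    return probability[0], position[0]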

Output

Here is the output of the unified gesture recognition and fingertip detection model for all eight classes of the dataset, where not only is each fingertip detected but each finger is also classified.

Comments
  • Datasets

    Hello, I have a question about the dataset from your readme. I can't download the Scut-Ego-Gesture Dataset because the website is banned in China. Can you share it with me another way, for example via Google or QQ email: [email protected]?

    opened by CVUsers 10
  • How to download the weights? The code does not contain them.

    The readme says the weights folder contains three weights files: comparison.h5 for the first five classes, performance.h5 for the first eight classes, and solo.h5 for hand detection, but there is no download link.

    opened by mmxuan18 6
  • OSError: Unable to open file (unable to open file: name = 'yolo.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

    I used macOS to run the real-time.py file and got the OSError. I also searched on Google and found others with the same problem. It is probably a Keras problem, but I do not know how to solve it.

    opened by Hanswanglin 4
  • OSError: Unable to open file (unable to open file: name = 'weights/performance.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

    File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper File "h5py/h5f.pyx", line 88, in h5py.h5f.open OSError: Unable to open file (unable to open file: name = 'weights/performance.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

    opened by Jasonmes 2
  • left hand?

    Hi, first it's really cool work!

    Is the left hand included in the training images? I have been playing around with some of my own images and it seems that it doesn't really recognize the left hand in a palm-down position...

    If I want to include the left hand, do you think it would be possible if I train the network with the image flipped?
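
    A hedged augmentation sketch for this idea, assuming fingertip labels are (x, y) pixel pairs; it simply mirrors the image and the x-coordinates so left-hand examples can be generated from right-hand ones:

    import cv2

    def flip_sample(image, fingertips):
        # fingertips: list of (x, y) pairs in pixel coordinates
        h, w = image.shape[:2]
        flipped = cv2.flip(image, 1)                     # horizontal flip
        mirrored = [(w - 1 - x, y) for (x, y) in fingertips]
        return flipped, mirrored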

    opened by myhjiang 1
  • Why are there two hand detection methods provided?

    A wonderful work! As mentioned above, the YOLO and SOLO detection models are provided. I wonder what the advantage of each model is compared to the other, and what dataset was used to train the detector.

    opened by DanielMao2015 1
  • Difference of classes5.h5 and classes8.h5

    Hi, may I know the difference when training classes5 and classes8? Is the difference only in the dataset used for training, by excluding SingleSix, SingleSeven, and SingleEight, or are there other modifications such as changes to the model structure or parameters?

    Thanks

    opened by danieltanimanuel 1
  • Using old versions of tensorflow, can't install the dependencies on my macbook and with newer versions it's constantly failing.

    When trying to install the required version of tensorflow:

    pip3 install tensorflow==1.15.0
    ERROR: Could not find a version that satisfies the requirement tensorflow==1.15.0 (from versions: 2.2.0rc3, 2.2.0rc4, 2.2.0, 2.2.1, 2.2.2, 2.3.0rc0, 2.3.0rc1, 2.3.0rc2, 2.3.0, 2.3.1, 2.3.2, 2.4.0rc0, 2.4.0rc1, 2.4.0rc2, 2.4.0rc3, 2.4.0rc4, 2.4.0, 2.4.1)
    ERROR: No matching distribution found for tensorflow==1.15.0
    

    I even tried downloading the .whl file from PyPI and installing it manually, but that didn't work either:

    pip3 install ~/Downloads/tensorflow-1.15.0-cp37-cp37m-macosx_10_11_x86_64.whl
    ERROR: tensorflow-1.15.0-cp37-cp37m-macosx_10_11_x86_64.whl is not a supported wheel on this platform.
    

    Tried with both python3.6 and python3.8

    So it would be great to update the dependencies :)

    opened by KoStard 1
  • Custom Model keyword arguments Error

    Change model = Model(input=model.input, outputs=[probability, position]) to model = Model(inputs=model.input, outputs=[probability, position]) on line 22 of net/network.py

    opened by Rohit-Jain-2801 1
  • Problem of weights

    Hi, when loading solo.h5 (in solo.py line 14: "self.model.load_weights(weights)"), it reports an error: Process finished with exit code -1073741819 (0xC0000005). Environment: Keras 2.2.5 + TensorFlow 1.14.0 + CUDA 10.0.

    opened by MC-E 1
Releases (v2.0)