Efficient face emotion recognition in photos and videos

Overview

This repository contains code for face emotion recognition developed in the RSF (Russian Science Foundation) project no. 20-71-10010 (Efficient audiovisual analysis of dynamical changes in emotional state based on information-theoretic approach).

Our approach is described in the arXiv paper published at IEEE SISY 2021. An extended version of this paper is under consideration at an international journal.

All the models were pre-trained on the face identification task using the VGGFace2 dataset. In order to train the PyTorch models, the SAM code was borrowed.

We provide several models that achieve state-of-the-art results on the AffectNet dataset. The facial features extracted by these models also lead to state-of-the-art accuracy among face-only models on video datasets from the EmotiW 2019 and 2020 challenges: AFEW (Acted Facial Expression In The Wild), VGAF (Video-level Group AFfect) and EngageWild.

Here are the accuracies measured on the test sets of the above-mentioned datasets:

| Model | AffectNet (8 classes), original | AffectNet (8 classes), aligned | AffectNet (7 classes), original | AffectNet (7 classes), aligned | AFEW | VGAF |
|---|---|---|---|---|---|---|
| mobilenet_7.h5 | - | - | 64.71 | - | 55.35 | 68.92 |
| enet_b0_8_best_afew.pt | 60.95 | 60.18 | 64.63 | 64.54 | 59.89 | 66.80 |
| enet_b0_8_best_vgaf.pt | 61.32 | 61.03 | 64.57 | 64.89 | 55.14 | 68.29 |
| enet_b0_7.pt | - | - | 65.74 | 65.74 | 56.99 | 65.18 |
| enet_b2_8.pt | 63.025 | 62.40 | 66.29 | - | 57.78 | 70.23 |
| enet_b2_7.pt | - | - | 65.91 | 66.34 | 59.63 | 69.84 |

Please note that we report the accuracies for AFEW and VGAF only on the subsets in which MTCNN detects facial regions. The code also computes the overall accuracy on the complete test set, which is slightly lower due to missing faces or failed face detection.
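To make the relation between the two scores explicit, here is a minimal sketch (the function and variable names are illustrative, not taken from the repository):

    def subset_and_overall_accuracy(num_correct, num_with_faces, total_videos):
        # Videos where MTCNN finds no face count as errors in the overall
        # score, so the overall accuracy is slightly lower than the subset one.
        subset_acc = num_correct / num_with_faces
        overall_acc = num_correct / total_videos
        return subset_acc, overall_acc

    # Illustrative numbers only, not from the paper:
    print(subset_and_overall_accuracy(num_correct=600, num_with_faces=900, total_videos=950))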

In order to run our code on the datasets, please first prepare them using our TensorFlow notebooks: train_emotions.ipynb, AFEW_train.ipynb and VGAF_train.ipynb.

If you want to run our mobile application, please run the following scripts inside the mobile_app folder:

python to_tflite.py
python to_pytorchlite.py

Please note that the EfficientNet models for PyTorch are based on the old timm 0.4.5 package, so exactly this version should be installed with the following command:

pip install timm==0.4.5
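For a quick sanity check, here is a minimal inference sketch. It is not the official demo: it assumes a pre-cropped RGB face image, one of the released checkpoints downloaded locally, and the standard ImageNet normalization used by timm EfficientNets (verify against the notebooks):

    import torch
    from PIL import Image
    from torchvision import transforms

    # Assumes timm==0.4.5 is installed; the checkpoints are full serialized
    # models, so torch.load restores the network directly.
    model = torch.load('enet_b0_8_best_afew.pt', map_location='cpu')
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        # ImageNet statistics; assumed to match the training setup
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    face = Image.open('cropped_face.jpg').convert('RGB')  # hypothetical pre-cropped face
    with torch.no_grad():
        scores = model(preprocess(face).unsqueeze(0))
    print(scores.argmax(dim=1))  # index of the predicted emotion class
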
Comments
  • Can you share your Manually_Annotated_file csv files?

    I tested on the AffectNet validation data but got 0.5965 using enet_b2_8.pt. Can you share the Manually_Annotated_file validation.csv and training.csv with me for debugging?

    opened by Dian-Yi 10
  • AffectNet march2021 version training script update

    As mentioned in #14, there are different versions of AffectNet. I updated the PyTorch training script for the AffectNet march2021 version. Two notes:

    • I used horizontal flip for training augmentation (a sketch follows this list),
    • and the emotion order in the logits differs.
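
    A minimal sketch of the flip augmentation mentioned above, using torchvision; only RandomHorizontalFlip is taken from the note, the remaining transforms are illustrative assumptions:

        from torchvision import transforms

        # Only RandomHorizontalFlip is confirmed above; resize and tensor
        # conversion are common defaults added for a runnable example.
        train_transforms = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.RandomHorizontalFlip(p=0.5),
            transforms.ToTensor(),
        ])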
    opened by sunggukcha 6
  • Confidence range for inference using python library

    Hi,

    First of all, thank you so much for such a convenient setup to use!

    I'm using the Python face emotion library in my code with model_name = 'enet_b0_8_best_afew'. I was wondering what the range of the confidence returned by the library, or by this model in particular, is. I wasn't able to figure that out.

    Thank you
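
    (A hedged note rather than an authoritative answer: if the returned scores are softmax outputs, each lies in (0, 1) and they sum to 1; if they are raw logits, they are unbounded and a softmax can be applied, as sketched below. Whether the library normalizes internally should be verified against its source.)

        import numpy as np

        def softmax(logits):
            # Numerically stable softmax: maps unbounded logits to
            # probability-like confidences in (0, 1) that sum to 1.
            e = np.exp(logits - np.max(logits))
            return e / e.sum()

        print(softmax(np.array([2.0, 1.0, 0.1])))  # -> approx. [0.66, 0.24, 0.10]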

    opened by varunsingh3000 4
  • Preprocessing of images to run inference

    Hello, thank you very much for your work.

    I am trying to preprocess a batch of images (I have my own dataset) the way you prepared your data. I'm following the notebook train_emotions.ipynb, as it is in TensorFlow and I'm using that framework.

    I have a question about the preprocessing steps, so I would like to ask if you can tell me the correct ones. These are the steps I'm following; let me know if I'm right or if something is missing:

    1. I already have my images with the faces detected and cropped, i.e., I have a dataset full of face crops.

    2. img = cv2.imread(img_path)

    3. img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    4. img = cv2.resize(img,(224,224))

    5. Then your notebook applies a normalization:

           def mobilenet_preprocess_input(x, **kwargs):
               x[..., 0] -= 103.939
               x[..., 1] -= 116.779
               x[..., 2] -= 123.68
               return x

           preprocessing_function = mobilenet_preprocess_input

    Here I run into an issue because the in-place subtraction cannot cast the float result back into an integer array, so I changed it to

        def mobilenet_preprocess_input(x, **kwargs):
            x[..., 0] = x[..., 0] - 103.939
            x[..., 1] = x[..., 1] - 116.779
            x[..., 2] = x[..., 2] - 123.68
            return x

        preprocessing_function = mobilenet_preprocess_input

    So, let me know if the process I'm following is correct or if there's something missing.

    Thank you!
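
    A minimal consolidated sketch of the steps above (the float32 cast is an added assumption that sidesteps the integer/float issue; the mean values should be verified against the notebook):

        import cv2
        import numpy as np

        def preprocess_face(img_path):
            # Steps 2-5 above: read BGR, convert to RGB, resize, mean-subtract.
            img = cv2.imread(img_path)
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            img = cv2.resize(img, (224, 224)).astype(np.float32)  # cast before float subtraction
            img[..., 0] -= 103.939
            img[..., 1] -= 116.779
            img[..., 2] -= 123.68
            return img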

    opened by isa-tr 4
  • AttributeError: 'SqueezeExcite' object has no attribute 'gate'

    Excuse me, this problem occurs when using the 'enet_b2_7.pt' model for testing. I followed the steps you gave, but I really couldn't find the cause of this problem. Do you have any suggestions?

    opened by evercy 4
  • Age gender ethnicity model giving same output for different inputs

    import numpy as np
    import tensorflow as tf

    # Frozen-graph code uses the TF1 API, so eager execution must be disabled in TF2.
    tf.compat.v1.disable_eager_execution()

    class CNN(object):

        def __init__(self, model_filepath):
            self.model_filepath = model_filepath
            self.load_graph(model_filepath=self.model_filepath)

        def load_graph(self, model_filepath):
            print('Loading model...')
            self.graph = tf.Graph()
            self.sess = tf.compat.v1.InteractiveSession(graph=self.graph)

            with tf.compat.v1.gfile.GFile(model_filepath, 'rb') as f:
                graph_def = tf.compat.v1.GraphDef()
                graph_def.ParseFromString(f.read())

            print('Check out the input placeholders:')
            nodes = [n.name + ' => ' + n.op for n in graph_def.node if n.op == 'Placeholder']
            for node in nodes:
                print(node)

            # Define the input tensor
            self.input = tf.compat.v1.placeholder(tf.float32, shape=[None, 224, 224, 3], name='input')

            tf.import_graph_def(graph_def, {'input_1': self.input})
            print('Model loading complete!')

            # Print layer names
            layers = [op.name for op in self.graph.get_operations()]
            for layer in layers:
                print(layer)

        def test(self, data):
            # Output node names must match the frozen graph
            output_tensor1 = self.graph.get_tensor_by_name('import/age_pred/Softmax:0')
            output_tensor2 = self.graph.get_tensor_by_name('import/gender_pred/Sigmoid:0')
            output_tensor3 = self.graph.get_tensor_by_name('import/ethnicity_pred/Softmax:0')
            output = self.sess.run([output_tensor1, output_tensor2, output_tensor3],
                                   feed_dict={self.input: data})
            return output
    

    Using this code I load "age_gender_ethnicity_224_deep-03-0.13-0.97-0.88.pb" and predict with it. But when predicting on images, I get the same output array every time.

    [array([[0.01319346, 0.00229602, 0.00176407, 0.00270929, 0.01408699, 0.00574261, 0.00756087, 0.01012164, 0.01221055, 0.01821703, 0.01120028, 0.00936489, 0.01003029, 0.00912451, 0.00813381, 0.00894791, 0.01277262, 0.01034999, 0.01053109, 0.0133063 , 0.01423471, 0.01610439, 0.01528896, 0.01825454, 0.01722076, 0.01933933, 0.01908059, 0.01899827, 0.01919533, 0.0278129 , 0.02204996, 0.02146631, 0.02125309, 0.02146868, 0.02230236, 0.02054285, 0.02096066, 0.01976574, 0.01990371, 0.02064857, 0.01843528, 0.01697922, 0.01610838, 0.01458549, 0.01581902, 0.01377539, 0.01298613, 0.01378927, 0.01191105, 0.01335083, 0.01154454, 0.01118198, 0.01019558, 0.01038121, 0.00920709, 0.00902615, 0.00936321, 0.00969135, 0.00867239, 0.00838663, 0.00797724, 0.00756043, 0.00890809, 0.00758041, 0.00743711, 0.00584346, 0.00555749, 0.00639214, 0.0061864 , 0.00784793, 0.00532241, 0.00567684, 0.00481544, 0.0052173 , 0.00513186, 0.00394571, 0.00415856, 0.00384584, 0.00452774, 0.0041736 , 0.00328163, 0.00327138, 0.00297012, 0.00369216, 0.00284221, 0.00255897, 0.00285459, 0.00232105, 0.00228869, 0.00218005, 0.0021927 , 0.00236659, 0.00233843, 0.00204793, 0.00209861, 0.00231407, 0.00145706, 0.00179674, 0.00186183, 0.00221309]], dtype=float32), array([[0.62949586]], dtype=float32), array([[0.21338916, 0.19771543, 0.19809113, 0.19525865, 0.19554558]], dtype=float32)]

    Is there something I am missing, or is this .pb file not meant for prediction?
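
    A hedged observation, not a confirmed diagnosis: near-uniform, input-independent outputs often indicate preprocessing that does not match training. If this .pb expects the same per-channel mean subtraction as the repository notebooks (an assumption), feeding raw uint8 pixels could explain the behavior. A usage sketch with that preprocessing:

        import cv2
        import numpy as np

        # Hypothetical usage of the CNN class above; the mean subtraction
        # mirrors mobilenet_preprocess_input from the notebooks and is an
        # assumption for this particular .pb file.
        img = cv2.imread('face.jpg')
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = cv2.resize(img, (224, 224)).astype(np.float32)
        img[..., 0] -= 103.939
        img[..., 1] -= 116.779
        img[..., 2] -= 123.68

        cnn = CNN('age_gender_ethnicity_224_deep-03-0.13-0.97-0.88.pb')
        age_probs, gender_prob, ethnicity_probs = cnn.test(img[np.newaxis])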

    opened by sneakatyou 4
  • Provide the validation script/notebook.

    Hi,

    I am fond of your work and paper, but I cannot find any validation script to reproduce your results, especially the highest result with EfficientNet-B2 (8 classes) on AffectNet.

    Or could you please provide a separate script to pre-process the input images, so that we can validate the provided weights from your GitHub repository?

    Thank you,

    opened by ltkhang 4
  • A few suggestions.

    Hello!

    I have a couple of ideas:

    1. Could you please add a text description of the differences between the models, especially between the b0 and b2 general types?
    2. Please consider adding hsemotion-onnx package to the pip repository.
    opened by ioctl-user 3
  • Can not load pretrained models

     File "/Users/xxx/Library/Python/3.8/lib/python/site-packages/timm/models/efficientnet_blocks.py", line 47, in forward
        return x * self.gate(x_se)
      File "/Users/xxx/Library/Python/3.8/lib/python/site-packages/torch/nn/modules/module.py", line 947, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'SqueezeExcite' object has no attribute 'gate'
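
    This traceback is consistent with the timm version note in the README above: the checkpoints were serialized with timm 0.4.5, whose SqueezeExcite block stores its activation differently from later releases, so pinning that version is the likely fix:

        pip install timm==0.4.5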
    
    opened by DefTruth 3
  • An error when running the code.

    When running AFEW_train.ipynb, an error occurred:

        could not broadcast input array from shape (0,112,3) into shape (60,112,3)

    at facial_analysis.py line 274:

        tmp[dy[k]-1:edy[k],dx[k]-1:edx[k],:] = img[y[k]-1:ey[k],x[k]-1:ex[k],:]

    Why does this occur? Could you please fix it?
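
    For context, a minimal standalone reproduction of the mechanism behind this error (illustrative numpy only, not the repository code): the source slice becomes empty when the clipped bounding box has zero height, which suggests guarding against zero-size crops before the copy.

        import numpy as np

        img = np.zeros((112, 112, 3), dtype=np.uint8)
        crop = img[50:50, 0:112, :]        # empty slice: start == stop
        print(crop.shape)                  # (0, 112, 3), as in the error message

        tmp = np.zeros((60, 112, 3), dtype=np.uint8)
        # tmp[0:60, 0:112, :] = crop       # would raise: could not broadcast (0,112,3) into (60,112,3)
        if crop.shape[0] > 0 and crop.shape[1] > 0:
            tmp[:crop.shape[0], :crop.shape[1], :] = crop  # guard avoids the error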

    opened by kiva12138 3
  • Valence and arousal

    Hello again! I've read your paper and I've seen that you use the circumplex model's variables, arousal and valence. How do those variables appear in the code? I can't find them :( Thank you, Amaia

    opened by AmaiaBiomedicalEngineer 2
  • Question about this work.

    Dear Andrey Savchenko,

    I'm a student and I am going to build a small system to detect students' emotions for my thesis. While looking for a solution, I found your work. But I can't run https://github.com/HSE-asavchenko/face-emotion-recognition/blob/main/src/affectnet/train_emotions.ipynb with the current version of the AffectNet dataset. Please correct me if I'm wrong. My question is: can I run https://github.com/HSE-asavchenko/face-emotion-recognition/blob/main/src/affectnet/train_affectnet_march2021_pytorch.ipynb with MobileNet? I intend to build a small application that detects emotions on the client side and then sends the results to a server.

    Many thanks,

    Son Nguyen.

    opened by sonnguyen1996 2
Releases: v0.2.1

Owner: Andrey Savchenko