As part of the HAKE project, this repository includes the reproduced SOTA models and the corresponding HAKE-enhanced versions (CVPR 2020).

Overview

HAKE-Action

HAKE-Action (TensorFlow) is a project that open-sources SOTA action understanding studies based on our Human Activity Knowledge Engine (HAKE). It includes reproduced SOTA models and their HAKE-enhanced versions. HAKE-Action is authored by Yong-Lu Li, Xinpeng Liu, Liang Xu, and Cewu Lu, and is currently maintained by Yong-Lu Li, Xinpeng Liu, and Liang Xu.

News: (2021.10.06) Our extended version of SymNet is accepted by TPAMI! Paper and code are coming soon.

(2021.2.7) Upgraded HAKE-Activity2Vec is released! Images/Videos --> human box + ID + skeleton + part states + action + representation. [Description]

Full demo: [YouTube], [bilibili]

(2021.1.15) Our extended version of TIN (Transferable Interactiveness Network) is accepted by TPAMI! New paper and code will be released soon.

(2020.10.27) The code of IDN (Paper) in NeurIPS'20 is released!

(2020.6.16) Our larger version, HAKE-Large (>120K images with activity and part state labels), is released!

We released the HAKE-HICO (image-level part state labels upon HICO) and HAKE-HICO-DET (instance-level part state labels upon HICO-DET). The corresponding data can be found here: HAKE Data.

  • Paper is here.
  • More data and part states (e.g., upon AVA, more kinds of action categories, more rare actions...) are coming.
  • We will keep updating HAKE-Action to include more SOTA models and their HAKE-enhanced versions.

Data Mode

  • HAKE-HICO (PaStaNet* mode in the paper): image-level. It adds the aggregation of all part states in an image (belonging to one or multiple active persons). Compared with the original HICO, the only additional labels are image-level human body part states.

  • HAKE-HICO-DET (PaStaNet* in the paper): instance-level. It adds part states for each annotated person in all images of HICO-DET; the only additional labels are instance-level human body part states. An illustrative annotation record is sketched after this list.

  • HAKE-Large (PaStaNet in the paper): contains more than 120K images with action labels and the corresponding part state labels. The images come from existing action datasets and crowdsourcing. We manually annotated all the active persons with our novel part-level semantics.

  • GT-HAKE (GT-PaStaNet* in the paper): GT-HAKE-HICO and GT-HAKE-HICO-DET. In this mode we use the ground-truth part state labels as the part state predictions, i.e., we assume the body part states of each person are estimated perfectly and use them to infer the instance activities. This mode can be seen as the upper bound of our HAKE-Action. The results below show that this upper bound is far beyond the SOTA performance, so besides current work on conventional instance-level methods, continuing to promote part-level methods based on HAKE would be a very promising direction.
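
For intuition, an instance-level part state annotation could be pictured as the record below. This is only an illustrative sketch: the field names and values are hypothetical, and the real file format is documented in the HAKE Data release.

```python
# Illustrative only: a hypothetical instance-level annotation record.
# Field names and values are made up for this sketch; consult the
# HAKE Data release for the actual format.
annotation = {
    "image_id": "HICO_train2015_00000001",
    "human_bbox": [48, 32, 210, 380],   # [x1, y1, x2, y2]
    "activity": "ride_bicycle",         # instance-level action label
    "part_states": {                    # PaSta: body part state labels
        "head": "look_at",
        "hands": "hold",
        "hip": "sit_on",
        "feet": "step_on",
    },
}
```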

Terminology

Activity2Vec and PaSta-R are our part-state-based modules, which perform action inference based on part semantics rather than the instance semantics used in previous work. For example, Pairwise + HAKE-HICO pre-trained Activity2Vec + Linear PaSta-R (the seventh row of the HICO table below) achieves 45.9 mAP on HICO. More details can be found in our CVPR 2020 paper: PaStaNet: Toward Human Activity Knowledge Engine.
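
To make the part-based pipeline concrete, here is a minimal sketch of the idea (not the repository's actual code): an Activity2Vec-style module predicts body part states plus a representation, and a linear PaSta-R head reasons from that representation to actions. All names, layer sizes, and the part-state count below are illustrative assumptions.

```python
import tensorflow as tf

NUM_PART_STATES = 76  # illustrative PaSta class count (assumption)
NUM_ACTIONS = 600     # HICO has 600 human-object interaction categories

class Activity2VecSketch(tf.keras.Model):
    """Toy stand-in for Activity2Vec: person feature ->
    part-state probabilities + activity representation."""
    def __init__(self):
        super().__init__()
        self.backbone = tf.keras.layers.Dense(512, activation="relu")
        self.pasta_head = tf.keras.layers.Dense(NUM_PART_STATES,
                                                activation="sigmoid")

    def call(self, person_feature):
        h = self.backbone(person_feature)
        pasta_probs = self.pasta_head(h)              # body part states
        representation = tf.concat([h, pasta_probs], axis=-1)
        return pasta_probs, representation

class LinearPaStaR(tf.keras.Model):
    """Toy linear PaSta-R: part-level representation -> action logits."""
    def __init__(self, num_actions=NUM_ACTIONS):
        super().__init__()
        self.fc = tf.keras.layers.Dense(num_actions)

    def call(self, representation):
        return self.fc(representation)

a2v, pasta_r = Activity2VecSketch(), LinearPaStaR()
pasta_probs, rep = a2v(tf.random.normal([2, 1024]))  # dummy person features
action_logits = pasta_r(rep)
```

In GT-HAKE mode, `pasta_probs` would simply be replaced by the ground-truth part state labels before building the representation, which is what makes that mode an upper bound.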

Code

The two versions of HAKE-Action (image-level and instance-level) are released in two branches of this repo.

Models on HICO

| Instance-level | +Activity2Vec | +PaSta-R | mAP | Few@1 | Few@5 | Few@10 |
| --- | --- | --- | --- | --- | --- | --- |
| R*CNN | - | - | 28.5 | - | - | - |
| Girdhar et al. | - | - | 34.6 | - | - | - |
| Mallya et al. | - | - | 36.1 | - | - | - |
| Pairwise | - | - | 39.9 | 13.0 | 19.8 | 22.3 |
| - | HAKE-HICO | Linear | 44.5 | 26.9 | 30.0 | 30.7 |
| Mallya et al. | HAKE-HICO | Linear | 45.0 | 26.5 | 29.1 | 30.3 |
| Pairwise | HAKE-HICO | Linear | 45.9 | 26.2 | 30.6 | 31.8 |
| Pairwise | HAKE-HICO | MLP | 45.6 | 26.0 | 30.8 | 31.9 |
| Pairwise | HAKE-HICO | GCN | 45.6 | 25.2 | 30.0 | 31.4 |
| Pairwise | HAKE-HICO | Seq | 45.9 | 25.3 | 30.2 | 31.6 |
| Pairwise | HAKE-HICO | Tree | 45.8 | 24.9 | 30.3 | 31.8 |
| Pairwise | HAKE-Large | Linear | 46.3 | 24.7 | 31.8 | 33.1 |
| Pairwise | GT-HAKE-HICO | Linear | 65.6 | 47.5 | 55.4 | 56.6 |

Models on HICO-DET

Using Object Detections from iCAN

| Instance-level | +Activity2Vec | +PaSta-R | Full(def) | Rare(def) | Non-Rare(def) | Full(ko) | Rare(ko) | Non-Rare(ko) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| iCAN | - | - | 14.84 | 10.45 | 16.15 | 16.26 | 11.33 | 17.73 |
| TIN | - | - | 17.03 | 13.42 | 18.11 | 19.17 | 15.51 | 20.26 |
| iCAN | HAKE-HICO-DET | Linear | 19.61 | 17.29 | 20.30 | 22.10 | 20.46 | 22.59 |
| TIN | HAKE-HICO-DET | Linear | 22.12 | 20.19 | 22.69 | 24.06 | 22.19 | 24.62 |
| TIN | HAKE-Large | Linear | 22.65 | 21.17 | 23.09 | 24.53 | 23.00 | 24.99 |
| TIN | GT-HAKE-HICO-DET | Linear | 34.86 | 42.83 | 32.48 | 35.59 | 42.94 | 33.40 |

Models on AVA (Frame-based)

| Method | +Activity2Vec | +PaSta-R | mAP |
| --- | --- | --- | --- |
| AVA-TF-Baseline | - | - | 11.4 |
| LFB-Res-50-baseline | - | - | 22.2 |
| LFB-Res-101-baseline | - | - | 23.3 |
| AVA-TF-Baseline | HAKE-Large | Linear | 15.6 |
| LFB-Res-50-baseline | HAKE-Large | Linear | 23.4 |
| LFB-Res-101-baseline | HAKE-Large | Linear | 24.3 |

Models on V-COCO

| Method | +Activity2Vec | +PaSta-R | AP(role), Scenario 1 | AP(role), Scenario 2 |
| --- | --- | --- | --- | --- |
| iCAN | - | - | 45.3 | 52.4 |
| TIN | - | - | 47.8 | 54.2 |
| iCAN | HAKE-Large | Linear | 49.2 | 55.6 |
| TIN | HAKE-Large | Linear | 51.0 | 57.5 |

Training Details

We first pre-train Activity2Vec and PaSta-R with activity and PaSta labels. Then we replace the last FC layer in PaSta-R to fit the activity categories of the target dataset. Finally, we freeze Activity2Vec and fine-tune PaSta-R on the train set of the target dataset. Here, HAKE plays a role analogous to ImageNet: Activity2Vec is used as a pre-trained knowledge engine to promote other tasks.
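
This transfer recipe can be pictured with the toy modules from the earlier sketch. Again, this is a hedged illustration with hypothetical names (`a2v`, `pasta_r`, `NUM_TARGET_ACTIONS`), not the repository's actual training script:

```python
import tensorflow as tf

# Assume `a2v` and `pasta_r` were pre-trained on HAKE with activity and
# PaSta labels, as in the earlier sketch (names are illustrative).
NUM_TARGET_ACTIONS = 80  # hypothetical category count of the target dataset

# 1. Replace the last FC of PaSta-R to fit the target dataset's categories.
pasta_r.fc = tf.keras.layers.Dense(NUM_TARGET_ACTIONS)

# 2. Freeze Activity2Vec so it serves as a fixed, pre-trained knowledge engine.
a2v.trainable = False

# 3. Fine-tune only PaSta-R on the target dataset's train set.
optimizer = tf.keras.optimizers.Adam(1e-4)
loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)

@tf.function
def train_step(person_feature, target_labels):
    _, rep = a2v(person_feature, training=False)   # frozen knowledge engine
    with tf.GradientTape() as tape:
        logits = pasta_r(rep)
        loss = loss_fn(target_labels, logits)
    grads = tape.gradient(loss, pasta_r.trainable_variables)
    optimizer.apply_gradients(zip(grads, pasta_r.trainable_variables))
    return loss
```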

Citation

If you find our work useful, please consider citing:

@inproceedings{li2020pastanet,
  title={PaStaNet: Toward Human Activity Knowledge Engine},
  author={Li, Yong-Lu and Xu, Liang and Liu, Xinpeng and Huang, Xijie and Xu, Yue and Wang, Shiyi and Fang, Hao-Shu and Ma, Ze and Chen, Mingyang and Lu, Cewu},
  booktitle={CVPR},
  year={2020}
}
@inproceedings{li2019transferable,
  title={Transferable Interactiveness Knowledge for Human-Object Interaction Detection},
  author={Li, Yong-Lu and Zhou, Siyuan and Huang, Xijie and Xu, Liang and Ma, Ze and Fang, Hao-Shu and Wang, Yanfeng and Lu, Cewu},
  booktitle={CVPR},
  year={2019}
}
@inproceedings{lu2018beyond,
  title={Beyond holistic object recognition: Enriching image understanding with part states},
  author={Lu, Cewu and Su, Hao and Li, Yonglu and Lu, Yongyi and Yi, Li and Tang, Chi-Keung and Guibas, Leonidas J},
  booktitle={CVPR},
  year={2018}
}

HAKE

HAKE [website] is a new large-scale knowledge base and engine for human activity understanding. HAKE provides elaborate and abundant body part state labels for active human instances in a large number of images and videos. With HAKE, we boost action understanding performance on widely used human activity benchmarks. We are still enlarging and enriching it, and we look forward to working with outstanding researchers around the world on its applications and further improvements. If you have any advice or interest, please feel free to contact Yong-Lu Li ([email protected]).

If you run into any problems or find any bugs, don't hesitate to comment on GitHub or make a pull request!

HAKE-Action is freely available for non-commercial use and may be redistributed under these conditions. For commercial queries, please drop us an e-mail and we will send you the detailed agreement.
