Experimental Python implementation of the OpenVINO Inference Engine (very slow, limited functionality). All code is written in Python, so it is easy to read and modify.

Overview

PyOpenVINO - An Experimental Python Implementation of OpenVINO Inference Engine (minimum-set)


Description

PyOpenVINO is a spin-off from my deep learning algorithm study work. This project aims at neither practical performance nor rich functionality. PyOpenVINO can load an OpenVINO IR model (.xml/.bin) and run it. The implementation is quite straightforward and naive; no optimization techniques are used, so the code is easy to read and modify. The supported API is quite limited, but it mimics the OpenVINO IE Python API, so you can easily read and modify the sample code too.

  • Developed as a spin-off from my deep learning study work.
  • Very slow, with limited functionality. Not a general-purpose DL inference engine.
  • Naive and straightforward code: (I hope) a good reference for learning deep learning technology.
  • Extensible ops: ops are implemented as plugins. You can easily add your own ops as needed.
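
The snippet below is a minimal usage sketch of the IE-style API mentioned above. The import path, class and method names, and IR file names are assumptions made for illustration; the sample program test_pyopenvino.py shows the actual usage, and the input name and shape come from the sample output shown later in this document.

# Minimal usage sketch, assuming an IECore-style API like the OpenVINO IE Python API it mimics.
# Import path, class/method names, and file names are assumptions; see test_pyopenvino.py.
import numpy as np
from pyopenvino.inference_engine import IECore    # assumed import path

ie = IECore()
net = ie.read_network('models/mnist.xml', 'models/mnist.bin')   # assumed IR file names
exec_net = ie.load_network(net, 'CPU')
dummy_input = np.zeros((1, 1, 28, 28), dtype=np.float32)        # input shape from the sample output
result = exec_net.infer({'conv2d_input': dummy_input})          # input name from the sample output
print(result)    # dict keyed by the Result node name, value shape (1, 10)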

How to run

Steps 1 and 2 are optional since the converted MNIST IR model is provided.

  1. (Optional) Train a model and generate a 'saved_model' with TensorFlow
python mnist-tf-training.py

The trained model data will be created under the ./mnist-savedmodel directory.

  2. (Optional) Convert TF saved_model into OpenVINO IR model
    Prerequisite: You need to have OpenVINO installed (Model Optimizer is required).
convert-model.bat

The converted IR model (.xml/.bin) will be generated in the ./models directory.
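
For reference, a typical Model Optimizer invocation for a TensorFlow saved_model looks something like the command below; convert-model.bat may use different options, and the output model name will be whatever Model Optimizer derives by default.

mo --saved_model_dir mnist-savedmodel --output_dir models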

  3. Run pyOpenVINO sample program
python test_pyopenvino.py

You'll see output like this:

pyopenvino>python test_pyopenvino.py
inputs: [{'name': 'conv2d_input', 'type': 'Parameter', 'version': 'opset1', 'data': {'element_type': 'f32', 'shape': (1, 1, 28, 28)}, 'output': {0: {'precision': 'FP32', 'dims': (1, 1, 28, 28)}}}]
outputs: [{'name': 'Func/StatefulPartitionedCall/output/_11:0', 'type': 'Result', 'version': 'opset1', 'input': {0: {'precision': 'FP32', 'dims': (1, 10)}}}]
# node_name, time (sec)
conv2d_input Parameter, 0.0
conv2d_input/scale_copy Const, 0.0
StatefulPartitionedCall/sequential/conv2d/Conv2D Convolution, 0.11315417289733887
StatefulPartitionedCall/sequential/conv2d/BiasAdd/ReadVariableOp Const, 0.0
StatefulPartitionedCall/sequential/conv2d/BiasAdd/Add Add, 0.0
StatefulPartitionedCall/sequential/conv2d/Relu ReLU, 0.0010142326354980469
StatefulPartitionedCall/sequential/max_pooling2d/MaxPool MaxPool, 0.020931482315063477
StatefulPartitionedCall/sequential/conv2d_1/Conv2D/ReadVariableOp Const, 0.0
StatefulPartitionedCall/sequential/conv2d_1/Conv2D Convolution, 0.04333162307739258
StatefulPartitionedCall/sequential/conv2d_1/BiasAdd/ReadVariableOp Const, 0.0
StatefulPartitionedCall/sequential/conv2d_1/BiasAdd/Add Add, 0.0
StatefulPartitionedCall/sequential/conv2d_1/Relu ReLU, 0.0
StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool MaxPool, 0.006029367446899414
StatefulPartitionedCall/sequential/target_conv_layer/Conv2D/ReadVariableOp Const, 0.0010688304901123047
StatefulPartitionedCall/sequential/target_conv_layer/Conv2D Convolution, 0.004073381423950195
StatefulPartitionedCall/sequential/target_conv_layer/BiasAdd/ReadVariableOp Const, 0.0
StatefulPartitionedCall/sequential/target_conv_layer/BiasAdd/Add Add, 0.0
StatefulPartitionedCall/sequential/target_conv_layer/Relu ReLU, 0.0
StatefulPartitionedCall/sequential/target_conv_layer/Relu/Transpose/value6071024 Const, 0.0
StatefulPartitionedCall/sequential/target_conv_layer/Relu/Transpose Transpose, 0.0
StatefulPartitionedCall/sequential/flatten/Const Const, 0.0
StatefulPartitionedCall/sequential/flatten/Reshape Reshape, 0.0
StatefulPartitionedCall/sequential/dense/MatMul/ReadVariableOp Const, 0.0010004043579101562
StatefulPartitionedCall/sequential/dense/MatMul MatMul, 0.0013704299926757812
StatefulPartitionedCall/sequential/dense/BiasAdd/ReadVariableOp Const, 0.0
StatefulPartitionedCall/sequential/dense/BiasAdd/Add Add, 0.0
StatefulPartitionedCall/sequential/dense/Relu ReLU, 0.0
StatefulPartitionedCall/sequential/dense_1/MatMul/ReadVariableOp Const, 0.0
StatefulPartitionedCall/sequential/dense_1/MatMul MatMul, 0.0
StatefulPartitionedCall/sequential/dense_1/BiasAdd/ReadVariableOp Const, 0.0
StatefulPartitionedCall/sequential/dense_1/BiasAdd/Add Add, 0.0
StatefulPartitionedCall/sequential/dense_1/Softmax SoftMax, 0.0009992122650146484
Func/StatefulPartitionedCall/output/_11:0 Result, 0.0
@TOTAL_TIME, 0.21120882034301758
0.21120882034301758 sec/inf
Raw result: {'Func/StatefulPartitionedCall/output/_11:0': array([[7.8985136e-07, 2.0382247e-08, 9.9999917e-01, 1.0367385e-10,
        1.0184062e-10, 1.6024957e-12, 2.0729640e-10, 1.6014919e-08,
        6.5354638e-10, 9.5946295e-14]], dtype=float32)}
Result: [2 0 1 7 8 6 3 4 5 9]
  4. Run the Draw-and-Infer demo
python draw-and-infer.py

How to Operate

  • Left click to draw points.
  • Right click to clear the canvas.
    This demo program uses the 'numpy' kernels for performance.
    (Screenshot: draw-and-infer demo)

A Little Description of the Implementation

IR model internal representation

This inference engine uses networkx.DiGraph as the internal representation of the IR model. The IR model is translated into nodes and edges.
The nodes represent the ops and hold the op attributes (e.g., strides, dilations, etc.).
The edges represent the connections between the nodes and hold the port numbers of both ends.
The intermediate outputs from the nodes (feature maps) are stored in the 'data' attribute of the node's output port (G.nodes[node_id_num]['output'][port_num]['data'] = feat_map).
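
The snippet below is an illustrative sketch (not the actual pyOpenVINO code) of that layout, using the node and edge shown in the examples that follow:

# Illustrative sketch of the internal graph layout described above.
import networkx as nx
import numpy as np

G = nx.DiGraph()
# A node holds the op attributes parsed from the IR XML (see the node example below).
G.add_node(14,
           name='StatefulPartitionedCall/sequential/target_conv_layer/Conv2D',
           type='Convolution',
           data={'strides': '1, 1', 'dilations': '1, 1'},
           output={2: {'precision': 'FP32', 'dims': (1, 64, 3, 3)}})
# An edge holds the connection as (from-layer, from-port, to-layer, to-port).
G.add_edge(0, 2, connection=(0, 0, 2, 0))
# The feature map produced by a node is stored on its output port.
feat_map = np.zeros((1, 64, 3, 3), dtype=np.float32)
G.nodes[14]['output'][2]['data'] = feat_map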

An example of the contents (attributes) of a node

node id= 14
 name : StatefulPartitionedCall/sequential/target_conv_layer/Conv2D
 type : Convolution
 version : opset1
 data :
     auto_pad : valid
     dilations : 1, 1
     pads_begin : 0, 0
     pads_end : 0, 0
     strides : 1, 1
 input :
     0 :
         precision : FP32
         dims : (1, 64, 5, 5)
     1 :
         precision : FP32
         dims : (64, 64, 3, 3)
 output :
     2 :
         precision : FP32
         dims : (1, 64, 3, 3)

An example of the contents of an edge

format = (from-layer, from-port, to-layer, to-port)

edge_id= (0, 2)
   {'connection': (0, 0, 2, 0)}

Ops plugins

Operators are implemented as plugins. You can develop an op in Python and place the file in the op_plugins directory. The pyOpenVINO inference engine searches the Python source files in the op_plugins directory at start time and registers them as op plugins.
The file name of an op plugin is treated as the op name, so it must match the layer type attribute field in the IR XML file.
The inference engine calls the compute() function of a plugin to perform the calculation. The compute() function is the only API between the inference engine and the plugin. The inference engine collects the required input data and passes it to compute(). The input data is a Python dict of the form {port_num: data[, port_num: data[, ...]]}.
The op calculates the result from the input data and returns it as a Python dict of the form {port_num: result[, port_num: result[, ...]]}.
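
For illustration, a minimal ReLU plugin could look like the sketch below. The exact compute() signature (for example, whether node attributes are passed as extra arguments) and the output port numbering are assumptions; check one of the existing files in op_plugins for the real interface.

# op_plugins/ReLU.py -- illustrative sketch; the real compute() signature may differ.
import numpy as np

def compute(inputs):
    # 'inputs' maps input port numbers to ndarrays, e.g. {0: feature_map}.
    result = np.maximum(inputs[0], 0.0)    # element-wise ReLU
    # Return a dict keyed by the output port number defined in the IR XML.
    return {1: result}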

Kernel implementation: NumPy version and Naive version

Not all, but some ops have dual kernel implementations: a naive implementation (easy to read) and a NumPy implementation (a bit faster).
The NumPy version can be 10x or more faster than the naive version.
The kernel type can be specified with the Executable_Network.kernel_type attribute. You can specify either 'naive' (default) or 'numpy'. Please refer to the sample program test_pyopenvino.py for details.
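
For example, switching kernels could look like the sketch below. How the executable network object is obtained is assumed to follow the usage sketch in the Description section; the kernel_type attribute and its values come from the text above.

exec_net = ie.load_network(net, 'CPU')    # as in the usage sketch above (assumed API)
exec_net.kernel_type = 'numpy'            # 'naive' (default) or 'numpy'
result = exec_net.infer({'conv2d_input': dummy_input})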

END

Owner
Yasunori Shimura