Ivy is a templated deep learning framework which maximizes the portability of deep learning codebases.

Overview



The templated deep learning framework, enabling framework-agnostic functions, layers and libraries.

Contents

Overview

What is Ivy?

Ivy is a templated deep learning framework which maximizes the portability of deep learning codebases. Ivy wraps the functional APIs of existing frameworks. Framework-agnostic functions, libraries and layers can then be written using Ivy, with simultaneous support for all frameworks. Ivy currently supports Jax, TensorFlow, PyTorch, MXNet and Numpy. Check out the docs for more info!

Ivy Libraries

There are a host of derived libraries written in Ivy, in the areas of mechanics, 3D vision, robotics, differentiable memory, and differentiable gym environments. Click on the icons below for their respective GitHub pages.


Quick Start

Ivy can be installed like so: pip install ivy-core

To get started, you can immediately use Ivy with your deep learning framework of choice. In the example below, we show how Ivy's concatenation function is compatible with tensors from different frameworks.

import jax.numpy as jnp
import tensorflow as tf
import numpy as np
import mxnet as mx
import torch

import ivy

# the same ivy.concatenate call works on tensors from each supported framework
jax_concatted = ivy.concatenate((jnp.ones((1,)), jnp.ones((1,))), -1)
tf_concatted = ivy.concatenate((tf.ones((1,)), tf.ones((1,))), -1)
np_concatted = ivy.concatenate((np.ones((1,)), np.ones((1,))), -1)
mx_concatted = ivy.concatenate((mx.nd.ones((1,)), mx.nd.ones((1,))), -1)
torch_concatted = ivy.concatenate((torch.ones((1,)), torch.ones((1,))), -1)

To see a list of all Ivy methods, type ivy. into a Python command prompt and press tab. You should then see output like the following:

(Image: docs/partial_source/images/ivy_tab.png, showing the tab-completion listing of Ivy methods)
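
A similar listing can also be obtained programmatically; for example, this standard-library one-liner prints every public name that Ivy exposes:

import ivy

# list all public top-level attributes of the ivy package
print([name for name in dir(ivy) if not name.startswith("_")])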

Based on this short code sample alone, you may wonder: why is this helpful? Don't most developers stick to just one framework for a project? This is indeed the case, and the benefit of Ivy is not the ability to combine different frameworks in a single project.

So what is the benefit of Ivy?

In a Nutshell

Ivy's strength arises when we want to maximize the usability of our code.

We can write a set of functions once in Ivy, and share these with the community so that all developers can use them, irrespective of their personal choice of framework. TensorFlow? PyTorch? Jax? With Ivy functions it doesn't matter!

This makes it very simple to create highly portable deep learning codebases. The core idea behind Ivy is captured by the example of the ivy.clip function below.
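
A minimal sketch of that idea, assuming ivy.clip accepts the array followed by the minimum and maximum values, mirroring the clip functions of the backend frameworks:

import tensorflow as tf
import torch

import ivy

# the same framework-agnostic call, applied to tensors from two different frameworks
tf_clipped = ivy.clip(tf.constant([-1., 0.5, 2.]), 0., 1.)
torch_clipped = ivy.clip(torch.tensor([-1., 0.5, 2.]), 0., 1.)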

On its own, this may not seem very exciting; there are more interesting things to do in deep learning than clip tensors. Ivy is a building block for these more interesting applications.

For example, the Ivy libraries for mechanics, 3D vision, robotics, and differentiable environments are all written in pure Ivy. These libraries provide fully differentiable implementations of various applied functions, primed for integration in end-to-end networks, for users of any deep-learning framework.

Another benefit of Ivy is user flexibility. Because the Ivy abstraction is lightweight and fully functional, you remain in full control of your code. The schematic below emphasizes that you can choose to develop at any abstraction level.

You can code entirely in Ivy, or mainly in your native DL framework with a small amount of Ivy code. This is entirely up to you, depending on how many Ivy functions you need from existing Ivy libraries, and how much new Ivy code you add to your own project to maximize its audience when sharing online.
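
As a rough illustration of the latter, a mostly native PyTorch script might make just one Ivy call where a shared framework-agnostic utility is needed. The mix below is hypothetical, reusing ivy.concatenate from the quick start and assuming the result remains a native torch tensor:

import torch

import ivy

x = torch.randn(4, 3)
y = torch.randn(4, 3)

# a single Ivy call inside otherwise native PyTorch code
z = ivy.concatenate((x, y), -1)

# continue with native PyTorch operations on the result
z = torch.relu(z)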

Where Next?

So, now that you've got the gist of Ivy and why it's useful, where to next?

This depends on whether you see yourself in the short term as more likely to be an Ivy library user or an Ivy library contributor.

If you would like to use the existing set of Ivy libraries, dragging and dropping key functions into your own project, then we suggest you dive into some of the demos for the various Ivy libraries currently on offer. Simply open up the main docs, then open the library-specific docs linked on the bottom left, and check out the demos folder in the library repo.

On the other hand, if you have your own new library in mind, or if you would like to implement parts of your own project in Ivy to maximize its portability, then we recommend checking out the page Writing Ivy in the docs. Here, we dive a bit deeper into the Ivy framework, and the best coding practices to get the most out of Ivy for your own codebases and libraries.

Citation

@article{lenton2021ivy,
  title={Ivy: Templated Deep Learning for Inter-Framework Portability},
  author={Lenton, Daniel and Pardo, Fabio and Falck, Fabian and James, Stephen and Clark, Ronald},
  journal={arXiv preprint arXiv:2102.02886},
  year={2021}
}
Comments
  • Create numpy diagonal

    diagonal #6616. Kindly mark a green circle on it so there will be no conflict in the future; I have already experienced that. https://github.com/unifyai/ivy/issues/6616.

    TensorFlow Frontend NumPy Frontend Array API Ivy Functional API 
    opened by hrak99 59
  • Add Statistical functions mean numpy frontend #2546

    Greetings, I think I did everything: the frontend, the tests, and the changes to the init files. I implemented the mean function according to the NumPy documentation, and I am waiting for your reply. Best regards.

    opened by Emperor-WS 26
  • Isin extension

    #5716

    Added most backend implementations. The only remaining problem is with TensorFlow, which I'm still trying to solve since it doesn't have an isin function; once I'm able to do that I will add tests.

    Array API Function Reformatting Ivy Functional API Ivy API Experimental 
    opened by pillarxyz 20
  • reformat shape_to_tuple

    Hi, I've got a question on testing. I was getting errors, so I checked the logs and found out that some of those tests aren't ready yet (e.g. shape_to_tuple). Not sure if I'm right, but it would be great if you could give some information about this. Thank you.

    opened by mcandemir 19
  • feat: add is_tensor to tensorflow frontend general functions

    Close #7584. Need help with pytest; I am unable to wrap my head around the testing helpers yet.

    Essentially, when I run these tests I get the same error, despite trying various combinations of the parameters passed to test_frontend_function.

    TensorFlow Frontend 
    opened by chtnnh 18
  • argmax function: general.py

    Test Cases:

    • 42 passed for pytest ./ivy/ivy_tests/test_functional/test_core/test_general.py::test_argmax --disable-warnings -rs
    • 6 skipped for conftest.py
    • No errors

    Implemented for

    • [x] jax
    • [x] numpy
    • [x] mxnet
    • [x] tensorflow
    • [x] torch
    Array API Single Function 
    opened by 7wikd 18
  • Added PadV2 to raw_ops

    Closes https://github.com/unifyai/ivy/issues/9394. Please note that this PR is based on https://github.com/unifyai/ivy/pull/9461, as they have common functionality.

    TensorFlow Frontend 
    opened by KareemMAX 0
Releases (v1.1.9)
  • v1.1.5(Jul 26, 2021)

    Version 1.1.5.

    Added some new methods and classes, improved the ivy.Module and ivy.Container classes. ivy.Container now overrides more built-in methods, and has more flexible nested methods such as gather_nd, repeat, stop_gradients etc.

    This version was tested against: JAX 0.2.17, JAXLib 0.1.69, TensorFlow 2.5.0, TensorFlow Addons 0.13.0, TensorFlow Probability 0.13.0, PyTorch 1.9.0, MXNet 1.8.0, and NumPy 1.19.5.

    However, Ivy 1.1.5 likely supports many previous and future backend versions, due to the stability of the core APIs for each backend framework.

  • v1.1.4(Apr 12, 2021)

    Version 1.1.4.

    Added some new methods, fixed some small bugs, improved unit testing, and tested against the latest backend versions.

    This version was tested against: JAX 0.2.12, TensorFlow 2.4.1, PyTorch 1.8.1, MXNet 1.8.0, and NumPy 1.20.2.

    However, Ivy 1.1.4 likely supports many previous and future backend versions, due to the stability of the core APIs for each backend framework.

  • v1.1.3(Mar 19, 2021)

    Version 1.1.3.

    Added some new methods, fixed some small bugs, improved unit testing, and tested against the latest backend versions.

    This version was tested against: JAX 0.2.10, TensorFlow 2.4.1, PyTorch 1.8.0, MXNet 1.7.0, and NumPy 1.19.5.

    However, Ivy 1.1.3 likely supports many previous and future backend versions, due to the stability of the core APIs for each backend framework.

  • v1.1.2(Feb 27, 2021)

    Version 1.1.2.

    Added adam update, changed gradient methods to operate on gradient dicts instead of lists, and added a new container chain method, among other small changes.

    This version was tested against: JAX 0.2.9, TensorFlow 2.4.1, PyTorch 1.7.1, MXNet 1.7.0, and NumPy 1.19.5.

    However, Ivy 1.1.2 likely supports many previous and future backend versions, due to the stability of the core APIs for each backend framework.

  • v1.1.1(Feb 10, 2021)

Owner
Ivy
The Templated Deep Learning Framework