PyKale is a PyTorch library for multimodal learning and transfer learning as well as deep learning and dimensionality reduction on graphs, images, texts, and videos

Overview


Getting Started | Documentation | Contributing | Discussions | Changelog

PyKale is a PyTorch library for multimodal learning and transfer learning as well as deep learning and dimensionality reduction on graphs, images, texts, and videos. By adopting a unified pipeline-based API design, PyKale enforces standardization and minimalism, via reusing existing resources, reducing repetitions and redundancy, and recycling learning models across areas. PyKale aims to facilitate interdisciplinary, knowledge-aware machine learning research for graphs, images, texts, and videos in applications including bioinformatics, graph analysis, image/video recognition, and medical imaging. It focuses on leveraging knowledge from multiple sources for accurate and interpretable prediction. See a 12-minute introduction video on YouTube.

Pipeline-based core API (generic and reusable)

  • loaddata loads data from disk or online resources as input
  • prepdata preprocesses data to fit the machine learning modules below (transforms)
  • embed embeds data in a new space to learn a new representation (feature extraction/selection)
  • predict predicts a desired output
  • evaluate evaluates the performance using some metrics
  • interpret interprets the features and outputs via post-prediction analysis, mainly via visualisation
  • pipeline specifies a machine learning workflow by combining several other modules (see the illustrative sketch below)
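
A minimal sketch of how the seven stages compose into one workflow is shown below. It is a plain-PyTorch analogue on synthetic data: it mirrors the roles of the kale.* modules above but deliberately uses none of PyKale's actual classes or functions.

    # Plain-PyTorch analogue of the pipeline stages (illustrative only; the
    # kale.* modules are not used here and their API is not implied).
    import torch
    from torch import nn

    # loaddata: synthetic tensors stand in for data loaded from disk or online
    x = torch.randn(256, 16)
    y = (x.sum(dim=1) > 0).long()

    # prepdata: preprocess data to fit the learning modules (standardisation)
    x = (x - x.mean(dim=0)) / (x.std(dim=0) + 1e-8)

    # embed: learn a new representation (feature extraction)
    encoder = nn.Sequential(nn.Linear(16, 8), nn.ReLU())

    # predict: map the representation to the desired output
    classifier = nn.Linear(8, 2)

    # pipeline: combine the modules into one workflow
    model = nn.Sequential(encoder, classifier)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(100):
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()

    # evaluate: measure performance with a simple metric (accuracy)
    accuracy = (model(x).argmax(dim=1) == y).float().mean().item()

    # interpret: post-prediction analysis, e.g. which input feature matters most
    importance = encoder[0].weight.abs().mean(dim=0)
    print(f"accuracy={accuracy:.3f}, most informative input feature={importance.argmax().item()}")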

Example usage

  • examples demonstrate real applications on specific datasets.

Installation

Simple installation from PyPI:

pip install pykale

For more details and other options, please refer to the installation guide.
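
A quick post-install sanity check (a minimal sketch; it assumes only that the top-level kale package is importable, and falls back gracefully if no __version__ attribute is exposed):

    # Verify that pykale is importable in the current environment.
    import kale
    print(getattr(kale, "__version__", "pykale imported successfully"))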

Examples, Tutorials, and Discussions

See our numerous examples (and tutorials) on how to perform various prediction tasks in a wide range of applications using PyKale.

Ask and answer questions on PyKale's GitHub Discussions tab.

Contributing

We appreciate all contributions. You can contribute in three ways:

  • Star and fork PyKale to follow its latest developments, share it with your networks, and ask questions about it.
  • Use PyKale in your project and let us know of any bugs (and fixes) and feature requests/suggestions by creating an issue.
  • Contribute via branch, fork, and pull for minor fixes and new features, functions, or examples to become one of the contributors.

See contributing guidelines for more details. You can also reach us via email if needed. Participation in this open source project is subject to the Code of Conduct.

The Team

PyKale is primarily maintained by a group of researchers at the University of Sheffield: Haiping Lu, Raivo Koot, Xianyuan Liu, Shuo Zhou, Peizhen Bai, and Robert Turner.

We would like to thank our other contributors including (but not limited to) Cameron McWilliam, David Jones, and Will Furnass.

Citation

    @Misc{pykale2021,
      author =   {Haiping Lu and Raivo Koot and Xianyuan Liu and Shuo Zhou and Peizhen Bai and Robert Turner},
      title =    {{PyKale}: Knowledge-aware machine learning from multiple sources in Python},
      howpublished = {\url{https://github.com/pykale/pykale}},
      year = {2021}
    }

Acknowledgements

The development of PyKale is partially supported by the following project(s).

  • Wellcome Trust Innovator Awards: Digital Technologies Ref 215799/Z/19/Z "Developing a Machine Learning Tool to Improve Prognostic and Treatment Response Assessment on Cardiac MRI Data".

Comments
  • Digits notebook

    Digits notebook

    Associated with https://github.com/pykale/pykale/discussions/147

    Description

    The idea is to test the feasibility of adding an interactive notebook, e.g. with myBinder or Google Colab (or both).

    Status

    Work in progress


    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    work-in-progress 
    opened by bobturneruk 37
  • Notebook tutorial for the bindingdb_deepdta example

    Notebook tutorial for the bindingdb_deepdta example

    Fixes #164.

    Description

    Adds a notebook tutorial for the bindingdb_deepdta example.

    Status

    Work in progress


    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    documentation work-in-progress 
    opened by bobturneruk 25
  • Test strategy

    Test strategy

    Fixes NA.

    Description

    Lays the foundations for adding unit and regression tests to pykale, and reporting the coverage to codecov.

    A folder structure is proposed that mimics that of the pykale module. Each .py file in pykale is given a tests_<filename>.py which is to contain unit tests for all functionality in its companion file.
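
    A hypothetical sketch of the proposed mirroring is below: both the companion module kale/hypothetical/stats.py and the function under test are invented purely to illustrate the one-test-file-per-source-file layout and the tests_<filename>.py naming.

        # tests/hypothetical/tests_stats.py -- companion to a hypothetical kale/hypothetical/stats.py
        import numpy as np

        def column_means(x):  # stand-in for: from kale.hypothetical.stats import column_means
            return np.asarray(x, dtype=float).mean(axis=0)

        def test_column_means_matches_numpy():
            x = np.arange(12, dtype=float).reshape(4, 3)
            np.testing.assert_allclose(column_means(x), x.mean(axis=0))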

    Regression tests are given a separate folder. I propose these be based on existing examples.

    A folder has been created (with an empty .csv file) to hold data needed for testing. Depending on the amount and format of the required data, this will need to be revisited. For example, a few baseline .csv files would be fine, but large volumes of images or video may be better held elsewhere, e.g. linked via a DOI.

    The following must be executed by CI after the tests to report coverage data to codecov:

    coverage xml
    bash <(curl -s https://codecov.io/bash)
    

    Status

    Work in progress

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    work-in-progress 
    opened by bobturneruk 25
  • Contributing guidelines and pre-commit hooks

    Contributing guidelines and pre-commit hooks

    Description

    Major update to the contributing guidelines and pre-commit hooks, ready to be enforced.

    • Contributing guidelines have been updated and will guide our future development. Please review them by reading through completely. Your feedback is very important. Imagine you are new to PyKale and check whether, by reading this doc, you would know what to do and how to do it for the various ways of getting involved (except release and management). This doc will be very important for standardizing our practices and for newcomers joining this effort.
    • The README has been lightly updated on how to contribute.
    • Pre-commit hooks have been tested locally and the configurations have been updated. The only hook not yet active is mypy, which requires deeper code changes that the code authors should make (to be discussed).
    • Auto-fixed some style issues using pre-commit hooks. You do not have to review these changes. They are minor (mostly whitespace/line-ending problems) and I have spot-checked them to be safe. See the commit description if you want to learn more.
      • The changes to requirements.txt are not ideal, but we will revisit them later when discussing dependencies.
    • Further manual fixes for the remaining flake8 errors, all in the imports of main.py in the examples. All sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../.."))) lines in the examples have now been removed. You should install (the latest) PyKale to use the kale API in the examples.
    • Simplified PR template.

    Summary: Major changes to CONTRIBUTING.md that everyone needs to review carefully. Minor changes to README.md and other files that you may skip if you like.

    We will discuss any unresolved issues in our PyKale meeting this Thursday. We should all use the pre-commit hooks after resolving all questions. Of course, you are welcome to try them out first (see the contributing guidelines). I did, and it is beautiful.

    Status

    Ready

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    enhancement 
    opened by haipinglu 19
  • Re target tests

    Re target tests

    Fixes NA.

    Description

    Moves CI tests to a better range of Python versions.

    Status

    Completed


    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    tests 
    opened by bobturneruk 18
  • An attempt at an interface to csv text files containing numeric data

    An attempt at an interface to csv text files containing numeric data. Output is formatted like an image.


    Description

    Provides an interface to read numeric data from comma-separated text. The text file should contain tabular data with one row per example and features represented as columns. Additional columns can differentiate subjects, domains, categories, and example indices. The data is currently required to be "balanced", i.e. the same number of examples for each subject, domain, and category, because the function returns a numpy array, similar to an image.
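
    A minimal sketch of such an interface is below. It assumes pandas is available; the column names and the (subject, domain, category, example, feature) output layout are illustrative assumptions, not the implementation in this PR.

        import numpy as np
        import pandas as pd

        def load_csv_as_array(path, index_cols=("subject", "domain", "category", "example")):
            """Read balanced tabular data and return an image-like numpy array."""
            df = pd.read_csv(path)
            feature_cols = [c for c in df.columns if c not in index_cols]
            df = df.sort_values(list(index_cols))
            # Balanced data assumed: every (subject, domain, category) combination must
            # have the same number of examples, otherwise this reshape fails.
            shape = [df[c].nunique() for c in index_cols] + [len(feature_cols)]
            return df[feature_cols].to_numpy().reshape(shape)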

    Status

    Ready/Work in progress/Hold


    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [ ] Breaking change (fix or new feature that would cause existing functionality to change).
    • [ ] New tests added to cover the changes.
    • [ ] In-line docstrings updated.
    • [ ] Source for documentation at docs manually updated for new API.
    opened by crcox 14
  • [Bug] Running Test Files. Cannot Locate Test Data

    [Bug] Running Test Files. Cannot Locate Test Data

    🐛 Bug

    When running the test_csv_logger.py file, it is unable to locate the test data folder which contains the relevant test files.

    To reproduce

    pytest tests/utils/test_csv_logger.py -v

    Stack trace/error message

    E           FileNotFoundError: [Errno 2] No such file or directory: 'tests/test_data/parameters.json'

    kale/utils/csv_logger.py:66: FileNotFoundError
    
    Expected Behaviour
    
    I expected the csv_logger test to pass and for the script to be able to access the test data.
    
    Environment
    
    PyTorch version: 1.9.0
    Is debug build: False
    CUDA used to build PyTorch: None
    ROCM used to build PyTorch: N/A
    
    OS: macOS 10.15.7 (x86_64)
    GCC version: Could not collect
    Clang version: 12.0.0 (clang-1200.0.32.2)
    CMake version: Could not collect
    Libc version: N/A
    
    Python version: 3.8.8 (default, Apr 13 2021, 12:59:45)  [Clang 10.0.0 ] (64-bit runtime)
    Python platform: macOS-10.15.7-x86_64-i386-64bit
    Is CUDA available: False
    CUDA runtime version: No CUDA
    GPU models and configuration: No CUDA
    Nvidia driver version: No CUDA
    cuDNN version: No CUDA
    HIP runtime version: N/A
    MIOpen runtime version: N/A
    
    Versions of relevant libraries:
    [pip3] numpy==1.20.1
    [pip3] numpydoc==1.1.0
    [pip3] pytorch-lightning==1.3.8
    [pip3] pytorch-memlab==0.2.3
    [pip3] torch==1.9.0
    [pip3] torchaudio==0.9.0a0+33b2469
    [pip3] torchmetrics==0.4.1
    [pip3] torchsummary==1.5.1
    [pip3] torchvision==0.10.0
    [conda] blas                      1.0                         mkl  
    [conda] ffmpeg                    4.3                  h0a44026_0    pytorch
    [conda] mkl                       2021.2.0           hecd8cb5_269  
    [conda] mkl-service               2.3.0            py38h9ed2024_1  
    [conda] mkl_fft                   1.3.0            py38h4a7008c_2  
    [conda] mkl_random                1.2.1            py38hb2f4e1b_2  
    [conda] numpy                     1.20.1           py38hd6e1bb9_0  
    [conda] numpy-base                1.20.1           py38h585ceec_0  
    [conda] numpydoc                  1.1.0              pyhd3eb1b0_1  
    [conda] pytorch                   1.9.0                   py3.8_0    pytorch
    [conda] pytorch-lightning         1.3.8                    pypi_0    pypi
    [conda] pytorch-memlab            0.2.3                    pypi_0    pypi
    [conda] torch                     1.9.0                    pypi_0    pypi
    [conda] torchaudio                0.9.0                      py38    pytorch
    [conda] torchmetrics              0.4.1                    pypi_0    pypi
    [conda] torchsummary              1.5.1                    pypi_0    pypi
    [conda] torchvision               0.10.0                   pypi_0    pypi
    
    
    bug 
    opened by kennedy12335 14
  • Test deep_dta

    Test deep_dta

    Description

    1. Add tests covering deepdta-related modules, including:
    • kale/embed/seq_nn.py
    • kale/loaddata/tdc_datasets.py
    • kale/pipeline/deep_dti.py
    • kale/predict/decode.py
    • kale/prepdata/chem_transform.py
    2. Introduce conda virtual environment in the test action.

    3. Remove MNIST/MNISTM tests as they are blocked due to an HTTPServer error.

    Status

    Ready

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [x] New tests added to cover the changes.
    tests 
    opened by peizhenbai 12
  • Fix problems of tests for Python version 3.7, 3.8 and 3.9

    Fix problems of tests for Python version 3.7, 3.8 and 3.9

    Fixes #316.

    Description

    • Update to support 3.7 and above.
    • Update PyTorch >= 1.11.0 to fix this bug.
    • Update PyG installation on colab in deepdta example to fix this bug.
    • Increase notebook cell timeout limit from 300 to 3000 seconds to reduce timeout errors.

    Status

    Ready

    Types of changes

    • [ ] Non-breaking change (fix or new feature that would not break existing functionality).
    • [x] Breaking change (fix or new feature that would cause existing functionality to change).
    • [ ] New tests added to cover the changes.
    • [ ] In-line docstrings updated.
    • [ ] Source for documentation at docs manually updated for new API.
    dependencies 
    opened by shuo-zhou 11
  • Add notebook "smoke tests" to CI

    Add notebook "smoke tests" to CI

    Fixes #214

    Description

    Adds an action to run PyKale tutorial notebooks as part of the existing CI testing workflow. Running them as a separate action would lead to some duplication of the setup YAML, but faster execution.

    Status

    Work in progress


    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    enhancement work-in-progress tests 
    opened by bobturneruk 11
  • Reduce tests for video, fix video load, & remove binder icon

    Reduce tests for video, fix video load, & remove binder icon

    Fixes #175.

    Description

    • Reduce tests for video
    • Fix video load
    • Remove binder icon

    Status

    Ready

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [ ] Breaking change (fix or new feature that would cause existing functionality to change).
    • [ ] New tests added to cover the changes.
    • [ ] In-line docstrings updated.
    • [ ] Source for documentation at docs manually updated for new API.
    tests 
    opened by XianyuanLiu 10
  • Add a new example for video feature vector input

    Add a new example for video feature vector input

    Description

    This PR follows PRs #291 and #292, upgrading the DA trainers for feature vector input. Trainers for video image input and video feature vector input have two main differences:

    1. Image input has three modality choices: RGB, Flow, and joint (RGB+Flow), since no audio is currently available; feature vector input has five: RGB, Flow, Audio, joint, and all (RGB+Flow+Audio).
    2. Image input has one class type (verb); feature vector input has two (verb and verb+noun).

    The main changes are below:

    1. Add an Audio branch to the pipeline.
    2. Update the optimizer to work with SGD and Adam.
    3. Add an adaptive option for source/target dataset sampling.
    4. Add a JSON logger for the EPIC UDA challenge.
    5. Reduce the dummy network dimension in test_video_domain_adapter.

    Status

    Ready

    Types of changes

    • [ ] Non-breaking change (fix or new feature that would not break existing functionality).
    • [x] Breaking change (fix or new feature that would cause existing functionality to change).
    • [x] New tests added to cover the changes.
    • [x] In-line docstrings updated.
    • [ ] Source for documentation at docs manually updated for new API.
    new feature Stale 
    opened by XianyuanLiu 2
  • Simplify video_domain_adapter

    Simplify video_domain_adapter

    Description

    Improve video_domain_adapter.py by organizing repeated and redundant code into BaseAdaptTrainerVideo. This PR follows PR #291 and is followed by a large PR that will merge feature vector input into the action_dann_lightn example.

    Status

    Ready

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [ ] Breaking change (fix or new feature that would cause existing functionality to change).
    • [x] New tests added to cover the changes.
    • [ ] In-line docstrings updated.
    • [ ] Source for documentation at docs manually updated for new API.
    enhancement Stale 
    opened by XianyuanLiu 2
  • Add feature vector dataloader

    Add feature vector dataloader

    Description

    Update the action_dann_lightn example with a dataloader for feature vector input. This dataloader is used in the EPIC challenge. The trainer and models for feature vectors will be updated in the next PR, after this one is merged. Coverage of video.py will be improved in the next PR because most parts are used in training. Changes are summarized below.

    1. Add EPIC100DatasetAccess.
    2. Add new hyperparameters: CLASS_TYPE, INPUT_TYPE, and NUM_SEGMENTS (see the config sketch below).
    3. Improve video-related trainers and ClassNetVideo for multi-class output.
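
    A hypothetical sketch of how the new hyperparameters might be exposed in the example's yacs-based config is below; the config group ("DATASET") and the default values are invented for illustration, and only the parameter names come from this PR.

        from yacs.config import CfgNode as CN

        _C = CN()
        _C.DATASET = CN()
        _C.DATASET.CLASS_TYPE = "verb"       # "verb" or "verb+noun"
        _C.DATASET.INPUT_TYPE = "feature"    # "image" or "feature" (vector) input
        _C.DATASET.NUM_SEGMENTS = 1          # segments sampled per video clip

        def get_cfg_defaults():
            return _C.clone()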

    Status

    Ready

    Types of changes

    • [ ] Non-breaking change (fix or new feature that would not break existing functionality).
    • [x] Breaking change (fix or new feature that would cause existing functionality to change).
    • [x] New tests added to cover the changes.
    • [x] In-line docstrings updated.
    • [ ] Source for documentation at docs manually updated for new API.
    new feature Stale 
    opened by XianyuanLiu 2
  • Landmark uncertainty

    Landmark uncertainty

    Implementation of uncertainty estimation for landmark localisation.

    Description

    Integrates uncertainty estimation methods including quantile binning, error bound estimation, and evaluation metrics. Provides an example file.
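
    For context, a generic illustration of quantile binning of per-prediction uncertainty scores is below (numpy-only; this is a sketch of the idea, not the implementation added by this PR).

        import numpy as np

        def quantile_bins(uncertainty, n_bins=5):
            # Assign each prediction to one of n_bins equal-frequency bins
            # (bin 0 = most confident, bin n_bins-1 = least confident).
            edges = np.quantile(uncertainty, np.linspace(0, 1, n_bins + 1)[1:-1])
            return np.digitize(uncertainty, edges)

        rng = np.random.default_rng(0)
        scores = rng.random(100)                   # e.g. per-landmark uncertainty estimates
        bins = quantile_bins(scores, n_bins=5)
        print(np.bincount(bins, minlength=5))      # roughly equal-sized bins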

    Status

    Work in progress

    Types of changes

    • [x] Non-breaking change (fix or new feature that would not break existing functionality).
    • [ ] Breaking change (fix or new feature that would cause existing functionality to change).
    • [ ] New tests added to cover the changes.
    • [ ] In-line docstrings updated.
    • [ ] Source for documentation at docs manually updated for new API.
    new feature Stale 
    opened by Schobs 9
Releases (0.1.1)
  • 0.1.1 (Aug 21, 2022)

    New Features

    • #338: Improve GripNet implementation
    • #339: Add setup options
    • #340: Update reading DICOM and marker visualization

    Code Improvements

    • #341: Update Colab installation and add notebook hook
    • #342: Add arguments to visualize and rename examples

    Documentation Updates

    • #337: Update GripNet example name and contributing guidelines
    • #343: Clarify python version supported
  • 0.1.0 (Aug 11, 2022)

    New Features

    • #246: Add MIDA, CoIRLS, distribution plot, and brain example

    Bug Fixes

    • #322: Add pre-commit dependency for black and click
    • #330: Fix problems of tests for Python version 3.7, 3.8 and 3.9

    Code Improvements

    • #284: Update DICOM reading and image visualization
    • #320: Add code scanning
    • #321: Fix cardiac MRI example visualization number of columns
    • #331: Update cmr example landmark visualization

    Documentation Updates

    • #333: Update docs and readme for 0.1.0 release
  • 0.1.0rc5 (Apr 12, 2022)

    New Features

    • #251: MFSAN support 1D input
    • #273: Add topk & multitask topk accuracies

    Bug Fixes

    • #244: Update getting indices with torch.where
    • #254: Fix bugs for upgrading PyTorch Lightning to 1.5
    • #256 & #257: Update for PyTorch 1.10 and Torchvision 0.11.1
    • #286: Update ipython requirement from <8.0 to <9.0

    Code Improvements

    • #240: Refactor the code to save the images instead of opening them at runtime
    • #271: Fix doc build, improve docstrings and MPCA pipeline fit efficiency
    • #272: Update progress_bar for PyTorch Lightning & change 'target' abbreviation
    • #283: Change "val" in variable names to "valid"

    Tests

    • #258: Use pyparsing 2.4.7 in test

    Documentation Updates

    • #228: Zenodo json
    • #243: Clarify PR template
    • #282: Clarify when to request review and prefer just one label
  • 0.1.0rc4 (Oct 13, 2021)

    Code Improvements

    • #218: Change logger in digits and action examples
    • #219: Update three notebooks
    • #222: Add multi source example
    • #224: Merge all image accesses to a unique API

    Tests

    • #221: Add notebook "smoke tests" to CI

    Documentation Updates

    • #225: Update readme & fix colab imgaug
    • #229: Add DOI to readme
    • #235: Fix typo and hyperlink
  • 0.1.0rc3 (Sep 10, 2021)

    New Features

    • #196: Add Google Drive Download API
    • #197: Multi domain loader and office data access
    • #210: Multi-source domain adaptation SOTA

    Code Improvements

    • #201: No "extras", only "normal" or "dev" installs

    Tests

    • #178: Reduce tests for video
    • #188: Create download_path directory in conftest.py
    • #189: Create test_sampler.py and update doc for tests
    • #200: Nightly test run

    Documentation Updates

    • #165: Notebook tutorial for the bindingdb_deepdta example
    • #199: CMR PAH notebook example
    • #207: Restructure notebook tutorial docs
    • #212: Describe use of YAML

    Other Changes

    • #187: Add dependabot
    • #205: Update data dirs
  • 0.1.0rc2 (Jun 21, 2021)

    New Features

    • #149: Add digits notebook with Binder and Colab
    • #151: Add class subset selection
    • #159: Add interpret module

    Code Improvements

    • #132: Create file download module
    • #138: Change action_domain_adapter.py to video_domain_adapter.py
    • #144: Move gait data to pykale/data
    • #157: Add concord_index calculation into DeepDTA

    Tests

    • #127: Add video_access tests
    • #134: Add tests for image and video CNNs
    • #136: Add tests for domain adapter
    • #137: Add tests for csv logger
    • #139: Add tests for isonet
    • #145: Add tests for video domain adapter
    • #150: Add tests for gripnet
    • #156: Remove empty tests and MNIST test

  • 0.1.0rc1 (Apr 29, 2021)

    Important: Rename master to main.

    Code Improvements

    • #92: Update action domain adaptation pipeline and modules (big PR)
    • #123: Merge prep_cmr with image_transform plus tests

    Tests

    • #104: Test attention_cnn
    • #107: Only test multiple Python versions in CI on Linux
    • #122: Test deep_dta

    Documentation Updates

    • #106: Update the readmes of docs, examples, and tests
    • #120: Update PR for changelog, cherry pick, and test re-run tip
    • #121: Update new logos
    • #125: Update documentation, esp. guidance on how to use pykale
  • 0.1.0b3 (Apr 19, 2021)

    Code Improvements

    • #84: Auto assign to the default project
    • #91: MPCA pipeline
    • #93: Fix black config and rerun
    • #97: Add changelog CI and update logo

    Dependencies

    • #82: Remove requirements in examples and update setup

    Tests

    • #70: Add tests for utils.print
    • #80: Extend automated test matrix and rename lint
    • #85: Test utils logger
    • #87: Test cifar/digit_access and downgrade black
    • #90: Test mpca
    • #94: Update test guidelines

    Documentation Updates

    • #81: Docs update version and installation
    • #88: Automatically sort documented members by source order
    • #89: Disable automatic docstring inheritance from parent class
  • 0.1.0b2 (Mar 16, 2021)

    Added

    • MPCA test
    • Test data (gait)

    Changed

    • Organisation of files and folders
    • MPCA solver to scipy SVD

    Fixed

    • Issues with MPCA implementation
  • 0.1.0b1 (Feb 23, 2021)

    See the summary below for the major progress since 0.1.0a1. Detailed features will be described in the first official release 0.1.0.

    API-related

    • Example: BindingDB_DeepDTA added
    • Tests: a file structure and strategy set up
    • MPCA: updated to a new version more compatible with PyTorch
    • API refinement

    Management and quality assurance

    • Workflows: build documentation, build package, linting, pre-commit checks, release, and test
    • Setup: three options of default, extras, and dev; conda installation removed
    • Templates: issue and pull request
    • Detailed contributing guidelines
  • 0.1.0a1 (Jan 11, 2021)

    The first release 0.1.0a1

    This is our first release, which is also available on PyPI and Anaconda Cloud. This version has most of the necessary ingredients. It will help us discuss, identify, and fix issues towards a beta release, to be managed via the project boards and issues.
