Overview

Luminaire

A hands-off Anomaly Detection Library

What is Luminaire

Luminaire is a Python package that provides ML-driven solutions for monitoring time series data. It offers several anomaly detection and forecasting capabilities that incorporate correlational and seasonal patterns as well as uncontrollable variations in the data over time.

Quick Start

Install Luminaire from PyPI using pip:

pip install luminaire

Import the luminaire module in Python:

import luminaire

Check out the Luminaire documentation for a detailed description of methods and usage.

Time Series Outlier Detection Workflow

(Figure: Luminaire outlier detection workflow)

The Luminaire outlier detection workflow can be divided into three major components:

Data Preprocessing and Profiling Component

This component can be called to prepare a time series prior to training an anomaly detection model on it. This step applies a number of methods that make anomaly detection more accurate and reliable, including missing data imputation, identifying and removing recent outliers from training data, necessary mathematical transformations, and data truncation based on recent change points. It also generates profiling information (historical change points, trend changes, etc.) that are considered in the training process.

Profiling information for time series data can be used to monitor data drift and irregular long-term swings.
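
For illustration, here is a minimal sketch of the profiling step. It follows the DataExploration usage that appears in the documentation and in the issues below; the input file, column names, and result keys are assumptions for this example.

    import pandas as pd

    from luminaire.exploration.data_exploration import DataExploration

    # Hypothetical daily time series: a DatetimeIndex and a single value column.
    data = pd.read_csv("dataset.csv", index_col="index", parse_dates=True)

    # Preprocess and profile the raw series prior to model training.
    de_obj = DataExploration(freq='D')
    training_data, pre_prc = de_obj.profile(df=data)

    # On failure, the profiling summary carries 'success' and 'ErrorMessage' keys
    # (see the DataExploration.profile issue reported below).
    if not pre_prc.get('success', False):
        print("Profiling failed:", pre_prc.get('ErrorMessage'))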

Modeling Component

This component performs time series model training based on the user-specified configuration OR optimized configuration (see Luminaire hyperparameter optimization). Luminaire model training is integrated with different structural time series models as well as filtering-based models. See Luminaire outlier detection for more information.

The Luminaire modeling step can be called after the data preprocessing and profiling step to perform necessary data preparation before training.
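
As a follow-up to the profiling sketch above, here is a hedged sketch of training and scoring a structural model. LADStructuralModel and its train/score pattern are taken from the Luminaire documentation, and the hyperparameter values shown are illustrative assumptions that may differ across versions.

    from luminaire.model.lad_structural import LADStructuralModel

    # Assumed hyperparameter configuration; in practice this can come from the
    # configuration optimization component described below.
    hyper_params = {"include_holidays_exog": 1, "p": 5, "q": 1, "is_log_transformed": 1}

    lad_struct_obj = LADStructuralModel(hyper_params=hyper_params, freq='D')

    # Train on the preprocessed series; the profiling output (pre_prc from the
    # sketch above) is passed along so training can use change points, etc.
    success, model_timestamp, trained_model = lad_struct_obj.train(data=training_data, **pre_prc)

    # Score a single new observation for a given timestamp (values illustrative).
    print(trained_model.score(2000.0, '2021-01-05'))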

Configuration Optimization Component

Luminaire's integration with configuration optimization enables a hands-off anomaly detection process where the user needs to provide very minimal configuration for monitoring any type of time series data. This step can be combined with the preprocessing and modeling for any auto-configured anomaly detection use case. See fully automatic outlier detection for a detailed walkthrough.
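
For a rough idea of the auto-configured path, the sketch below chains configuration optimization into profiling and training. HyperparameterOptimization and its run method are named after the hyperparameter optimization walkthrough in the documentation and should be treated as assumptions, as should the shape of the returned configuration.

    import pandas as pd

    from luminaire.exploration.data_exploration import DataExploration
    from luminaire.model.lad_structural import LADStructuralModel
    from luminaire.optimization.hyperparameter_optimization import HyperparameterOptimization

    # Hypothetical daily series, as in the profiling sketch above.
    data = pd.read_csv("dataset.csv", index_col="index", parse_dates=True)

    # Search for a near-optimal model configuration for this series (assumed API).
    hopt_obj = HyperparameterOptimization(freq='D')
    opt_config = hopt_obj.run(data=data)

    # Feed the optimized configuration into profiling and model training,
    # mirroring the manual steps shown above.
    de_obj = DataExploration(freq='D')
    training_data, pre_prc = de_obj.profile(df=data)
    success, model_timestamp, trained_model = LADStructuralModel(
        hyper_params=opt_config, freq='D').train(data=training_data, **pre_prc)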

Anomaly Detection for High Frequency Time Series

Luminaire can also monitor a set of data points over windows of time instead of tracking individual data points. This approach is well-suited for streaming use cases where sustained fluctuations are of greater concern than individual fluctuations. See anomaly detection for streaming data for detailed information.
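
As a sketch of window-based monitoring for streaming data: the WindowDensityHyperParams and WindowDensityModel classes come from the window_density module referenced in the issues below, while the stream_profile call, the frequency string, and the input files here are assumptions to be checked against the streaming-data tutorial.

    import pandas as pd

    from luminaire.exploration.data_exploration import DataExploration
    from luminaire.model.window_density import WindowDensityHyperParams, WindowDensityModel

    # Hypothetical high-frequency history and a recent window of points to score.
    history_df = pd.read_csv("stream_history.csv", index_col="index", parse_dates=True)
    recent_window_df = pd.read_csv("latest_window.csv", index_col="index", parse_dates=True)

    # Window-based configuration for a 10-minute series (assumed frequency).
    config = WindowDensityHyperParams(freq='10T').params

    # Profile the streaming history and train over past windows (assumed API).
    de_obj = DataExploration(**config)
    training_data, pre_prc = de_obj.stream_profile(df=history_df)
    config.update(pre_prc)

    wdm_obj = WindowDensityModel(hyper_params=config)
    success, training_end, trained_model = wdm_obj.train(data=training_data, **pre_prc)

    # Score an entire recent window rather than an individual data point.
    scores = trained_model.score(data=recent_window_df)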

Contributing

Want to help improve Luminaire? Check out our contributing documentation.

Citing

Please cite the following article if Luminaire is used for any research purpose or scientific publication:

Chakraborty, S., Shah, S., Soltani, K., Swigart, A., Yang, L., & Buckingham, K. (2020, December). Building an Automated and Self-Aware Anomaly Detection System. In 2020 IEEE International Conference on Big Data (Big Data) (pp. 1465-1475). IEEE. (arxiv link)

Other Useful Resources

  • Chakraborty, S., Shah, S., Soltani, K., & Swigart, A. (2019, December). Root Cause Detection Among Anomalous Time Series Using Temporal State Alignment. In 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA) (pp. 523-528). IEEE. (arxiv link)

Development Team

Luminaire is developed and maintained by Sayan Chakraborty, Smit Shah, Kiumars Soltani, Luyao Yang, Anna Swigart, Kyle Buckingham and many other contributors from the Zillow Group A.I. team.

Comments
  • bug #112: window size identification fixed for trend change detection

    The current approach for trend detection in the Data exploration module (/luminaire/exploration/data_exploration.py) was enabled only for daily ('D') and hourly ('H') time series. I added a fix to trigger the computation of window sizes for weekly ('W') frequency, using a value of 4. Also, I added a fix to support all the other frequencies.

    opened by papaemman 9
  • Unable to call score function, error: "setting an array element with a sequence"

    We met this problem when we tried to call the score function in the WindowDensity API (Luminaire library). The error message was "setting an array element with a sequence". We searched online and asked other professionals for their experience but still failed to solve it. Can anybody help us with it? Thanks in advance~~

    Luminaire Reference: https://zillow.github.io/luminaire/_modules/luminaire/model/window_density.html#WindowDensityHyperParams

    opened by vickeywangvw 7
  • DataExploration.profile results in 'ErrorMessage': "unsupported operand type(s) for -: 'int' and 'NoneType'"

    Hey all!

    I'm trying to use the package but I'm getting that message.

    import luminaire
    import pandas as pd
    
    from luminaire.exploration.data_exploration import DataExploration
    
    past = pd.read_csv("dataset.csv").set_index("index")
    
    de = DataExploration(freq='D')
    
    past_prof, profile = de.profile(df=past)
    #(None,
    #{'success': False,
    # 'ErrorMessage': "unsupported operand type(s) for -: 'int' and 'NoneType'"})
    

    Is that anything data-related?

    Here are my infos:

    • Python 3.7.10
    • requirements.txt: see below, result from pip install -U jupyterlab numpy pandas matplotlib luminaire pip setuptools pyarrow

    Thanks!


    anyio==3.6.1
    appnope==0.1.3
    argon2-cffi==21.3.0
    argon2-cffi-bindings==21.2.0
    attrs==22.1.0
    Babel==2.10.3
    backcall==0.2.0
    beautifulsoup4==4.11.1
    bleach==5.0.1
    boto3==1.24.76
    botocore==1.27.76
    certifi==2022.9.14
    cffi==1.15.1
    changepy==0.3.1
    charset-normalizer==2.1.1
    cloudpickle==2.2.0
    cycler==0.11.0
    debugpy==1.6.3
    decorator==5.1.1
    defusedxml==0.7.1
    entrypoints==0.4
    fastjsonschema==2.16.2
    fonttools==4.37.2
    future==0.18.2
    hyperopt==0.2.7
    idna==3.4
    importlib-metadata==4.12.0
    importlib-resources==5.9.0
    ipykernel==6.15.3
    ipython==7.34.0
    ipython-genutils==0.2.0
    jedi==0.18.1
    Jinja2==3.1.2
    jmespath==1.0.1
    joblib==1.2.0
    json5==0.9.10
    jsonschema==4.16.0
    jupyter-core==4.11.1
    jupyter-server==1.18.1
    jupyter_client==7.3.5
    jupyterlab==3.4.7
    jupyterlab-pygments==0.2.2
    jupyterlab_server==2.15.1
    kiwisolver==1.4.4
    luminaire==0.4.0
    lxml==4.9.1
    MarkupSafe==2.1.1
    matplotlib==3.5.3
    matplotlib-inline==0.1.6
    mistune==2.0.4
    nbclassic==0.4.3
    nbclient==0.6.8
    nbconvert==7.0.0
    nbformat==5.5.0
    nest-asyncio==1.5.5
    networkx==2.6.3
    notebook==6.4.12
    notebook-shim==0.1.0
    numpy==1.21.6
    packaging==21.3
    pandas==1.3.5
    pandas-redshift==2.0.5
    pandocfilters==1.5.0
    parso==0.8.3
    patsy==0.5.2
    pexpect==4.8.0
    pickleshare==0.7.5
    Pillow==9.2.0
    pkgutil_resolve_name==1.3.10
    prometheus-client==0.14.1
    prompt-toolkit==3.0.31
    psutil==5.9.2
    psycopg2-binary==2.9.3
    ptyprocess==0.7.0
    py4j==0.10.9.7
    pyarrow==9.0.0
    pycparser==2.21
    Pygments==2.13.0
    pykalman==0.9.5
    pyparsing==3.0.9
    pyrsistent==0.18.1
    python-dateutil==2.8.2
    pytz==2022.2.1
    pyzmq==24.0.0
    requests==2.28.1
    s3transfer==0.6.0
    scikit-learn==1.0.2
    scipy==1.7.3
    Send2Trash==1.8.0
    six==1.16.0
    sniffio==1.3.0
    soupsieve==2.3.2.post1
    statsmodels==0.13.2
    terminado==0.15.0
    threadpoolctl==3.1.0
    tinycss2==1.1.1
    tomli==2.0.1
    tornado==6.2
    tqdm==4.64.1
    traitlets==5.4.0
    typing_extensions==4.3.0
    urllib3==1.26.12
    wcwidth==0.2.5
    webencodings==0.5.1
    websocket-client==1.4.1
    zipp==3.8.1
    
    opened by paulochf 5
  • Related to issue #112: Exploration failure for weekly data

    The current approach for trend change detection was enabled only for daily and hourly time series. Added a quick fix to trigger the computation of window sizes for other frequency types.

    opened by sayanchk 5
  • Diff order fix

    Corrected an issue where the diff order was hard-coded as 2 in lad_filtering. Also added test_lad_filtering_scoring_diff_order to test_models, which uses the last data points, takes the appropriate diff, and then compares to the adjusted actual to make sure the appropriate diff order is applied.

    Related Issue: #120 @sayanchk for review

    opened by pdurham2 4
  • Failproof project setup

    I guess Python 3.7 and later are considered not supported (see https://github.com/zillow/luminaire/runs/1946332964)

    On Python 3.6 the pyramid-arima wheel build will fail without numpy installed (this does not affect the installation of the dependency, it just produces log noise), but it looks like numpy is not actually required as a dependency - see https://github.com/zillow/luminaire/pull/74

    P.S. https://pip.pypa.io/en/latest/reference/pip_install/#controlling-setup-requires There is a warning about how dangerous it is to use this keyword, but I guess it's OK for such a simple case. It's also used in https://github.com/zillow/luminaire/pull/77/

    opened by Aristarhys 4
  • Switch to sphinx-material theme

    No actual content change in the documentation.

    • Replaced the incomplete sphinx theme with a more polished one, along with corresponding stylesheets
    • Fixed some indentation issues in the docs
    • Shuffled files around: removed dedicated TOC pages and added them all in the index instead

    A screenshot of the home page is attached.

    A second screenshot showing syntax highlighting and the footer is also attached (closes #40).

    @sayanchk you might want to look into shortening the page titles for the API ref (or just name them after the modules)

    opened by snazzyfox 4
  • Unable to use data exploration "The training data observed continuous missing data near the end. Require more stable data to train"

    I have tried to use simple data and it's giving these issues.

    Here is the notebook https://colab.research.google.com/drive/19muQTHoWxdh5fC1DQE2FpYu763fn-0zC?usp=sharing

    opened by eaglewarrior 3
  • Force linter to fail ci check

    exit 1 will be called only if the first flake8 invocation fails, returning a non-zero code from the script block immediately.

    Before, the last command of the script block was evaluated and the second flake8 invocation always returned 0 because of the flag passed.

    bug meta 
    opened by Aristarhys 3
  • Add test runner and linter support for setup.py

    I think it's worth having a local means of running tests/lint even if CI works perfectly (python setup.py test, python setup.py flake8). I used the config from https://github.com/zillow/luminaire/blob/master/.github/workflows/python-app.yml#L42 for flake8. Flake8 is going to drop this integration at some point, but I guess it's OK for now (we can use the last version without this warning - it's not that old).

    https://gitlab.com/pycqa/flake8/-/issues/544

    opened by Aristarhys 3
  • Missing data or second level

    Hi there,

    I have a question rather than any specific issue. I wonder if this library can work with missing points/dates during the training stage? And what about anomaly detection on second-level data? I would appreciate your response.

    question 
    opened by soroosh-rz 2
  • diff_order seems to be hard coded to be diff order of 2

    When diff_order is applied in lad_filtering.py, the value passed to np.diff is fixed as 2. Is this intended or should diff_order be passed instead?

    if diff_order:
      actual_previous_per_diff = [interpolated_actual_previous[-1]] \
          if diff_order == 1 else [interpolated_actual_previous[-1], np.diff(interpolated_actual_previous)[0]]
      seq_tail = interpolated_actual_previous + [interpolated_actual]
      interpolated_actual = np.diff(seq_tail, 2)[-1]
    
    bug 
    opened by pdurham2 2
  • Optimize _detect_window_size within DataExploration for weekly data

    _detect_window_size is currently not optimized to detect the most frequent periodic pattern for weekly time series data. This issue needs some investigation on that front. Reference: https://github.com/zillow/luminaire/pull/114

    Note: This method is a dependency for the Structural, Filtering and Window-based models. Therefore, any change in this method requires testing on all currently supported (or rather, optimized) time series data types (daily, hourly and even higher frequencies). Please refer to the datasets for testing.

    help wanted 
    opened by sayanchk 0
  • Fix the repo with all the linter-based warnings

    The repo has a linter running, but there are quite a few warnings which are not breaking but need to be resolved.

    Example pipeline: https://github.com/zillow/luminaire/runs/5104474237?check_suite_focus=true

    11    C901 'DataExploration._detrender' is too complex (13)
    7     E122 continuation line missing indentation or outdented
    12    E127 continuation line over-indented for visual indent
    29    E128 continuation line under-indented for visual indent
    2     E203 whitespace before ':'
    2     E225 missing whitespace around operator
    2     E231 missing whitespace after ','
    3     E266 too many leading '#' for block comment
    22    E302 expected 2 blank lines, found 1
    10    E303 too many blank lines (2)
    9     E501 line too long (134 > 127 characters)
    1     E714 test for object identity should be 'is not'
    2     E722 do not use bare 'except'
    16    F401 'luminaire.optimization' imported but unused
    5     F403 'from luminaire.exploration.data_exploration import *' used; unable to detect undefined names
    34    F405 'DataExploration' may be undefined, or defined from star imports: luminaire.exploration.data_exploration
    1     F841 local variable 'e' is assigned to but never used
    1     W291 trailing whitespace
    3     W292 no newline at end of file
    4     W293 blank line contains whitespace
    1     W391 blank line at end of file
    
    bug 
    opened by shahsmit14 0
  • Extracting time series components dataframe

    Hello!

    Is there any way to extract the dataframes containing the decomposition of the time series? That is, one column for the trend, another for the seasonality, etc.

    Thanks

    question 
    opened by lventosa 1
  • Unable to profile data

    Hello, I have the following data frame (screenshot attached).

    I am calling it using imputed_data, pre_prc = de_obj.profile(hourly, impute_only=True)

    and getting the following error: {'success': False, 'ErrorMessage': "ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''"}

    I have been trying to figure it out, but no avail. Any help would be much appreciated. Thanks!

    wontfix 
    opened by grechasneak 1
Releases(v0.4.2)
  • v0.4.2(Nov 23, 2022)

  • v0.4.1(Oct 7, 2022)

    Changes:

    • Leveraging https://pypi.org/project/bayescd/ instead of explicitly installing that same repo's code for CI/CD, since it's now available on PyPI
    • Adding bayescd as a dependency so users don't have to install it manually
    • Support for weekly frequency in data exploration

    Closes issues:

    • https://github.com/zillow/luminaire/issues/112
    • https://github.com/zillow/luminaire/issues/115
    • https://github.com/zillow/luminaire/pull/118
    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(Jul 26, 2022)

    • Support up to Python 3.10
    • Support for the latest versions of scipy, statsmodels and bayesian-changepoint-detection
    • Minor bug fixes and improvements in data exploration
    • Minor bug fixes and improvements in structural model
    • Ability to perform model validation due to under-fit added in structural model
    • Holiday list updated

    Note: We had to remove the bayesian-changepoint-detection package from requirements due to deployment issues on PyPI (the latest version of scipy is not supported by the bayesian-changepoint-detection 0.2.dev1 available on PyPI). If you are planning to use Luminaire v0.4.0, you have to manually install a compatible version of bayesian-changepoint-detection from GitHub (provided by the community but not yet made available on PyPI) using the following command:

    pip install git+https://github.com/hildensia/bayesian_changepoint_detection@2dd95f5c1d028116899a842ccb3baa173f9d5be9#egg=bayesian-changepoint-detection

    Source code(tar.gz)
    Source code(zip)
  • 0.4.0.dev3(Mar 17, 2022)

    Luminaire cd fixes from dev2

    Release notes from dev1:

    • Support up to Python 3.10
    • Support of latest versions of Scipy, Statsmodels and bayesian-changepoint-detection
    • Minor bug fixes and improvements in data exploration
    • Minor bug fixes and improvements in structural model
    • Ability to perform model validation due to underfit added in structural model
    • Holiday list updated

    Please read: We had to remove the bayesian-changepoint-detection package from requirements due to deployment issues on PyPI (the latest version of scipy is not supported by bayesian-changepoint-detection 0.2.dev1). If you are planning to use this dev release of Luminaire, you have to manually install a compatible version of bayesian-changepoint-detection from GitHub using the following command:

    pip install git+https://github.com/hildensia/bayesian_changepoint_detection@2dd95f5c1d028116899a842ccb3baa173f9d5be9#egg=bayesian-changepoint-detection
    
    Source code(tar.gz)
    Source code(zip)
  • v0.4.0.dev2(Mar 8, 2022)

    Luminaire cd fixes from dev1

    Release notes from dev1:

    • Support up to Python 3.10
    • Support of latest versions of Scipy, Statsmodels and bayesian-changepoint-detection
    • Minor bug fixes and improvements in data exploration
    • Minor bug fixes and improvements in structural model
    • Ability to perform model validation due to underfit added in structural model
    • Holiday list updated
    Source code(tar.gz)
    Source code(zip)
  • v0.4.0.dev1(Mar 8, 2022)

    • Support up to Python 3.10
    • Support of latest versions of Scipy, Statsmodels and bayesian-changepoint-detection
    • Minor bug fixes and improvements in data exploration
    • Minor bug fixes and improvements in structural model
    • Ability to perform model validation due to underfit added in structural model
    • Holiday list updated
    Source code(tar.gz)
    Source code(zip)
  • v0.3.0(Dec 2, 2021)

    Update the requirements files list:

    • The existing requirements file specifies hard version requirements, which is not helpful to the user
    • We ran the existing test cases to see which recent versions of dependent packages can be supported, and updated the requirements files accordingly
    Source code(tar.gz)
    Source code(zip)
  • v0.3.0.dev1(Nov 22, 2021)

    Support for Python 3.7 for build and deploy

    • Major dependent packages are kept the same
    • Just making the code compatible with Python 3.7 as Python 3.6 is reaching End of Life

    Note: Major package upgrade is planned for Q1-2022.

    Source code(tar.gz)
    Source code(zip)
  • v0.2.4(Oct 5, 2021)

  • v0.2.3(Aug 12, 2021)

    Bug fixes:

    • Data reindexing during imputation fixed in the presence of missing / invalid data

    Scoring logic updates:

    • Model uncertainty is taken into consideration when making stationarity adjustments while scoring with WindowDensityModel
    Source code(tar.gz)
    Source code(zip)
  • v0.2.2(Jul 27, 2021)

  • v0.2.1(Jun 9, 2021)

  • v0.2.0(Feb 23, 2021)

    • WindowDensity model improvements for streaming and high-frequency time series
    • Full automation in training and scoring the window density model
    • Minor version upgrades for package dependencies (more on the way!)
    • Bugfixes
    Source code(tar.gz)
    Source code(zip)
  • v0.2.0.dev1(Feb 12, 2021)

    Dev release for v0.2.0.

    This release includes the following:

    • Improved WindowDensity modeling for streaming use cases.
    • Bringing automation in configuring window density model for streaming use cases.
    Source code(tar.gz)
    Source code(zip)
  • v0.1.4(Nov 24, 2020)

  • v0.1.3(Aug 25, 2020)

  • v0.1.1(Aug 23, 2020)

  • v0.1.0(Aug 21, 2020)

    Making Luminaire available as a beta release.

    Details:

    • Core Luminaire code base
    • Documentation:
      • Readme https://github.com/zillow/luminaire/blob/master/README.md
      • Github pages https://zillow.github.io/luminaire/
    • CI/CD pipeline workflow for build, release and documents
    • Improved code/files organization
    Source code(tar.gz)
    Source code(zip)
  • v0.1.0.dev8(Aug 20, 2020)

  • v0.1.0.dev7(Aug 19, 2020)

  • v0.1.0.dev6.2(Aug 17, 2020)

  • v0.1.0.dev6.0(Aug 17, 2020)

  • v0.1.0.dev5.2(Aug 17, 2020)

  • v0.1.0.dev5(Aug 17, 2020)

  • v0.1.0.dev6.1(Aug 17, 2020)

  • v0.1.0.dev6(Aug 17, 2020)

  • v0.1.0.dev5.1(Aug 17, 2020)

  • v0.1.0.dev4(Aug 15, 2020)

  • v0.1.0.dev3(Aug 15, 2020)

  • v0.1.0.dev2(Aug 14, 2020)
