Build low-code, automated TensorFlow models with What-IF explainability in just 3 lines of code.

Overview

Auto Tensorflow - Mission:

Build low-code, automated TensorFlow models with What-IF explainability in just 3 lines of code.

To make deep learning on TensorFlow easy for everyone through a low-code framework, and to increase trust in ML models through What-IF model explainability.

Under the hood:

Built on top of powerful TensorFlow ecosystem tools such as TFX, the TF APIs and the What-IF Tool, the library automatically does all the heavy lifting internally: EDA, schema discovery, feature engineering, hyper-parameter tuning (HPT), model search and more. This lets developers focus on quickly building end-user applications without deep knowledge of TensorFlow, ML or debugging. It is built to handle large volumes of data / BigData using only scalable TF components, and models trained with auto-tensorflow can be deployed directly on any cloud such as GCP, AWS or Azure.

Official Launch: https://youtu.be/sil-RbuckG0

Features:

  1. Build Classification / Regression models on CSV data
  2. Automated schema inference
  3. Automated feature engineering
    • Discretization
    • Scaling
    • Normalization
    • Text embedding
    • Category encoding
  4. Automated model build for mixed data types (continuous, categorical and free text)
  5. Automated hyper-parameter tuning
  6. Automated GPU distributed training
  7. Automated UI-based What-IF analysis (fairness, feature partial dependencies, what-if scenarios)
  8. Control over model complexity
  9. No dependency on Pandas / SKLearn
  10. Can handle datasets of any size, including data split across multiple CSV files (see the sketch after this list)
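
Because train_data_path and test_data_path point to directories (see Usage below), a large dataset can simply be split across several CSV files placed in those directories. A minimal sketch, assuming the engine picks up every CSV it finds under each path (file names below are illustrative):

## Illustrative layout - split a large dataset into multiple CSVs under the same directories
## /content/train_data/part-001.csv
## /content/train_data/part-002.csv
## /content/test_data/part-001.csv
from auto_tensorflow.tfa import TFAuto
tfa = TFAuto(train_data_path='/content/train_data/', test_data_path='/content/test_data/', path_root='/content/tfauto')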

Tutorials:

  1. Auto Classification on CSV data (Open in Colab)
  2. Auto Regression on CSV data (Open in Colab)

Setup:

  1. Install the library
    • PIP (recommended): pip install auto-tensorflow
    • Nightly: pip install git+https://github.com/rafiqhasan/auto-tensorflow.git
  2. Works best on UNIX / Linux / Debian / Google Colab / macOS
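
A quick post-install check, using the same import as the Usage section below, confirms the package resolved correctly:

## Post-install sanity check - TFAuto is the entry point used throughout the Usage section
from auto_tensorflow.tfa import TFAuto
print('auto-tensorflow imported successfully')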

Usage:

  1. Initialize the TFAuto engine
from auto_tensorflow.tfa import TFAuto
tfa = TFAuto(train_data_path='/content/train_data/', test_data_path='/content/test_data/', path_root='/content/tfauto')
  2. Step 1 - Automated EDA and schema discovery
tfa.step_data_explore(viz=True) ##viz=False for no visualization
  3. Step 2 - Automated ML model build and train
tfa.step_model_build(label_column='price', model_type='REGRESSION', model_complexity=1)
  4. Step 3 - Automated What-IF Tool launch
tfa.step_model_whatif()
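
Putting the steps together, a complete run looks like the sketch below; the data paths and the 'price' label column are the illustrative values from the steps above, so substitute your own CSV locations and label:

## End-to-end sketch combining the four steps above (paths and label column are illustrative)
from auto_tensorflow.tfa import TFAuto

## Directories containing the training / test CSVs; path_root must not already exist
tfa = TFAuto(train_data_path='/content/train_data/', test_data_path='/content/test_data/', path_root='/content/tfauto')

tfa.step_data_explore(viz=True)   ## EDA, schema inference and data profiling
tfa.step_model_build(label_column='price', model_type='REGRESSION', model_complexity=1)   ## build, tune and train
tfa.step_model_whatif()           ## launch the What-IF analysis UI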

API Arguments:

  • Method TFAuto

    • train_data_path: Path where the training data is stored
    • test_data_path: Path where the test / eval data is stored
    • path_root: Working directory for TFAuto (the directory should NOT already exist)
  • Method step_data_explore

    • viz: Whether data visualization is required - True or False (default)
  • Method step_model_build

    • label_column: The feature to be used as the label
    • model_type: Either 'REGRESSION' (default) or 'CLASSIFICATION'
    • model_complexity:
      • 0: Model with default hyper-parameters
      • 1 (default): Model with automated hyper-parameter tuning
      • 2: Complexity 1 + advanced fine-tuning of text layers (see the example below)
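
For example, a classification model with heavier text fine-tuning could be requested as below; 'category' is a hypothetical label column name, and note that classification labels currently need to be integers from 0 to N (as noted in the Known limitations issue further down):

## Hypothetical classification call - 'category' is a placeholder label column name
## model_complexity=2 adds advanced fine-tuning of text layers on top of automated HPT
tfa.step_model_build(label_column='category', model_type='CLASSIFICATION', model_complexity=2)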

Current limitations:

There are a few limitations in the initial release, but we are working day and night to resolve them in upcoming releases.

  1. Doesn't support Image / Audio data

Future roadmap:

  1. Add support for Timeseries / Audio / Image data
  2. Add feature to download full pipeline model Python code for advanced tweaking

Release History:

1.3.2 - 27/11/2021 - Release Notes

1.3.1 - 18/11/2021 - Release Notes

1.2.0 - 24/07/2021 - Release Notes

1.1.1 - 14/07/2021 - Release Notes

1.0.1 - 07/07/2021 - Release Notes

Comments
  • Failed to install 1.2.0

    Describe the bug: Does not resolve dependency. I get an error when I run pip install auto-tensorflow; the message is: Could not find a version that matches keras-nightly~=2.5.0.dev

    To Reproduce: Run pip install auto-tensorflow. Expected behavior: auto-tensorflow installs successfully.

    Versions:

    • Auto-Tensorflow:1.2.0
    • Tensorflow:
    • Tensorflow-Extended:

    wontfix 
    opened by HenrryVargas 8
  • Colab Regression Example No Longer Working?

    Trying to run the Colab Regression notebook. All dependencies get installed, then I Restart and Run All to start the code. It errors out here:

    ##Step 1
    ##Run Data setup -> Infer Schema, find anomalies, create profile and show viz
    tfa.step_data_explore(viz=False)
    
    Data: Pipeline execution started...
    WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.
    WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
    WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
    ERROR:absl:Execution 2 failed.
    ---------------------------------------------------------------------------
    TypeCheckError                            Traceback (most recent call last)
    [<ipython-input-6-7e17a616f197>](https://localhost:8080/#) in <module>
          1 ##Step 1
          2 ##Run Data setup -> Infer Schema, find anomalies, create profile and show viz
    ----> 3 tfa.step_data_explore(viz=False)
    
    14 frames
    [/usr/local/lib/python3.7/dist-packages/auto_tensorflow/tfa.py](https://localhost:8080/#) in step_data_explore(self, viz)
       1216     Viz: (False) Is data visualization required ?
       1217     '''
    -> 1218     self.pipeline = self.tfadata.run_initial(self._train_data_path, self._test_data_path, self._tfx_root, self._metadata_db_root, self.tfautils, viz)
       1219     self.generate_config_json()
       1220 
    
    [/usr/local/lib/python3.7/dist-packages/auto_tensorflow/tfa.py](https://localhost:8080/#) in run_initial(self, _train_data_path, _test_data_path, _tfx_root, _metadata_db_root, tfautils, viz)
        211     #Run data pipeline
        212     print("Data: Pipeline execution started...")
    --> 213     LocalDagRunner().run(self.pipeline)
        214     self._run = True
        215 
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/tfx_runner.py](https://localhost:8080/#) in run(self, pipeline)
         76     c = compiler.Compiler()
         77     pipeline_pb = c.compile(pipeline)
    ---> 78     return self.run_with_ir(pipeline_pb)
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/local/local_dag_runner.py](https://localhost:8080/#) in run_with_ir(self, pipeline)
         85           with metadata.Metadata(connection_config) as mlmd_handle:
         86             partial_run_utils.snapshot(mlmd_handle, pipeline)
    ---> 87         component_launcher.launch()
         88         logging.info('Component %s is finished.', node_id)
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/launcher.py](https://localhost:8080/#) in launch(self)
        543               executor_watcher.address)
        544           executor_watcher.start()
    --> 545         executor_output = self._run_executor(execution_info)
        546       except Exception as e:  # pylint: disable=broad-except
        547         execution_output = (
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/launcher.py](https://localhost:8080/#) in _run_executor(self, execution_info)
        418     outputs_utils.make_output_dirs(execution_info.output_dict)
        419     try:
    --> 420       executor_output = self._executor_operator.run_executor(execution_info)
        421       code = executor_output.execution_result.code
        422       if code != 0:
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/beam_executor_operator.py](https://localhost:8080/#) in run_executor(self, execution_info, make_beam_pipeline_fn)
         96         make_beam_pipeline_fn=make_beam_pipeline_fn)
         97     executor = self._executor_cls(context=context)
    ---> 98     return python_executor_operator.run_with_executor(execution_info, executor)
    
    [/usr/local/lib/python3.7/dist-packages/tfx/orchestration/portable/python_executor_operator.py](https://localhost:8080/#) in run_with_executor(execution_info, executor)
         57   output_dict = copy.deepcopy(execution_info.output_dict)
         58   result = executor.Do(execution_info.input_dict, output_dict,
    ---> 59                        execution_info.exec_properties)
         60   if not result:
         61     # If result is not returned from the Do function, then try to
    
    [/usr/local/lib/python3.7/dist-packages/tfx/components/statistics_gen/executor.py](https://localhost:8080/#) in Do(self, input_dict, output_dict, exec_properties)
        138             stats_api.GenerateStatistics(stats_options)
        139             | 'WriteStatsOutput[%s]' % split >>
    --> 140             stats_api.WriteStatisticsToBinaryFile(output_path))
        141         logging.info('Statistics for split %s written to %s.', split,
        142                      output_uri)
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py](https://localhost:8080/#) in __or__(self, ptransform)
        135 
        136   def __or__(self, ptransform):
    --> 137     return self.pipeline.apply(ptransform, self)
        138 
        139 
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/pipeline.py](https://localhost:8080/#) in apply(self, transform, pvalueish, label)
        651     if isinstance(transform, ptransform._NamedPTransform):
        652       return self.apply(
    --> 653           transform.transform, pvalueish, label or transform.label)
        654 
        655     if not isinstance(transform, ptransform.PTransform):
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/pipeline.py](https://localhost:8080/#) in apply(self, transform, pvalueish, label)
        661       old_label, transform.label = transform.label, label
        662       try:
    --> 663         return self.apply(transform, pvalueish)
        664       finally:
        665         transform.label = old_label
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/pipeline.py](https://localhost:8080/#) in apply(self, transform, pvalueish, label)
        710 
        711       if type_options is not None and type_options.pipeline_type_check:
    --> 712         transform.type_check_outputs(pvalueish_result)
        713 
        714       for tag, result in ptransform.get_named_nested_pvalues(pvalueish_result):
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/transforms/ptransform.py](https://localhost:8080/#) in type_check_outputs(self, pvalueish)
        464 
        465   def type_check_outputs(self, pvalueish):
    --> 466     self.type_check_inputs_or_outputs(pvalueish, 'output')
        467 
        468   def type_check_inputs_or_outputs(self, pvalueish, input_or_output):
    
    [/usr/local/lib/python3.7/dist-packages/apache_beam/transforms/ptransform.py](https://localhost:8080/#) in type_check_inputs_or_outputs(self, pvalueish, input_or_output)
        495                 hint=hint,
        496                 actual_type=pvalue_.element_type,
    --> 497                 debug_str=type_hints.debug_str()))
        498 
        499   def _infer_output_coder(self, input_type=None, input_coder=None):
    
    TypeCheckError: Output type hint violation at WriteStatsOutput[train]: expected <class 'apache_beam.pvalue.PDone'>, got <class 'str'>
    Full type hint:
    IOTypeHints[inputs=((<class 'tensorflow_metadata.proto.v0.statistics_pb2.DatasetFeatureStatisticsList'>,), {}), outputs=((<class 'apache_beam.pvalue.PDone'>,), {})]
    File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
    File "<frozen importlib._bootstrap_external>", line 728, in exec_module
    File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
    File "/usr/local/lib/python3.7/dist-packages/tensorflow_data_validation/api/stats_api.py", line 113, in <module>
        class WriteStatisticsToBinaryFile(beam.PTransform):
    File "/usr/local/lib/python3.7/dist-packages/apache_beam/typehints/decorators.py", line 776, in annotate_input_types
        *converted_positional_hints, **converted_keyword_hints)
    
    based on:
      IOTypeHints[inputs=None, outputs=((<class 'apache_beam.pvalue.PDone'>,), {})]
      File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 728, in exec_module
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "/usr/local/lib/python3.7/dist-packages/tensorflow_data_validation/api/stats_api.py", line 113, in <module>
          class WriteStatisticsToBinaryFile(beam.PTransform):
      File "/usr/local/lib/python3.7/dist-packages/apache_beam/typehints/decorators.py", line 863, in annotate_output_types
          f._type_hints = th.with_output_types(return_type_hint)  # pylint: disable=protected-access
    
    opened by windowshopr 2
  • Dump when training Text column model on GPUs

    Describe the bug: The model crashes with an error when training on a GPU runtime.

    To Reproduce: Train a model with a free-text column on a GPU device.

    Expected behavior: Training should complete without errors.

    Versions:

    • Auto-Tensorflow: 1.0.1
    • Tensorflow: 2.5.0
    • Tensorflow-Extended: 0.29.0

    bug 
    opened by rafiqhasan 2
  • Add automated - advanced feature engineering

    Is your feature request related to a problem? Please describe. Yes

    Describe the solution you'd like: Add more feature engineering options for automated consideration:

    1. Squared
    2. Square root
    3. Min-Max scaling (normalization is already there)
    4. etc.

    enhancement 
    opened by rafiqhasan 1
  • Known limitations

    There are a few limitations in the initial release but we are working day and night to resolve these and add them as future features.

    1. Doesn't support Image / Audio data
    2. Doesn't support quote-delimited CSVs (TFX doesn't support quoted CSVs yet)
    3. Classification only supports integer labels from 0 to N
    enhancement 
    opened by rafiqhasan 1
  • When AutoTF will be released for Time Series ?

    enhancement 
    opened by gulabpatel 1
Releases (1.3.4)
  • 1.3.4(Dec 9, 2022)

    • Fixed bugs
    • Cleaned up PIP dependencies for faster installation

    Full Changelog: https://github.com/rafiqhasan/auto-tensorflow/compare/1.3.3...1.3.4

  • 1.3.3(Dec 9, 2022)

  • 1.3.2(Nov 26, 2021)

    • Added bucketization feature engineering
    • Added more diverse HPT options
    • Replaced ReLU with SELU
    • Better accuracy on regression models
    • Changed HPT objective for classification models
    • Multiple improvements for higher-accuracy models

    Full Changelog: https://github.com/rafiqhasan/auto-tensorflow/compare/1.3.1...1.3.2

  • 1.3.1(Nov 18, 2021)

    Features:

    1. Upgraded to TF 2.6.0
    2. Upgraded to TFX 1.4.0
    3. Added new feature engineering functions
    4. Added capability to handle multi-line CSVs
    5. Keras Tuner functionality is now more optimised and HPT runs faster
  • 1.2.0(Jul 24, 2021)

    1.2.0 - 07/24/2021

    • Upgraded to TFX 1.0.0
    • Major performance fixes
    • Fixed bugs
    • Added more features:
      • TFX CSVExampleGen speedup
      • Added more feature engineering options
  • 1.1.1(Jul 20, 2021)

    1.1.1 - 07/14/2021

    • Fixed bugs
    • Added more features:
      • Added complexity = 2 for automated tunable textual layers
      • Textual label for Classification
      • Imbalanced label handling
      • GPU fixes
  • 1.0.1(Jul 20, 2021)

Owner
Hasan Rafiq
Technology enthusiast working @ Google: Google Cloud, Machine Learning, Tensorflow, Python