HiddenMarkovModel implements hidden Markov models with Gaussian mixture emission distributions on top of TensorFlow 2.0

Overview

Class HiddenMarkovModel

HiddenMarkovModel implements hidden Markov models with Gaussian mixture emission distributions on top of TensorFlow 2.0

Installation

pip install --upgrade git+https://gitlab.com/kesmarag/hmm-gmm-tf2
HiddenMarkovModel(p0, tp, em_w, em_mu, em_var)
Args:
  p0: 1D numpy array
    Determines the probability of each hidden state for the first
    hidden variable in the Markov chain.
    e.g. np.array([0.5, 0.25, 0.25]) (3 hidden states)
  tp: 2D numpy array
    Determines the transition probabilities for moving from one hidden
    state to another. The (i,j) element of the matrix denotes the
    probability of transitioning from the i-th state to the j-th state.
    e.g. np.array([[0.80, 0.15, 0.05],
                   [0.20, 0.55, 0.25],
                   [0.15, 0.15, 0.70]])
    (3 hidden states)
  em_w: 2D numpy array
    Contains the weights of the Gaussian mixtures.
    Each row corresponds to a hidden state.
    e.g. np.array([[0.8, 0.2],
                   [0.5, 0.5],
                   [0.1, 0.9]])
    (3 hidden states, 2 Gaussian mixtures)
  em_mu: 3D numpy array
    Determines the mean value vector of each component
    of the emission distributions.
    The first dimension refers to the hidden states, whereas the
    second one refers to the mixture components.
    e.g. np.array([[[2.2, 1.3], [1.2, 0.2]],    1st hidden state
                   [[1.3, 5.0], [4.3, -2.3]],   2nd hidden state
                   [[0.0, 1.2], [0.4, -2.0]]])  3rd hidden state
    (3 hidden states, 2 Gaussian mixtures)
  em_var: 3D numpy array
    Determines the variance vector for each component of the
    emission distributions. All variances must be positive.
    e.g. np.array([[[2.2, 1.3], [1.2, 0.2]],    1st hidden state
                   [[1.3, 5.0], [4.3, 2.3]],    2nd hidden state
                   [[0.5, 1.2], [0.4, 2.0]]])   3rd hidden state
    (3 hidden states, 2 Gaussian mixtures)
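
For instance, assembling the example arrays above into a model with 3 hidden states and 2 mixture components per state (a minimal sketch; the variance values are chosen to be strictly positive):

import numpy as np
from kesmarag.hmm import HiddenMarkovModel

p0 = np.array([0.5, 0.25, 0.25])
tp = np.array([[0.80, 0.15, 0.05],
               [0.20, 0.55, 0.25],
               [0.15, 0.15, 0.70]])
em_w = np.array([[0.8, 0.2],
                 [0.5, 0.5],
                 [0.1, 0.9]])
em_mu = np.array([[[2.2, 1.3], [1.2, 0.2]],
                  [[1.3, 5.0], [4.3, -2.3]],
                  [[0.0, 1.2], [0.4, -2.0]]])
em_var = np.array([[[2.2, 1.3], [1.2, 0.2]],
                   [[1.3, 5.0], [4.3, 2.3]],
                   [[0.5, 1.2], [0.4, 2.0]]])
model = HiddenMarkovModel(p0, tp, em_w, em_mu, em_var)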

log_posterior

HiddenMarkovModel.log_posterior(self, data)
Computes the log probability density of a batch of observations under the model.

Args:
  data: 3D numpy array
    The first dimension refers to each component of the batch.
    The second dimension refers to each specific time step.
    The third dimension refers to the values of the observed data.

Returns:
  1D numpy array with the value of the log-probability density for each series in the batch.
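
For instance, with the model constructed above and an illustrative batch of 5 two-dimensional series of length 100:

data = np.random.randn(5, 100, 2)  # 5 series, 100 time steps, 2 observed values each
scores = model.log_posterior(data)
print(scores.shape)  # (5,)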

viterbi_algorithm

HiddenMarkovModel.viterbi_algorithm(self, data)
Runs the Viterbi algorithm to compute the most probable
hidden state path for each series in a batch of data.

Args:
  data: 3D numpy array
    The first dimension refers to each component of the batch.
    The second dimension refers to each specific time step.
    The third dimension refers to the values of the observed data.

Returns:
  2D numpy array with the most probable hidden state paths.
    The first dimension refers to each component of the batch.
    The second dimension holds the hidden state indices
    (0, 1, ..., K-1), where K is the total number of hidden states.
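
Continuing the illustrative batch above:

paths = model.viterbi_algorithm(data)
print(paths.shape)  # (5, 100); each entry is a hidden state index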

fit

HiddenMarkovModel.fit(self, data, max_iter=100, min_var=0.01, verbose=False)
This method re-estimates the model parameters with respect to a batch of
observations, using the Expectation-Maximization (E-M) algorithm.

Args:
  data: 3D numpy array
    The first dimension refers to each component of the batch.
    The second dimension refers to each specific time step.
    The third dimension refers to the values of the observed data.
  max_iter: positive integer number
    The maximum number of iterations.
  min_var: non-negative real value
    The minimum allowed variance. This lower bound is used
    in order to prevent overfitting of the model.

Returns:
  1D numpy array with the log-posterior probability densities for each training iteration.
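
A typical call, again with the illustrative batch (the settings here are arbitrary):

history = model.fit(data, max_iter=50, min_var=0.01, verbose=False)
# history[i] is the log-posterior value reached after the i-th E-M iteration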

generate

HiddenMarkovModel.generate(self, length, num_series=1, p=0.2)
Generates a batch of time series using an importance-sampling-like approach.

Args:
  length: positive integer
    The length of each time series.
  num_series: positive integer (default 1)
    The number of time series to generate.
  p: real value between 0.0 and 1.0 (default 0.2)
    The importance sampling parameter.
    At each iteration:
    [A] Draw X and calculate p(X).
        If p(X) > p(X_{q-1}), then
          accept X as X_q;
        else
          draw r from [0, 1] using the uniform distribution.
          If r > p, then
            accept the best of the rejected candidates;
          else
            go to [A].
Returns:
  A tuple of two arrays:
    a 3D numpy array with the drawn time series, and
    a 2D numpy array with the corresponding hidden states.
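
The acceptance loop above can be sketched in plain Python. Here sample_fn and log_p_fn are hypothetical helpers standing in for the model's internal sampling and scoring, so this illustrates the procedure rather than the library's actual implementation:

import numpy as np

def importance_like_draw(sample_fn, log_p_fn, num_series, p=0.2):
    series = []
    prev_lp = -np.inf  # log p(X_{q-1}), the density of the last accepted draw
    for _ in range(num_series):
        rejected = []
        while True:
            x = sample_fn()          # [A] draw X
            lp = log_p_fn(x)         # and calculate p(X) (in log space)
            if lp > prev_lp:
                series.append(x)     # accept X as X_q
                prev_lp = lp
                break
            rejected.append((lp, x))
            if np.random.uniform() > p:
                # accept the best of the rejected candidates
                lp, x = max(rejected, key=lambda t: t[0])
                series.append(x)
                prev_lp = lp
                break
            # otherwise, go back to [A]
    return series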

kl_divergence

HiddenMarkovModel.kl_divergence(self, other, data)
Estimates the value of the Kullback-Leibler divergence (KLD)
between the model and another model with respect to some data.
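
Usage follows the same pattern as the other methods; other_model stands for any second HiddenMarkovModel instance, and the return value is presumably a scalar estimate:

kld = model.kl_divergence(other_model, data)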

Example

import numpy as np
from kesmarag.hmm import HiddenMarkovModel, new_left_to_right_hmm, store_hmm, restore_hmm, toy_example
dataset = toy_example()

This helper function creates a test dataset containing a single two-dimensional time series with 700 samples.

The first 200 samples correspond to a Gaussian mixture with

    w1 = 0.6, w2 = 0.4
    mu1 = [0.5, 1], mu2 = [2, 1]
    var1 = [1, 1], var2 = [1.2, 1]

the next 300 correspond to a Gaussian mixture with

    w1 = 0.6, w2 = 0.4
    mu1 = [2, 5], mu2 = [4, 5]
    var1 = [0.8, 1], var2 = [0.8, 1]

and the last 200 correspond to a Gaussian mixture with

    w1 = 0.6, w2 = 0.4
    mu1 = [4, 1], mu2 = [6, 5]
    var1 = [1, 1], var2 = [0.8, 1.2]
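
For reference, a dataset with this structure could be synthesized roughly as follows (a sketch, not the library's actual toy_example implementation):

import numpy as np

def sample_gmm(n, w, mu, var):
    # draw n points from a diagonal-covariance Gaussian mixture
    comps = np.random.choice(len(w), size=n, p=w)
    mu, var = np.array(mu), np.array(var)
    return np.random.normal(mu[comps], np.sqrt(var[comps]))

seg1 = sample_gmm(200, [0.6, 0.4], [[0.5, 1], [2, 1]], [[1, 1], [1.2, 1]])
seg2 = sample_gmm(300, [0.6, 0.4], [[2, 5], [4, 5]], [[0.8, 1], [0.8, 1]])
seg3 = sample_gmm(200, [0.6, 0.4], [[4, 1], [6, 5]], [[1, 1], [0.8, 1.2]])
dataset = np.concatenate([seg1, seg2, seg3])[np.newaxis, ...]  # shape (1, 700, 2)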
print(dataset.shape)
(1, 700, 2)
model = new_left_to_right_hmm(states=3, mixtures=2, data=dataset)
model.fit(dataset, verbose=True)
epoch:   0 , ln[p(X|λ)] = -3094.3748904062295
epoch:   1 , ln[p(X|λ)] = -2391.3602228316568
epoch:   2 , ln[p(X|λ)] = -2320.1563724302564
epoch:   3 , ln[p(X|λ)] = -2284.996645965759
epoch:   4 , ln[p(X|λ)] = -2269.0055909790053
epoch:   5 , ln[p(X|λ)] = -2266.1395773469876
epoch:   6 , ln[p(X|λ)] = -2264.4267494952455
epoch:   7 , ln[p(X|λ)] = -2263.156612481979
epoch:   8 , ln[p(X|λ)] = -2262.2725752851293
epoch:   9 , ln[p(X|λ)] = -2261.612564557431
epoch:  10 , ln[p(X|λ)] = -2261.102826808333
epoch:  11 , ln[p(X|λ)] = -2260.7189908960695
epoch:  12 , ln[p(X|λ)] = -2260.437608729253
epoch:  13 , ln[p(X|λ)] = -2260.231860238426
epoch:  14 , ln[p(X|λ)] = -2260.0784163526014
epoch:  15 , ln[p(X|λ)] = -2259.960659542152
epoch:  16 , ln[p(X|λ)] = -2259.8679640963023
epoch:  17 , ln[p(X|λ)] = -2259.793721328861
epoch:  18 , ln[p(X|λ)] = -2259.733658260372
epoch:  19 , ln[p(X|λ)] = -2259.684791553708
epoch:  20 , ln[p(X|λ)] = -2259.6448728507144
epoch:  21 , ln[p(X|λ)] = -2259.6121181368353
epoch:  22 , ln[p(X|λ)] = -2259.5850765029527

The fit call also returns the log-posterior values of all the epochs:
[-3094.3748904062295,
 -2391.3602228316568,
 -2320.1563724302564,
 -2284.996645965759,
 -2269.0055909790053,
 -2266.1395773469876,
 -2264.4267494952455,
 -2263.156612481979,
 -2262.2725752851293,
 -2261.612564557431,
 -2261.102826808333,
 -2260.7189908960695,
 -2260.437608729253,
 -2260.231860238426,
 -2260.0784163526014,
 -2259.960659542152,
 -2259.8679640963023,
 -2259.793721328861,
 -2259.733658260372,
 -2259.684791553708,
 -2259.6448728507144,
 -2259.6121181368353,
 -2259.5850765029527]
print(model)
### [kesmarag.hmm.HiddenMarkovModel] ###

=== Prior probabilities ================

[1. 0. 0.]

=== Transition probabilities ===========

[[0.995    0.005    0.      ]
 [0.       0.996666 0.003334]
 [0.       0.       1.      ]]

=== Emission distributions =============

*** Hidden state #1 ***

--- Mixture #1 ---
weight : 0.779990073797613
mean_values : [0.553266 1.155844]
variances : [1.000249 0.967666]

--- Mixture #2 ---
weight : 0.22000992620238702
mean_values : [2.598735 0.633391]
variances : [1.234133 0.916872]

*** Hidden state #2 ***

--- Mixture #1 ---
weight : 0.5188217626642593
mean_values : [2.514082 5.076246]
variances : [1.211327 0.903328]

--- Mixture #2 ---
weight : 0.4811782373357407
mean_values : [3.080913 5.039015]
variances : [1.327171 1.152902]

*** Hidden state #3 ***

--- Mixture #1 ---
weight : 0.5700082256217439
mean_values : [4.03977  1.118112]
variances : [0.97422 1.00621]

--- Mixture #2 ---
weight : 0.429991774378256
mean_values : [6.162698 5.064422]
variances : [0.753987 1.278449]
store_hmm(model, 'test_model.npz')
load_model = restore_hmm('test_model.npz')
gen_data = model.generate(700, 10, 0.05)
0 -2129.992044055025
1 -2316.443344656749
2 -2252.206072731434
3 -2219.667047368621
4 -2206.6760352374367
5 -2190.952289092368
6 -2180.0268345326112
7 -2353.7153702977475
8 -2327.955163192414
9 -2227.4471755146196
print(gen_data)
(array([[[-0.158655,  0.117973],
        [ 4.638243,  0.249049],
        [ 0.160007,  1.079808],
        ...,
        [ 4.671152,  4.18109 ],
        [ 2.121958,  3.747366],
        [ 2.572435,  6.352445]],

       [[-0.158655,  0.117973],
        [-1.379849,  0.998761],
        [-0.209945,  0.947926],
        ...,
        [ 3.93909 ,  1.383347],
        [ 5.356786,  1.57808 ],
        [ 5.0488  ,  5.586755]],

       [[-0.158655,  0.117973],
        [ 1.334   ,  0.979797],
        [ 3.708721,  1.321735],
        ...,
        [ 3.819756,  0.78794 ],
        [ 6.53362 ,  4.177215],
        [ 7.410012,  6.30113 ]],

       ...,

       [[-0.158655,  0.117973],
        [-0.152573,  0.612675],
        [-0.917723, -0.632936],
        ...,
        [ 4.110186, -0.027864],
        [ 2.82694 ,  0.65438 ],
        [ 6.825696,  5.27543 ]],

       [[-0.158655,  0.117973],
        [ 3.141896,  0.560984],
        [ 2.552211, -0.223568],
        ...,
        [ 4.41791 , -0.430231],
        [ 2.525892, -0.64211 ],
        [ 5.52568 ,  6.313566]],

       [[-0.158655,  0.117973],
        [ 0.845694,  2.436781],
        [ 1.564802, -0.652546],
        ...,
        [ 2.33009 ,  0.932121],
        [ 7.095326,  6.339674],
        [ 3.748988,  2.25159 ]]]), array([[0., 0., 0., ..., 1., 1., 1.],
       [0., 0., 0., ..., 2., 2., 2.],
       [0., 0., 0., ..., 2., 2., 2.],
       ...,
       [0., 0., 0., ..., 2., 2., 2.],
       [0., 0., 0., ..., 2., 2., 2.],
       [0., 0., 0., ..., 2., 2., 2.]]))