Real-time analysis of intracranial neurophysiology recordings.

Overview

py_neuromodulation

The py_neuromodulation toolbox allows real-time-capable processing of multimodal electrophysiological data. Its primary use case is movement prediction for adaptive deep brain stimulation.

The documentation, including example usage and parametrization, is available at https://neuromodulation.github.io/py_neuromodulation/.

Setup

To run this toolbox, first create a new conda environment:

conda env create --file=env.yml

The main modules provide real-time-enabled feature preprocessing based on iEEG BIDS data.

Different features can be enabled, disabled, and parametrized in nm_settings.json (https://github.com/neuromodulation/py_neuromodulation/blob/main/pyneuromodulation/nm_settings.json).

The current implementation mainly focuses on band power and sharpwave feature estimation.

An example folder with a mock subject and a precomputed derivative feature set is provided.

To run feature estimation on the example BIDS data, run the following from the root directory:

python main.py

This will write a feature_arr.csv file in the 'examples/data/derivatives' folder.

For further documentation, see ParametrizationDefinition for a description of the necessary parametrization files. FeatureEstimationDemo walks through an example feature estimation and explains sharpwave estimation.

Comments
  • Adopt the usage of an argument parser (argparse)

    argparse is a Python standard-library module for parsing command-line options, arguments, and sub-commands.

    documentation: https://docs.python.org/3/library/argparse.html
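
    A minimal sketch of what an argparse-based entry point for main.py could look like; the --bids-path and --settings options are illustrative assumptions, not an existing CLI:

        # Hypothetical argparse-based entry point for main.py; the options
        # below are illustrative assumptions, not the actual CLI.
        import argparse

        def parse_args() -> argparse.Namespace:
            parser = argparse.ArgumentParser(
                description="Run py_neuromodulation feature estimation."
            )
            parser.add_argument("--bids-path", default="examples/data",
                                help="Path to the BIDS root directory.")
            parser.add_argument("--settings", default="pyneuromodulation/nm_settings.json",
                                help="Path to the settings file.")
            return parser.parse_args()

        if __name__ == "__main__":
            args = parse_args()
            print(args.bids_path, args.settings)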

    enhancement 
    opened by mousa-saeed 7
  • Conda environment + setup

    @timonmerk I tried to install pyneuromodulation as a package, but it wasn't so easy to make it work. I did the following:

    • I modified env.yml (previously, setting up the environment didn't work; additionally, env.yml was configured to install all packages via pip, whereas now all packages except pybids are installed via conda).
    • I installed the conda environment via

    conda env create -f env.yml

    • I activated the conda environment via

    conda activate pyneuromodulation_test

    • I installed pyneuromodulation in editable mode

    pip install -r requirements.txt --editable .

    • In Python, I ran

        import py_neuromodulation
        print(dir(py_neuromodulation))

    • Which told me that the only items in this package are:

        ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__']

    • Only once I had added the line "from . import nm_BidsStream" to __init__.py (as sketched below) did I get the following output for dir():

        ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'nm_BidsStream', 'nm_IO', 'nm_bandpower', 'nm_coherence', 'nm_define_nmchannels', 'nm_eval_timing', 'nm_features', 'nm_fft', 'nm_filter', 'nm_generator', 'nm_hjorth_raw', 'nm_kalmanfilter', 'nm_normalization', 'nm_notch_filter', 'nm_plots', 'nm_projection', 'nm_rereference', 'nm_resample', 'nm_run_analysis', 'nm_sharpwaves', 'nm_stft', 'nm_stream', 'nm_test_settings']

    Questions:

    • Was this the intended way of setting up the environment and the package?
    • What could the problem with __init__.py be related to?
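
    A minimal sketch of an __init__.py that re-exports the submodules so that dir() lists them; the module list is trimmed here, and whether all modules should be re-exported this way is an open design question:

        # Hypothetical __init__.py for the pyneuromodulation package,
        # re-exporting submodules so that dir(py_neuromodulation) lists them
        # (module list trimmed for brevity).
        from . import nm_BidsStream
        from . import nm_IO
        from . import nm_bandpower
        from . import nm_features
        from . import nm_sharpwaves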
    opened by richardkoehler 4
  • Add Coherence and temporarily remove PDC and DTC

    @timonmerk We should maybe think about removing PDC and DTC from main again (and moving them to a branch) until we make sure that they are implemented correctly. Instead we could think about adding simple coherence, which might be less specific but easier to implement and less computationally expensive.
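
    A minimal sketch of simple magnitude-squared coherence between two channels using scipy.signal.coherence; the sampling rate, test signals and beta-band averaging are illustrative assumptions:

        # Sketch: magnitude-squared coherence between two channels via Welch's
        # method. Sampling rate, test signals and band limits are assumptions.
        import numpy as np
        from scipy.signal import coherence

        fs = 1000  # sampling frequency in Hz (assumption)
        rng = np.random.default_rng(0)
        x = rng.standard_normal(fs * 4)            # e.g. an ECOG channel
        y = x + 0.5 * rng.standard_normal(fs * 4)  # correlated second channel

        f, cxy = coherence(x, y, fs=fs, nperseg=fs)
        # average the coherence within a band of interest, e.g. beta (13-35 Hz)
        beta_coh = cxy[(f >= 13) & (f <= 35)].mean()
        print(f"mean beta coherence: {beta_coh:.3f}")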

    enhancement 
    opened by richardkoehler 4
  • Specifying segment_lengths for frequency_ranges explicitly

    @timonmerk

    Currently the specification of segment_lengths is not very intuitive and is done implicitly, e.g. 10 == 1/10 of the given segment. Maybe we could write the segment_lengths explicitly in milliseconds, so that they are independent of parameters such as the length of the data batch that is passed. So whether the data batch is 4096, 8000 or 700 samples long, the segment_length would always remain the same (e.g. 100 ms). Instead of this:

        "frequency_ranges": [[4, 8], [8, 12], [13, 20], [20, 35], [13, 35], [60, 80], [90, 200], [60, 200], [200, 300]],
        "segment_lengths": [1, 2, 2, 3, 3, 3, 10, 10, 10, 10],

    I would imagine something like this:

        "frequency_ranges": [[4, 8], [8, 12], [13, 20], [20, 35], [13, 35], [60, 80], [90, 200], [60, 200], [200, 300]],
        "segment_lengths": [1000, 500, 333, 333, 333, 100, 100, 100, 100],

    or, even better in my opinion, the following (because it is less prone to errors: the user is forced to write the segment length right after the frequency range). To underline this argument, I noticed that there were too many segment_lengths in the settings.json file: there were 10 segment_lengths but only 9 frequency_ranges. So my suggestion:

        "frequency_ranges": [[[4, 8], 1000], [[8, 12], 500], [[13, 20], 333], [[20, 35], 333], [[13, 35], 333], [[60, 80], 100], [[90, 200], 100], [[60, 200], 100], [[200, 300], 100]],
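
    A minimal sketch of how the proposed paired format could be parsed and converted from milliseconds to samples; the structure follows the suggestion above, the variable names are illustrative. Pairing each range with its segment length makes a count mismatch like the one described above impossible:

        # Sketch: parsing the proposed paired format and converting
        # milliseconds to samples (structure from the suggestion above).
        settings = {
            "frequency_ranges": [
                [[4, 8], 1000], [[8, 12], 500], [[13, 20], 333],
                [[60, 80], 100], [[200, 300], 100],
            ]
        }

        fs = 1000  # sampling frequency in Hz (assumption)
        for (f_low, f_high), seg_ms in settings["frequency_ranges"]:
            seg_samples = int(seg_ms * fs / 1000)
            print(f"{f_low}-{f_high} Hz -> {seg_ms} ms = {seg_samples} samples")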

    opened by richardkoehler 4
  • Add cortical-subcortical feature estimation

    Given two channels, calculate:

    • partial directed coherence
    • phase amplitude coupling

    Add as a separate "feature" file. Writing the results to the output dataframe still needs to be implemented. A sketch of one of the proposed measures follows below.
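
    A minimal sketch of phase-amplitude coupling between two channels via the mean vector length; the choice of measure, band limits and filter design are illustrative assumptions, not the planned implementation:

        # Sketch: phase-amplitude coupling between two channels via the mean
        # vector length; band limits and filter design are assumptions.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def bandpass(x, low, high, fs, order=4):
            b, a = butter(order, [low, high], btype="band", fs=fs)
            return filtfilt(b, a, x)

        fs = 1000
        rng = np.random.default_rng(0)
        subcortical = rng.standard_normal(fs * 4)  # e.g. an LFP channel
        cortical = rng.standard_normal(fs * 4)     # e.g. an ECOG channel

        phase = np.angle(hilbert(bandpass(subcortical, 13, 35, fs)))  # beta phase
        amp = np.abs(hilbert(bandpass(cortical, 60, 90, fs)))         # gamma amplitude
        pac = np.abs(np.mean(amp * np.exp(1j * phase)))
        print(f"PAC (mean vector length): {pac:.4f}")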

    opened by timonmerk 4
  • Optimize speed of feature normalization

    • Computation time of feature normalization was improved
    • Memory usage of feature normalization was improved
    • nm_eval_timing was repaired, as these changes had broken the API

    closes #136

    opened by richardkoehler 3
  • Rework "nm_settings.json"

    There are some minor issues in the nm_settings.json that we could improve:

    1. Add settings for STFT. Right now, STFT requires the sampling frequency to be 1000 Hz.
    2. Re-think frequency bands: either we define them separately from FFT, STFT and bandpass, in which case they can't be specifically adapted to each method, or we define them inside each method (FFT, STFT, bandpass) separately. Right now, FFT and STFT are "taking" the info from the bandpass settings. Also, the fband_names are currently hard-coded in nm_features.py (not sure why this is the case?).
    3. Make the notch filter settings more flexible (i.e. let the user define how many harmonics of the line frequency to remove); see the sketch below.
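
    A minimal sketch of a user-configurable number of line-noise harmonics, using mne.filter.notch_filter; the n_harmonics setting is a proposal, and the sampling rate, line frequency and test data are illustrative assumptions:

        # Sketch: letting the user choose how many line-noise harmonics to
        # remove; n_harmonics is a proposed setting, not an existing option.
        import numpy as np
        import mne

        fs = 1000.0
        line_noise = 50.0   # line frequency in Hz (assumption)
        n_harmonics = 3     # proposed user-defined setting

        freqs = np.arange(1, n_harmonics + 1) * line_noise  # 50, 100, 150 Hz
        data = np.random.default_rng(0).standard_normal((1, int(fs) * 10))
        filtered = mne.filter.notch_filter(data, Fs=fs, freqs=freqs, verbose=False)
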
    enhancement 
    opened by richardkoehler 3
  • Add possibility to "create" new channels

    Add the option in "settings.json" to "create" new channels by summation (e.g. new_channel: LFP_R_234, sum_channels: [LFP_R_2, LFP_R_3, LFP_R_4]); see the sketch below. Not an imminent priority, but it would be nice to have.
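
    A minimal sketch of the proposed summation; the channel names come from the example above, the data values are illustrative:

        # Sketch: "creating" a new channel by summation, as proposed above
        # (data values are illustrative).
        import numpy as np

        data = {  # channel name -> 1-D signal array
            "LFP_R_2": np.array([1.0, 2.0, 3.0]),
            "LFP_R_3": np.array([0.5, 0.5, 0.5]),
            "LFP_R_4": np.array([0.1, 0.2, 0.3]),
        }
        sum_channels = ["LFP_R_2", "LFP_R_3", "LFP_R_4"]  # from settings.json
        data["LFP_R_234"] = sum(data[ch] for ch in sum_channels)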

    opened by richardkoehler 3
  • Add column "status" to M1.tsv

    @timonmerk I suggest that we add the column "status" (like in BIDS channels.tsv files) to the M1.tsv file; values would be "good" or "bad". I recently encountered the problem that I wanted to exclude an ECOG channel from the ECOG average reference because it was too noisy, but there is no way to do this except to hard-code it. Alternatively, we could in theory also implement excluding channels by setting the type of the bad channel to "bad". This is not 100% logical, but it would avoid having to change the current API. However, the more sustainable and "BIDS-friendly" solution would be to implement the "status" column. I could easily imagine a scenario such as the following:

    • We want to make real-time predictions from an ECOG grid (e.g. 6 * 5 electrodes), but to reduce computational cost we only want to select the best-performing ECOG channel. However, we would like to re-reference this ECOG channel to an average ECOG reference. So the solution would be to mark all ECOG channels as type "ecog", mark the bad ECOG channels as "bad" and the others as "good", and only set "used" to 1 for the best-performing ECOG channel; see the sketch below.
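
    A sketch of what an M1.tsv with the proposed "status" column could look like; channel names and values are illustrative, while the name, rereference, used and type columns appear in the settings.py snippet further below:

        name     rereference  used  type  status
        ECOG_1   average      1     ecog  good
        ECOG_2   average      0     ecog  good
        ECOG_3   average      0     ecog  bad
        LFP_R_2  LFP_R_3      1     dbs   good
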
    opened by richardkoehler 3
  • Check in settings.py sampling frequency filter relation

    For sampling frequencies that are too low, the current notch filter will fail. Should the filter be designed in settings.py?

    settings.py should be written as a class, such that settings.json, df_M1 and related code are no longer part of start_BIDS / start_LSL.

        import os

        import numpy as np
        import pandas as pd

        # project-internal modules as used in this snippet
        import define_M1
        import rereference

        # raw_arr (an MNE Raw object) and PATH_M1 come from the surrounding script
        fs = int(np.ceil(raw_arr.info["sfreq"]))
        line_noise = int(raw_arr.info["line_freq"])

        # read df_M1 / create M1 if none is specified
        df_M1 = (
            pd.read_csv(PATH_M1, sep="\t")
            if PATH_M1 is not None and os.path.isfile(PATH_M1)
            else define_M1.set_M1(raw_arr.ch_names, raw_arr.get_channel_types())
        )
        ch_names = list(df_M1["name"])
        refs = df_M1["rereference"]
        to_ref_idx = np.array(df_M1[df_M1["used"] == 1].index)
        cortex_idx = np.where(df_M1["type"] == "ecog")[0]
        subcortex_idx = np.array(
            df_M1[df_M1["type"].isin(["seeg", "dbs", "lfp"])].index
        )
        ref_here = rereference.RT_rereference(
            ch_names, refs, to_ref_idx, cortex_idx, subcortex_idx, split_data=False
        )
    
    opened by timonmerk 3
  • Update rereference and test_features

    @timonmerk So I was pulling all the information extraction from df_M1, like cortex_idx and subcortex_idx etc., into the rereference module, to make the actual analysis script (e.g. test_features) a bit cleaner. This got me thinking that we should maybe initialize the settings as a class into which we load the settings.json and M1.tsv. We could then get the relevant information via methods, e.g. ch_names(), used_chs(), cortex_idx(), out_path() etc. (see the sketch below). What do you think? This pull request doesn't implement the class yet; I only pulled the cortex_idx etc. into the rereference initialization.
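
    A minimal sketch of the proposed settings class; the method names follow the suggestion above, while the implementation details are illustrative assumptions:

        # Sketch of the proposed settings class (method names from the
        # suggestion above; implementation details are assumptions).
        import json

        import pandas as pd

        class Settings:
            def __init__(self, settings_path: str, m1_path: str) -> None:
                with open(settings_path) as f:
                    self.settings = json.load(f)
                self.df_M1 = pd.read_csv(m1_path, sep="\t")

            def ch_names(self) -> list:
                return list(self.df_M1["name"])

            def used_chs(self) -> list:
                return list(self.df_M1.loc[self.df_M1["used"] == 1, "name"])

            def cortex_idx(self):
                return self.df_M1.index[self.df_M1["type"] == "ecog"]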

    opened by richardkoehler 3
  • Restructure nm_run_analysis.py

    This PR is aimed at restructuring run_analysis.py so that the data processing can now be performed independently of a py_neuromodulation data stream. This means in detail that:

    • I renamed the class Run to DataProcessor to make it clearer what it does: it processes a single batch of data. This is only my personal preference, so feel free to revert this change or give it any other name.
    • It is now DataProcessor and not the Stream that instantiates the Preprocessor and the Features classes from the given settings. This way DataProcessor can be used independently of Stream (e.g. in the case of timeflux, where timeflux handles the stream, hence the name of this branch).
    • I noticed a major bug in nm_run_analysis.py, where the settings specified "raw_resampling", but nm_run_analysis checked for "raw_resample" and this went unnoticed, because the matching statement didn't check for invalid specifications. This means that in example_BIDS.py "raw_resampling" was activated, but the data was not actually resampled. It took me a couple of hours to find this bug. I fixed the bug and also added handling the case of invalid strings, but the decoding performance has now slightly changed. Note that in this example case if one deactivates "raw_resampling", one might potentially observe improved performance.
    • After this restructuring, it is no longer necessary to specify "preprocessing_order" and to set the preprocessing methods to True or False. The keyword "preprocessing" now takes a list (equivalent to the previous preprocessing_order) and infers from it which preprocessing methods to use. If you want to deactivate a preprocessing method, simply take it out of the "preprocessing" list (see the sketch after this list). I would consider this change optional, but it makes the code much easier to read and helps us avoid errors in the settings specifications.
    • I removed the use of the test_settings function, and I think that we should deprecate it. It was a nice idea originally, but I am now of the opinion that each method or class (e.g. NotchFilter) must do the work of checking whether the passed arguments are valid anyway. Having an additional test_settings function would duplicate this check, and it would mean that each time we change a function we have to change test_settings too, so it duplicates work for ourselves as well.
    • Fixed typos, like LineLenth, and added and fixed some type hints.
    • I might have forgotten about some additional changes I made ...
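
    A minimal sketch of the proposed list-based preprocessing specification; "raw_resampling" comes from the description above, while the other step names and the dispatch logic are illustrative assumptions:

        # Sketch of the proposed list-based preprocessing specification;
        # step names other than "raw_resampling" are assumptions.
        settings = {
            # the order of the list determines the order of preprocessing
            "preprocessing": ["raw_resampling", "notch_filter", "re_referencing"],
        }

        for step in settings["preprocessing"]:
            if step == "raw_resampling":
                ...  # resample the raw data batch
            elif step == "notch_filter":
                ...  # remove line noise
            elif step == "re_referencing":
                ...  # apply re-referencing
            else:
                # invalid strings now raise instead of being silently ignored
                raise ValueError(f"Invalid preprocessing method: {step}")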

    Let me know what you think!

    opened by richardkoehler 0
Releases
v0.02

Owner
Interventional Cognitive Neuromodulation - Neumann Lab Berlin
Interventional and Cognitive Neuromodulation Group