Overview

PRAnCER

PRAnCER (Platform enabling Rapid Annotation for Clinical Entity Recognition) is a web platform that enables the rapid annotation of medical terms within clinical notes. A user can highlight spans of text and quickly map them to concepts in large vocabularies within a single, intuitive platform. Built-in search and recommendation features let users find labels without ever leaving the interface. Further, the platform can take in output from existing clinical concept extraction systems as pre-annotations, which users can accept or modify in a single click. Together, these features let users focus their time and energy on the harder examples.

Usage

Installation Instructions

Detailed installation instructions are provided below; PRAnCER can operate on Mac, Windows, and Linux machines.

Linking to UMLS Vocabulary

Use of the platform requires a UMLS license, as it relies on several UMLS-derived files to surface recommendations. Please email magrawal (at) mit (dot) edu to request these files, and include your UMLS API key so we can confirm your license; you can sign up for a license through the NLM's UMLS Terminology Services (UTS). Surfacing additional information in the UI also requires entering your UMLS API key in application/utils/constants.py.
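
As a minimal sketch only, the API key entry in application/utils/constants.py would look something like the line below; the variable name here is hypothetical, so use whatever name the file actually defines.

# Illustrative sketch only: the variable name is hypothetical.
# Check application/utils/constants.py for the exact name it defines.
UMLS_API_KEY = "your-umls-api-key-here"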

Loading in and Exporting Data

To load in data, users directly place any clinical notes as .txt files in the /data folder; an example file is provided. The output of annotation is a .json file in the /data folder with the same file prefix as the .txt. To start annotating a note from scratch, a user can simply delete the corresponding .json file.
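
As a concrete illustration (the file names here are hypothetical), the /data folder might look like this after annotating one note:

data/
  note_001.txt     the clinical note you placed there
  note_001.json    the annotation output written by the platform, sharing the note_001 prefix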

Pre-filled Suggestions

Two options exist for pre-filled suggestions; users specify which to use in application/utils/constants.py. The default is "MAP".

Option 1, "MAP", is for users who want to preload annotations based on a dictionary of high-precision text-to-CUI mappings for their domain, e.g. {"hypertension": "C0020538"}. A pre-created dictionary is provided alongside the UMLS files described above.

Option 2, "CSV", is for users who want to load in pre-computed pre-annotations (e.g. from their own algorithm, scispacy, cTAKES, or MetaMap). Simply place a CSV of spans and CUIs in the /data folder, with the same prefix as the corresponding .txt file, and our scripts will automatically incorporate those annotations; example.csv in the /data folder provides an example, as sketched below.
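
As a rough sketch only, assuming one row per annotated span with character offsets and a CUI (the column layout shown here is an assumption; example.csv in /data defines the exact format the scripts expect), a pre-annotation file named note_001.csv might contain:

start,end,cui
112,124,C0020538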

Installation

The platform requires Python 3.7, node.js, and several other Python and JavaScript packages. Specific installation instructions for each follow!

Backend requirements

1) First, check whether python3 is installed:

$ python3 --version

If it is installed, you should see Python 3.7.x

If you need to install it, you can do so with a package manager; on macOS, for example, Homebrew:

$ brew install python3

2) With python3 installed, install the necessary Python packages.

You can install them with the Python package installer pip:

$ pip3 install flask flask_script flask_migrate flask_bcrypt nltk editdistance requests lxml

Frontend requirements

3) Check to see if npm and node.js are installed:

$ npm -v
$ node -v

If they are, you can skip to Step 4. If not, install node by first installing nvm:

$ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.1/install.sh | bash

Source: https://github.com/nvm-sh/nvm

Restart your terminal and confirm the nvm installation with:

$ command -v nvm

which will return nvm if successful.

Then install node version 10.15.1:

$ nvm install 10.15.1

4) Install the node dependencies:

$ cd static
$ npm install --save

For remote server setups, the install commands may trigger permissions errors.
If so, try adding --user to the pip3 install command.

Run program

Run the backend

Open one terminal tab to run the backend server:

$ python3 manage.py runserver

If all goes well, you should see * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) followed by a few more lines in the terminal.

Run the frontend

Open a second terminal tab to run the frontend:

$ cd static
$ npm start

After this, open your browser to http://localhost:3000 and you should see the homepage!

Contact

If you have any questions, please email Monica Agrawal (magrawal (at) mit (dot) edu). Credit belongs to Ariel Levy for the development of this platform.

Based on React-Redux-Flask boilerplate.
