Prosody Morph: A Python library for manipulating pitch and duration in an algorithmic way, for resynthesizing speech.

ProMo (Prosody Morph)

[Badges: Travis CI build status, Coveralls test coverage, MIT license]

Questions? Comments? Feedback? Chat with us on gitter!

Join the chat at https://gitter.im/pythonProMo/Lobby

A library for manipulating pitch and duration in an algorithmic way, for resynthesizing speech.

This library can be used to resynthesize pitch in natural speech using pitch contours taken from other speech samples, generated pitch contours, or through algorithmic manipulations of the source pitch contour.

1   Common Use Cases

What can you do with this library?

Apply the pitch or duration from one speech sample to another.

  • alignment happens both in time and in hertz

    • after the morph process, the source pitch points will be at the same absolute pitch and relative time as they are in the target file
    • time is relative to the start and stop time of the interval being considered (e.g. the pitch value at 80% of the duration of the interval). Relative time is used so that the source and target files don't have to be the same length (see the sketch after this list).
    • temporal morphing is a minor effect if the sampling frequency is high, but it can be significant when, for example, using a stylized pitch contour with few pitch samples.
  • modifications can be done between entire wav files or between corresponding intervals as specified in a textgrid or other annotation (indicating the boundaries of words, stressed vowels, etc.)

    • the larger the file, the less useful the results are likely to be without using a transcript of some sort
    • the transcripts do not have to match in lexical content, only in the number of intervals (same number of words or phones, etc.)
  • modifications can be scaled (it is possible to generate a wav file with a pitch contour that is 30% or 60% between the source and target contours).

  • the pitch range and average pitch can also be morphed independently.

  • resynthesis is performed by Praat.

  • pitch can be obtained from Praat (such as by using praatio) or from other sources (e.g. ESPS getF0)

  • plots of the resynthesis (such as the ones below) can be generated
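
A minimal sketch of the relative-time alignment described above (this is not the ProMo API; the function name and the interval values are illustrative assumptions):

    # Conceptual sketch, not the ProMo API: re-anchor target pitch points onto a
    # source interval by relative time.  Names and values are illustrative only.

    def map_by_relative_time(target_points, target_interval, source_interval):
        """Map (time, hertz) points from the target interval onto the source
        interval, preserving each point's relative position (a point at 80% of
        the target interval lands at 80% of the source interval)."""
        t_start, t_end = target_interval
        s_start, s_end = source_interval
        mapped = []
        for time, hertz in target_points:
            rel = (time - t_start) / (t_end - t_start)  # relative position, 0..1
            mapped.append((s_start + rel * (s_end - s_start), hertz))
        return mapped

    # A word spanning 0.10-0.50 s in the target but 0.20-0.45 s in the source
    target_points = [(0.14, 210.0), (0.30, 250.0), (0.46, 190.0)]
    print(map_by_relative_time(target_points, (0.10, 0.50), (0.20, 0.45)))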

2   Illustrative example

Consider the phrase "Mary rolled the barrel". In the first recording (examples/mary1.wav), "Mary rolled the barrel" was said in response to a question such as "Did John roll the barrel?". In the second recording (examples/mary2.wav), the same utterance was said in response to a question such as "What happened yesterday?".

"Mary" in "mary1.wav" is produced with more emphasis than in "mary2.wav". It is longer and carries a more drammatic pitch excursion. Using ProMo, we can make mary1.wav spoken similar to mary2.wav, even though they were spoken in a different way and by different speakers.

Duration and pitch carry meaning. Change these, and you can change the meaning being conveyed.

Note that modifying pitch and duration too much can introduce artifacts. Such artifacts can be heard even when morphing the pitch of mary1.wav to mary2.wav.

Pitch morphing (examples/pitch_morph_example.py):

The following image shows the pitch of mary1.wav morphed to that of mary2.wav on a word-by-word level in increments of 33% (33%, 66%, 100%). Note that the morph adjusts the temporal dimension of the target signal to fit the duration of the source signal (the source and generated contours are equally shorter than the target contour). This occurs at the level of the whole file unless the user specifies an equal number of segments to align in time (e.g. using word-level transcriptions, as done here, or phone-level transcriptions, etc.).

examples/files/mary1_mary2_f0_morph.png
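
As a rough, conceptual illustration of producing such a series of scaled morphs (again, not ProMo's actual code; the blend_contours helper and the contour values are assumptions), the target contour can be resampled at the source's relative time points and then blended linearly:

    # Conceptual sketch, not the ProMo API: generate intermediate contours at
    # 33%, 66%, and 100% between a source and a target pitch contour.
    import numpy as np

    def blend_contours(source, target, alpha):
        """Blend two (time, hertz) contours.  The target is first resampled at
        the source's relative time points, so the result keeps the source's
        duration; alpha=0 returns the source, alpha=1 the time-warped target."""
        s_times, s_hertz = np.array(source).T
        t_times, t_hertz = np.array(target).T
        s_rel = (s_times - s_times[0]) / (s_times[-1] - s_times[0])
        t_rel = (t_times - t_times[0]) / (t_times[-1] - t_times[0])
        t_resampled = np.interp(s_rel, t_rel, t_hertz)
        return list(zip(s_times, (1 - alpha) * s_hertz + alpha * t_resampled))

    source = [(0.0, 220.0), (0.2, 260.0), (0.4, 200.0)]
    target = [(0.0, 180.0), (0.3, 190.0), (0.6, 150.0)]
    for alpha in (0.33, 0.66, 1.0):
        print(alpha, blend_contours(source, target, alpha))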

With the ability to morph pitch range and average pitch, it becomes easier to morph contours produced by different speakers:

The following image shows four different pitch manipulations. On the upper left is the raw morph. Notice that the final output (black line) is very close to the target; the remaining differences stem from duration differences.

However, the average pitch and pitch range are qualities of speech that can signify differences in gender in addition to other aspects of speaker identity. By resetting the average pitch and pitch range to that of the source, it is possible to morph the contour while maintaining aspects of the source speaker's identity.

The image in the upper right contains a morph followed by a reset of the average pitch to the source speaker's average pitch. In the bottom left is a morph followed by a reset to the source speaker's pitch range. In the bottom right, the pitch range was reset and then the average pitch was reset.

The longer the speech sample, the more representative the pitch range and mean pitch will be of the speaker. In this example both are skewed higher by the pitch accent on the first word.

Here the average pitch of the source (a female speaker) is much higher than that of the target (a male speaker), and the raw morph sounds like it comes from a different speaker than either the source or the target. The three manipulations that involve resetting the pitch range and/or average pitch sound much more natural.

examples/files/mary1_mary2_f0_morph_compare.png
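
Conceptually, the two reset operations amount to a rescale and a shift of the morphed contour. The sketch below is not ProMo's implementation; the helper name, the example values, and the use of the standard deviation as a stand-in for pitch range are assumptions made for illustration:

    # Conceptual sketch, not the ProMo API: reset the average pitch and/or the
    # pitch range of a morphed contour to the source speaker's values.
    # A real implementation might work in semitones rather than raw hertz.
    import numpy as np

    def reset_mean_and_range(morphed_hertz, source_hertz,
                             reset_mean=True, reset_range=True):
        morphed = np.asarray(morphed_hertz, dtype=float)
        source = np.asarray(source_hertz, dtype=float)
        out = morphed.copy()
        if reset_range:
            # rescale excursions around the contour's own mean so the spread
            # (here: standard deviation) matches the source speaker's
            out = out.mean() + (out - out.mean()) * (source.std() / out.std())
        if reset_mean:
            # shift the whole contour so its mean matches the source speaker's
            out = out + (source.mean() - out.mean())
        return out

    morphed = [150.0, 170.0, 140.0, 160.0]   # contour in the target's (male) range
    source = [210.0, 260.0, 190.0, 240.0]    # source speaker's (female) contour
    print(reset_mean_and_range(morphed, source))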

Duration morphing (examples/duration_manipulation_example.py):

The following image shows the duration of mary1.wav morphed to that of mary2.wav on a word-by-word basis in increments of 33% (33%, 66%, 100%). This process can operate over an entire file or, as with pitch morphing, over annotated segments, as done in this example.

examples/files/mary1_mary2_dur_morph.png
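
Conceptually, duration morphing interpolates each interval's duration between source and target; the resulting per-word stretch ratios are what a resynthesizer such as Praat would apply. The sketch below is not ProMo's code, and the word durations are invented for illustration:

    # Conceptual sketch, not the ProMo API: interpolate word durations between a
    # source and a target recording in 33% increments.  Durations are invented.

    source_words = {"Mary": 0.42, "rolled": 0.31, "the": 0.09, "barrel": 0.48}
    target_words = {"Mary": 0.28, "rolled": 0.27, "the": 0.08, "barrel": 0.40}

    for alpha in (0.33, 0.66, 1.0):
        print(f"--- {int(alpha * 100)}% morph ---")
        for word, src_dur in source_words.items():
            new_dur = src_dur + alpha * (target_words[word] - src_dur)
            ratio = new_dur / src_dur   # time-stretch factor for this word
            print(f"{word}: {src_dur:.2f}s -> {new_dur:.2f}s (ratio {ratio:.2f})")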

3   Tutorials

Tutorials for learning about prosody manipulation and how to use ProMo are available.

Tutorial 1.1: Intro to ProMo

Tutorial 1.2: Pitch manipulation tutorial

4   Major revisions

Ver 1.3 (May 29, 2017)

  • added tutorials
  • f0Morph() can now exclude certain regions from the morph process if desired

Ver 1.2 (January 27, 2017)

  • added code for reshaping pitch accents (shift alignment, add plateau, or change height)

Ver 1.1 (February 22, 2016)

  • f0 morph code for modifying speaker pitch range and average pitch
  • (October 20, 2016) Added integration tests with travis CI and coveralls support.

Ver 1.0 (January 19, 2016)

  • first public release.

Beta (July 1, 2013)

  • first version which was utilized in my dissertation work

5   Requirements

Python 2.7.* or above

Python 3.3.* or above (earlier versions in the 3.x series will probably also work)

My praatIO library is used extensively; it can be downloaded from https://github.com/timmahrt/praatIO or installed from PyPI (python -m pip install praatio).

Matplotlib is needed if you want to plot graphs (https://matplotlib.org).

SciPy is needed if you want to use interpolation--typically if you have stylized pitch contours (in Praat PitchTier format, for example) that you want to use in your morphing (https://scipy.org).
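
For example, a hand-stylized contour with only a few PitchTier-style points might be densified with SciPy interpolation before morphing. The points and the sample count below are illustrative assumptions, not part of ProMo:

    # Conceptual sketch: densify a sparse, stylized pitch contour with SciPy
    # so that temporal alignment has enough samples to work with.
    import numpy as np
    from scipy import interpolate

    times = [0.05, 0.20, 0.45, 0.60]      # sparse, hand-stylized points (seconds)
    hertz = [180.0, 240.0, 200.0, 150.0]

    f = interpolate.interp1d(times, hertz, kind="quadratic")
    dense_times = np.linspace(times[0], times[-1], 50)   # ~50 samples
    dense_hertz = f(dense_times)
    print(list(zip(dense_times[:5], dense_hertz[:5])))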

Matplotlib and SciPy are non-trivial to install by hand, as they depend on several large packages. You can visit their websites for more information. I recommend installing them with pip, which uses Python wheels and installs all required libraries in one fell swoop.

On Mac, open a terminal and type:

python -m pip install matplotlib

python -m pip install scipy

On Windows, open a cmd or powershell window and type:

<<path to python>> -m pip install matplotlib

<<path to python>> -m pip install scipy

e.g. C:\python27\python.exe -m pip install matplotlib

Otherwise, to manually install, after downloading the source from github, from a command-line shell, navigate to the directory containing setup.py and type:

python setup.py install

If python is not in your path, you'll need to enter the full path e.g.:

C:\Python27\python.exe setup.py install

6   Usage

See /examples for usage examples

7   Installation

If you are on Windows, you can use the Windows installer (check that it is up to date, though).

ProMo is on PyPI and can be installed or upgraded from the command-line shell with pip like so:

python -m pip install promo --upgrade

Otherwise, to manually install, after downloading the source from github, from a command-line shell, navigate to the directory containing setup.py and type:

python setup.py install

If python is not in your path, you'll need to enter the full path e.g.:

C:\Python36\python.exe setup.py install

8   Citing ProMo

If you use ProMo in your research, please cite it like so:

Tim Mahrt. ProMo: The Prosody-Morphing Library. https://github.com/timmahrt/ProMo, 2016.

9   Acknowledgements

Development of ProMo was possible thanks to NSF grant BCS 12-51343 to Jennifer Cole, José I. Hualde, and Caroline Smith and to the A*MIDEX project (n° ANR-11-IDEX-0001-02) to James Sneed German funded by the Investissements d'Avenir French Government program, managed by the French National Research Agency (ANR).
