Overview

talk-preview-img-builder

A tool that helps build a talk preview image by combining a given background image with a talk event description.

Installation and Usage

Install Dependencies

To run the app, install its dependencies (including the development dependencies, via the -d/--dev flag) with the following command:

pipenv install -d

Run the Application

Before running the application, prepare the materials for building the talk preview images/slides. Two materials are required:

  • A background image named background.png, located in the materials/img folder.

  • A talk event description named speeches.json, located in the materials/ folder (a hypothetical example is sketched below the list).
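
For reference, the snippet below is a minimal, hypothetical sketch of how a speeches.json could be prepared for local testing. The exact schema is not documented here, so the field names (title, category, speaker, abstract) are assumptions; adapt them to whatever the builder actually expects.

import json
from pathlib import Path

# Hypothetical minimal speeches.json entry; field names are assumptions.
sample_speeches = [
    {
        "title": "Building Talk Preview Images with Python",
        "category": "Application",
        "speaker": {"name": "Jane Doe"},
        "abstract": "How preview images are generated for accepted talks.",
    }
]

# Write the sample file where the builder expects to find it.
Path("materials").mkdir(exist_ok=True)
Path("materials/speeches.json").write_text(
    json.dumps(sample_speeches, ensure_ascii=False, indent=2),
    encoding="utf-8",
)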

After preparing the materials, run the application with one of the following commands:

pipenv run build_talk_preview_img   # build the talk preview images

or

pipenv run build_talk_preview_ppt  # build the talk preview slides

The generated talk preview images and slides are located in the export/ folder.

Configuring the Application

There are several options for configuring the application; the default values are defined in config.py. You can override the defaults by editing config.py or by adding a .env file that sets these variables before running the app (see the example after the table below).

Variable | Description | Default Value (Image / Slides) | Type (Image / Slides)
BACKGROUND_IMG_PATH | The path to the background image | materials/img/background.png | String
SPEECHES_PATH | The path to the speech description | materials/speeches.json | String
PREVIEW_IMG_WIDTH | The width of the generated preview image | 700px / 30cm | Integer / Float
PREVIEW_IMG_HEIGHT | The height of the generated preview image | 700px / 30cm | Integer / Float
PREVIEW_IMG_TITLE_UPPER_LEFT_X | The left (X) position of the title's upper-left corner in the generated preview image | 110px / 0.95cm | Integer / Float
PREVIEW_IMG_TITLE_UPPER_LEFT_Y | The top (Y) position of the title's upper-left corner in the generated preview image | 110px / 1.04cm | Integer / Float
PREVIEW_IMG_CONTENT_UPPER_LEFT_X | The left (X) position of the content's upper-left corner in the generated preview image | 85px / 1.38cm | Integer / Float
PREVIEW_IMG_CONTENT_UPPER_LEFT_Y | The top (Y) position of the content's upper-left corner in the generated preview image | 200px / 3.8cm | Integer / Float
PREVIEW_IMG_FOOTER_UPPER_LEFT_X | The left (X) position of the footer's upper-left corner in the generated preview image | 100px / 1.6cm | Integer / Float
PREVIEW_IMG_FOOTER_UPPER_LEFT_Y | The top (Y) position of the footer's upper-left corner in the generated preview image | 650px / 12.2cm | Integer / Float
PREVIEW_IMG_SPEAKER_UPPER_RIGHT_X | The right (X) position of the speaker name's upper-right corner in the generated preview image | 600px / 13.5cm | Integer / Float
PREVIEW_IMG_SPEAKER_UPPER_RIGHT_Y | The top (Y) position of the speaker name's upper-right corner in the generated preview image | 570px / 10cm | Integer / Float
TITLE_HEIGHT | The height of the title | 70px / 1.84cm | Integer / Float
CONTENT_HEIGHT | The height of the content | 90px / 7.5cm | Integer / Float
PREVIEW_TEXT_COLOR | The color of the text used in the preview image | #080A42 | String
PREVIEW_HIGHTLIGHT_TEXT_COLOR | The highlight color of the text used in the preview image | #EBCC73 | String
PREVIEW_TEXT_FONT | The font used in the preview image | "PingFang.ttc" / "Taipei Sans TC Beta" | String
PREVIEW_TEXT_BOLD_FONT | The bold font used in the preview image | "PingFang.ttc" / "Taipei Sans TC Beta" | String
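
For example, a .env file in the project root might look like the following. The values shown are illustrative only; any variable from the table above can be set the same way.

PREVIEW_IMG_WIDTH=800
PREVIEW_IMG_HEIGHT=800
PREVIEW_TEXT_COLOR=#112358
PREVIEW_TEXT_FONT=PingFang.ttc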

Coding Style

The application follows the PEP 8 coding style. You can check it with the following command:

pipenv run lint

and reformat the code, which leverages black and isort, with the following command:

pipenv run reformat

TODO

  • Automatically generate the talk preview metadata file (e.g. speeches.json) from the PyConTW API server.
  • Implement text wrapping with mixed-language support in the title and content of the talk preview image.
  • Implement dynamic font size adjustment in the title and content of the talk preview image based on the length of the text (a possible approach is sketched after this list).
  • Implement a CI workflow using GitHub Actions.
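
For the dynamic font size item, the following is a minimal sketch of one possible approach using Pillow, assuming the previews are drawn with PIL.ImageDraw (an assumption about the implementation). The coordinates, width limit, and font path are illustrative.

from PIL import Image, ImageDraw, ImageFont

def fit_font(draw, text, font_path, max_width, start_size=48, min_size=16):
    # Shrink the font size until the rendered text fits within max_width.
    size = start_size
    while size > min_size:
        font = ImageFont.truetype(font_path, size)
        if draw.textlength(text, font=font) <= max_width:
            return font
        size -= 2
    return ImageFont.truetype(font_path, min_size)

img = Image.open("materials/img/background.png")
draw = ImageDraw.Draw(img)
title = "A Fairly Long Talk Title That Needs a Smaller Font"
font = fit_font(draw, title, "PingFang.ttc", max_width=480)
draw.text((110, 110), title, font=font, fill="#080A42")
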
Owner

PyCon Taiwan