AliceMind: ALIbaba's Collection of Encoder-decoders from MinD (Machine IntelligeNce of Damo) Lab

Overview

This repository provides pre-trained encoder-decoder models and their related optimization techniques developed by Alibaba's MinD (Machine IntelligeNce of Damo) Lab.

The family of AliceMind:

  • Language understanding model: StructBERT (ICLR 2020)
  • Generative language model: PALM (EMNLP 2020)
  • Cross-lingual language model: VECO (ACL 2021)
  • Cross-modal language model: StructVBERT (CVPR 2020 VQA Challenge Runner-up)
  • Structural language model: StructuralLM (ACL 2021)
  • Chinese language understanding model with multi-granularity inputs: LatticeBERT (NAACL 2021)
  • Pre-training table model: SDCUP (Under Review)

News

  • March, 2021: AliceMind released!
  • May, 2021: VECO and StructuralLM were accepted by ACL 2021.
  • September, 2021: The first Chinese pre-training table model SDCUP released!

Models

  • StructBERT (March 15, 2021): pre-trained models for natural language understanding (NLU). We extend BERT to a new model, StructBERT, by incorporating language structures into pre-training. Specifically, we pre-train StructBERT with two auxiliary tasks to make the most of the sequential order of words and sentences, which leverage language structures at the word and sentence levels, respectively. "StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding" (ICLR 2020)
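
    The two auxiliary objectives described above can be illustrated with a small, hypothetical sketch (not the released pre-training code): the word-level task shuffles a random trigram and asks the model to restore the original order, while the sentence-level task classifies whether a paired sentence is the next one, the previous one, or one drawn from another document.

        # Hypothetical illustration of the two auxiliary objectives;
        # not taken from the released pre-training code.
        import random

        def word_order_example(tokens, span=3, rng=random):
            """Shuffle one random trigram; the target is to restore the original order."""
            tokens = list(tokens)
            i = rng.randrange(max(1, len(tokens) - span + 1))
            original = tokens[i:i + span]
            shuffled = original[:]
            rng.shuffle(shuffled)
            tokens[i:i + span] = shuffled
            return tokens, (i, original)  # corrupted input plus position and targets

        def sentence_order_example(sentences, other_doc_sentences, rng=random):
            """Pair a sentence with its next (label 0), previous (label 1), or a
            sentence from another document (label 2); assumes len(sentences) >= 3."""
            i = rng.randrange(1, len(sentences) - 1)
            label = rng.randrange(3)
            if label == 0:
                pair = (sentences[i], sentences[i + 1])
            elif label == 1:
                pair = (sentences[i], sentences[i - 1])
            else:
                pair = (sentences[i], rng.choice(other_doc_sentences))
            return pair, label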

  • PALM (March 15, 2021): pre-trained models for natural language generation (NLG). We propose a novel scheme that jointly pre-trains an autoencoding and autoregressive language model on a large unlabeled corpus, specifically designed for generating new text conditioned on context. It achieves new SOTA results in several downstream tasks. "PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation" (EMNLP 2020)
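
    As a rough, hypothetical sketch of the context-conditioned setup (not the released PALM pipeline), an unlabeled passage can be split so that the encoder consumes the leading context and the autoregressive decoder learns to generate the continuation; the 80/20 split ratio below is an assumption for illustration.

        # Hypothetical sketch of forming a context -> continuation training pair
        # from unlabeled text; the split ratio is an assumption.
        def context_continuation_pair(tokens, context_ratio=0.8):
            """Use the leading part of a passage as encoder input and the rest
            as the target the decoder must generate."""
            cut = max(1, int(len(tokens) * context_ratio))
            return tokens[:cut], tokens[cut:]

        encoder_input, decoder_target = context_continuation_pair(
            "the model is pre-trained to continue a passage given its preceding context".split()
        )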

  • VECO v0 (March 15, 2021): pre-trained models for cross-lingual (x) natural language understanding (x-NLU) and generation (x-NLG). VECO (v0) achieves new SOTA results on various cross-lingual understanding tasks of the XTREME benchmark, covering text classification, sequence labeling, question answering, and sentence retrieval. For cross-lingual generation tasks, it also outperforms all existing cross-lingual models and state-of-the-art Transformer variants on the WMT14 English-to-German and English-to-French translation datasets, with gains of up to 1~2 BLEU. "VECO: Variable Encoder-decoder Pre-training for Cross-lingual Understanding and Generation" (ACL 2021)

  • StructVBERT (March 15, 2021): pre-trained models for vision-language understanding. We propose a new single-stream visual-linguistic pre-training scheme by leveraging multi-stage progressive pre-training and multi-task learning. StructVBERT obtained the 2020 VQA Challenge Runner-up award and a SOTA result on the VQA 2020 public Test-standard benchmark (June 2020). "Talk Slides" (CVPR 2020 VQA Challenge Runner-up).

  • StructuralLM (March 15, 2021): pre-trained models for document-image understanding. We propose a new pre-training approach, StructuralLM, to jointly leverage cell and layout information from scanned documents. The pre-trained StructuralLM achieves new state-of-the-art results in different types of downstream tasks. "StructuralLM: Structural Pre-training for Form Understanding" (ACL 2021)
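
    As a hypothetical illustration of cell-level layout modeling (the function and field names below are illustrative, not the repo's actual preprocessing API), every token can be assigned the 2D box of the cell it comes from, so all tokens in the same cell share one layout position:

        # Hypothetical sketch of cell-level layout features; not the repo's code.
        def cell_level_features(cells):
            """cells: list of (cell_tokens, cell_box), with cell_box = (x0, y0, x1, y1)."""
            tokens, boxes, cell_ids = [], [], []
            for cell_id, (cell_tokens, cell_box) in enumerate(cells):
                for tok in cell_tokens:
                    tokens.append(tok)
                    boxes.append(cell_box)  # tokens in one cell share the cell's box
                    cell_ids.append(cell_id)
            return tokens, boxes, cell_ids

        tokens, boxes, cell_ids = cell_level_features(
            [(["Invoice", "No."], (10, 10, 120, 30)), (["12345"], (130, 10, 200, 30))]
        )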

  • LatticeBERT (March 15, 2021): we propose Lattice-BERT, a novel pre-training paradigm for Chinese that explicitly incorporates word representations alongside character representations and can thus model a sentence in a multi-granularity manner. "Lattice-BERT: Leveraging Multi-Granularity Representations in Chinese Pre-trained Language Models" (NAACL 2021)
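
    A hypothetical sketch of the multi-granularity input (the toy lexicon and function below are illustrative, not the repo's API): characters form the base sequence, and lexicon words matching a character span are added as extra lattice nodes that carry their start and end character positions.

        # Hypothetical sketch of building a word-character lattice input.
        def build_lattice(chars, lexicon):
            """Return lattice nodes as (token, start, end) over character positions."""
            nodes = [(ch, i, i) for i, ch in enumerate(chars)]  # character-level nodes
            for start in range(len(chars)):
                for end in range(start + 1, len(chars)):
                    word = "".join(chars[start:end + 1])
                    if word in lexicon:
                        nodes.append((word, start, end))        # word-level nodes
            return nodes

        lattice = build_lattice(list("研究生命的起源"), {"研究", "研究生", "生命", "起源"})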

  • SDCUP (September 6, 2021): pre-trained models for table understanding. We design a schema dependency pre-training objective to impose the desired inductive bias into the learned representations for table pre-training. We further propose a schema-aware curriculum learning approach to alleviate the impact of noise and learn effectively from the pre-training data in an easy-to-hard manner. The experimental results on SQUALL and Spider demonstrate the effectiveness of our pre-training objective and curriculum in comparison to a variety of baselines. "SDCUP: Schema Dependency Enhanced Curriculum Pre-Training for Table Semantic Parsing" (Under Review)

Contact Information

AliceMind Official Website: https://nlp.aliyun.com/portal#/alice

AliceMind Open Platform: https://alicemind.aliyuncs.com

Please submit a GitHub issue if you want help or have issues using ALICE.

For more information, you can contact us by joining the AliceMind Users Group on DingTalk (group number 35738533).

For other business communications, please contact [email protected]

License

AliceMind is released under the Apache 2.0 license.

Copyright 1999-2020 Alibaba Group Holding Ltd.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at the following link.

     http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Comments
  • fail to load structbert.en.large while trying to reproduce the result of GLUE

    Hi, I downloaded structbert.en.large through the given link (https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/en_model), but the error below occurred while running it.

    RuntimeError: Error(s) in loading state_dict for BertForSequenceClassificationMultiTask: Missing key(s) in state_dict: "classifier.0.weight", "classifier.0.bias". Unexpected key(s) in state_dict: "lm_bias", "linear.weight", "linear.bias", "LayerNorm.gamma", "LayerNorm.beta", "classifier.weight", "classifier.bias".

    Do you have any idea why this happens? Thank you very much.

    opened by LemX1 10
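
    A generic way to inspect such mismatches (a sketch only, not necessarily the fix the maintainers intend) is to rename old-style LayerNorm keys and load the checkpoint non-strictly, so missing and unexpected keys are reported rather than fatal:

        # Generic sketch for inspecting state_dict mismatches; not the repo's
        # official fix. `model` is an already-constructed fine-tuning model.
        import torch

        def load_pretrained_nonstrict(model, checkpoint_path):
            state_dict = torch.load(checkpoint_path, map_location="cpu")
            renamed = {}
            for key, value in state_dict.items():
                key = key.replace("LayerNorm.gamma", "LayerNorm.weight")
                key = key.replace("LayerNorm.beta", "LayerNorm.bias")
                renamed[key] = value
            # strict=False reports missing/unexpected keys (e.g. a freshly
            # initialized classifier head or pre-training-only weights)
            # instead of raising an error.
            missing, unexpected = model.load_state_dict(renamed, strict=False)
            print("missing keys:", missing)
            print("unexpected keys:", unexpected)
            return model
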
  • Will you share pre-training code of StructBERT

    Hi, I'm trying to code StructBERT from scratch, but I couldn't find any code examples for pre-training StructBERT. In the repository I've only found code for fine-tuning on various datasets.

    Are you planning to share pre-training code for StructBERT, such as BertForPreTraining in the Transformers library?

    Thanks in advance 🙂

    opened by kaansonmezoz 5
  • What’s the CLEVER?

    I found StructBERT + CLEVER on the GLUE benchmark. Is that a pre-training or fine-tuning technique? Can you provide more information about CLEVER? Thanks a lot.

    [screenshot]

    opened by Akeepers 2
  • Hyper parameters for ChildTuning

    Thanks a lot for all the details you provide in Appendix B for reproducibility! However, I still encounter some difficulties in reproducing the experiment. I noticed that you apply grid search. Could you please provide the specific hyperparameter values and learning rate for each task?

    opened by jiangllan 2
  • SOFA-models

    In (https://github.com/alibaba/AliceMind/tree/main/SOFA/sofa/models)/__init__.py, why are there only three models, and why is there no init for roberta? Likewise, README.md has no results for roberta, so does that mean roberta cannot be used yet? And if I want to add another model (such as GPT-2), how should I do it?

    opened by Wangxh-07 1
  • How to reproduce the result of StructBERT on STS-B?

    Hi, I cannot reproduce the result reported in the paper with the following code example:

    python run_classifier_multi_task.py \
      --task_name STS-B \
      --do_train \
      --do_eval \
      --do_test \
      --lr_decay_factor 1 \
      --dropout 0.1 \
      --do_lower_case \
      --detach_index -1 \
      --core_encoder bert \
      --data_dir data \
      --vocab_file config/vocab.txt \
      --bert_config_file config/large_bert_config.json \
      --init_checkpoint model/en_model \
      --max_seq_length 128 \
      --train_batch_size 32 \
      --learning_rate 2e-5 \
      --num_train_epochs 3 \
      --fast_train \
      --gradient_accumulation_steps 1 \
      --output_dir output \
      --amp_type O1
    

    Did I set any hyperparameters incorrectly?

    opened by sangyx 1
  • fail to download pretrained model and data

    Hi, Author

    Thanks for your great contribution.

    But I can't download the pretrained model and data whose links start with http://119608.oss-cn-hangzhou-zmf.aliyuncs.com

    For example, the following links cannot be downloaded: http://119608.oss-cn-hangzhou-zmf.aliyuncs.com/structvbert/pretrained_model.tar.gz http://119608.oss-cn-hangzhou-zmf.aliyuncs.com/structvbert/data.tar.gz

    Any advice is appreciated! Thanks.

    opened by cssddnnc9527 1
  • Will you consider pushing your work to the Hugging Face model hub?

    It's a bit of a struggle to use models like StructBERT.

    There are some minor code modifications compared with huggingface's BERT.

    So I wouldn't say it's safe to directly use huggingface's from_pretrained API on your released model checkpoints, while it is also inconvenient to use your modeling code, where BertModel does not inherit from huggingface's PreTrainedModel.

    Any advice?

    opened by tangzhy 1
  • structVbert model

    Hi, the link for downloading the structvbert.en.base model does not work (http://119608.oss-cn-hangzhou-zmf.aliyuncs.com/structvbert/pretrained_model.tar.gz). Could you please fix it?

    Thank you

    opened by iukorolev 1
  • Experimental configuration of Child-Tuning

    Hi, I want to reproduce the Child-Tuning experiments. I saw "We report the averaged results over 10 random seeds" in Section 3.2 of the paper; could you share the seed list? Thank you, looking forward to your reply.

    opened by chaochen99 1
  • Mismatch between the given base checkpoint and the description in the original paper

    In the original paper, PALM-base has 6 encoder layers and 6 decoder layers. However, when I run the given code with the given base checkpoint, it prints 12 encoder layers and 12 decoder layers. Am I wrong?

    opened by 311dada 1
  • Question about visual grounding fine-tuning

    Hello, when fine-tuning the model I found _IncompatibleKeys (mainly all layers of fusion_encoder). Does this affect training? I trained following the documented procedure; training stopped automatically at epoch=10, and the results of the tenth epoch are as follows: {'train_lr1': 1.9676295349747303e-05, 'train_lr2': 4.931851652578033e-06, 'train_loss_seq': 0.08181070101696078, 'miou': 0.6400532953981678, 'accu': 0.7921358685619346, 'epoch': 10}

    This accu value differs from the one reported in the paper. Are the two numbers supposed to match?

    opened by xzdong-2019 0
  • Hello! I don't quite understand some of the code in mPLUG and hope to get your help.

    [screenshot] This is the code for image-text retrieval. I'd like to ask what self.distill means; is it doing distillation? Also, self._momentum_update() and self._dequeue_and_enqueue(image_feat_m, text_feat_m, idx) look like momentum contrastive learning code, and I don't understand what they mean either. If possible, please tell me what this code does. Many thanks!

    opened by luzhuflower 0
  • Question about the Chinese version of mPLUG

    Hi, I notice that there is a Chinese version of mPLUG in ModelScope. Could you please tell me some details about this model, such as the pre-training dataset? Many thanks.

    opened by ZihaoZheng98 1
  • Pretrained weights for downstream tasks for mPLUG?

    Currently, only the pretrained weights before fine-tuning on downstream tasks are released for mPLUG. Is it possible to release the weights after fine-tuning on downstream tasks such as visual question answering and image captioning?

    Thanks!

    opened by qiaomu-miao 1
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing the input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via a pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.

    If you have further questions you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 1
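
    The mitigation described in this report boils down to validating each member's resolved path before extraction; here is a minimal sketch of that check (an illustration only, not the exact patch submitted in the pull request):

        # Minimal sketch of path-traversal-safe tar extraction (CVE-2007-4559);
        # an illustration, not the exact patch from the pull request.
        import os
        import tarfile

        def safe_extract_all(tar_path, dest="."):
            dest = os.path.realpath(dest)
            with tarfile.open(tar_path) as tar:
                for member in tar.getmembers():
                    target = os.path.realpath(os.path.join(dest, member.name))
                    # Reject members that would resolve outside the destination.
                    if target != dest and not target.startswith(dest + os.sep):
                        raise RuntimeError(f"Blocked path traversal in tar member: {member.name}")
                tar.extractall(dest)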