Stars
Must-read papers on prompt-based tuning for pre-trained language models.
[ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models https://arxiv.org/abs/2012.15723
This repository contains the code for "Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference"
Language Understanding Evaluation benchmark for Chinese: datasets, baselines, pre-trained models, corpus, and leaderboard
A simple experiment applying Pattern-Exploiting Training to Chinese
Virtual Adversarial Training (VAT) implementation for PyTorch
Adversarial Training for Natural Language Understanding
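Several entries above and below concern adversarial training. As a rough illustration of the core idea (not the method of any specific repo listed here), this minimal NumPy sketch applies an FGSM-style perturbation to the inputs of a binary logistic-regression model and trains on clean plus perturbed examples; all function names are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=0.1):
    """FGSM-style adversarial perturbation for binary logistic regression.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (p - y) * w, so stepping along the sign of that gradient moves each
    example in the direction that most increases the loss.
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def adv_train_step(x, y, w, b, lr=0.1, eps=0.1):
    """One gradient step on the union of clean and adversarial inputs."""
    x_adv = fgsm_perturb(x, y, w, b, eps)
    x_all = np.vstack([x, x_adv])
    y_all = np.concatenate([y, y])
    p = sigmoid(x_all @ w + b)
    grad_w = x_all.T @ (p - y_all) / len(y_all)
    grad_b = np.mean(p - y_all)
    return w - lr * grad_w, b - lr * grad_b
```

In NLP the perturbation is usually applied to word embeddings rather than raw inputs, since token ids are discrete.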
A deep learning NLP framework built on TensorFlow and modeled after the Scikit-Learn API. Supports 40+ model classes covering language modeling, text classification, NER, machine reading comprehension, knowledge distillation, and more
DIAC2019 question-equivalence judgment competition, based on adversarial attacks
Exploring mixup strategies for text classification
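Mixup trains on convex combinations of paired examples and their one-hot labels. Below is a minimal NumPy sketch of one mixup batch, not the implementation of any repo above; `mixup_batch` is a hypothetical name, and for text the inputs `x` would typically be embeddings, since raw token ids cannot be interpolated:

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    """Mixup: convex-combine each example with a randomly paired one.

    lam ~ Beta(alpha, alpha) controls the interpolation; the same lam
    is applied to both the inputs and the one-hot labels.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix, lam
```

MixText (listed below) follows the same principle but interpolates hidden states inside a Transformer rather than the inputs.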
TensorFlow implementation of "On the Sentence Embeddings from Pre-trained Language Models" (EMNLP 2020)
Code for our ACL 2021 paper - ConSERT: A Contrastive Framework for Self-Supervised Sentence Representation Transfer
Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
A library for efficient similarity search and clustering of dense vectors.
MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification
Public repository for the paper "Learning Sound Event Classifiers from Web Audio with Noisy Labels"
Pytorch implementation of the methods proposed in **Adversarial Training Methods for Semi-Supervised Text Classification** on IMDB dataset
Implementation of the methods proposed in **Adversarial Training Methods for Semi-Supervised Text Classification** on IMDB dataset (without pre-training)
Models and examples built with TensorFlow
A curated list of resources for Learning with Noisy Labels
A Tensorflow (Keras) implementation of Peer loss functions for classification with noisy labels.
Bootstrapping loss function implementation in pytorch
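The last few entries deal with learning under noisy labels. As one concrete example, the soft bootstrapping loss of Reed et al. (2014) replaces the (possibly noisy) target with a convex mix of the label and the model's own prediction. A minimal NumPy sketch (the listed repo is PyTorch; the function name here is hypothetical):

```python
import numpy as np

def soft_bootstrap_loss(probs, targets_onehot, beta=0.95):
    """Soft bootstrapping loss (Reed et al., 2014), NumPy sketch.

    The effective target is beta * y + (1 - beta) * p, where p is the
    model's predicted distribution; beta = 1 recovers plain
    cross-entropy against the given labels.
    """
    mixed = beta * targets_onehot + (1.0 - beta) * probs
    return float(-np.mean(np.sum(mixed * np.log(probs + 1e-12), axis=1)))
```

With noisy labels, down-weighting the given target toward the model's prediction limits how strongly a wrong label can pull the model.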