Stars
Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs.
A repository for research on medium-sized language models.
QLoRA: Efficient Finetuning of Quantized LLMs
NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day
Code for the arXiv paper: "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond"
Code for "Threat Scenarios and Best Practices for Neural Fake News Detection: A Case Study on COVID"
This repo contains the code for our paper EvEntS ReaLM, to appear in EMNLP 2022.
Official repository for the paper "Question Answering Infused Pre-training of General-Purpose Contextualized Representations" by Robin Jia, Mike Lewis, and Luke Zettlemoyer.
OmniXAI: A Library for eXplainable AI
https://huyenchip.com/ml-interviews-book/
A repository of useful terminal commands for daily tasks, to reduce Stack Overflow searches.
Scripts to download files and folders programmatically from Google Drive
Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
Morphological Inflection for Low-Resource Languages using cross-lingual transfer
Data and code for "A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization" (ACL 2020)
Google Research
Significance test of increase in correlation for NLP evaluations in python
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Pretrain and finetune ANY AI model of ANY size on multiple GPUs or TPUs with zero code changes.
Hummingbird compiles trained ML models into tensor computation for faster inference.
gbegus / fiwGAN-ciwGAN
Forked from chrisdonahue/wavegan
fiwGAN/ciwGAN (Featural and Categorical InfoWaveGAN): Generative Adversarial Phonology and Semantics
Python port of Moses tokenizer, truecaser and normalizer