- SUTD
- Singapore
Stars
VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech); reproduced demo: https://lifeiteng.github.io/valle/index.html
Vector (and Scalar) Quantization, in PyTorch
A generative speech model for daily dialogue.
One minute of voice data can also be used to train a good TTS model! (few-shot voice cloning)
This repo contains the source code for RULER: What’s the Real Context Size of Your Long-Context Language Models?
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware.
A scalable generative AI framework built for researchers and developers working on Large Language Models, multimodal models, and Speech AI (Automatic Speech Recognition and Text-to-Speech).
DialCoT Meets PPO: Decomposing and Exploring Reasoning Paths in Smaller Language Models
Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models
[CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding
The official Python library for the OpenAI API
DeepSeek Coder: Let the Code Write Itself
Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" (ICLR 2024)
[ACL 2023] Learning Multi-step Reasoning by Solving Arithmetic Tasks. https://arxiv.org/abs/2306.01707
A series of large language models developed by Baichuan Intelligent Technology
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
Neural Networks and the Chomsky Hierarchy
Leveraging training data for few-shot prompting
We unify the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-tuning) for easy use. We welcome open-source enthusiasts…
[ACL 2023 Area Chair Award] Official repo for the paper "Tell2Design: A Dataset for Language-Guided Floor Plan Generation".
[ACL 2023] Contextual Distortion Reveals Constituency: Masked Language Models are Implicit Parsers.
Official homepage for Tab-CoT: Zero-shot Tabular Chain of Thought (Findings of ACL 2023)
One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning
Benchmarking large language models' complex reasoning ability with chain-of-thought prompting