- The Hong Kong University of Science and Technology
- Hong Kong SAR, China
- (UTC +08:00)
- https://yjiangcm.github.io/
- @Yuxin_Jiang_
- https://scholar.google.com/citations?user=QnfcEEcAAAAJ
Stars (sorted by most recently starred)
All available datasets for Instruction Tuning of Large Language Models
Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform
Must-read papers, related blogs and API tools on the pre-training and tuning methods for ChatGPT.
LLaMA: Open and Efficient Foundation Language Models
Code for the paper Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation (CVPR 2023).
mavonEditor - A markdown editor based on Vue that supports a variety of personalized features
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference,…
ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
Code for "Lion: Adversarial Distillation of Proprietary Large Language Models (EMNLP 2023)"
A Better HKUST LaTeX Thesis Template
Open Academic Research on Improving LLaMA to SOTA LLM
LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions
LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath
The official gpt4free repository | a collection of powerful language models
Use ChatGPT to summarize arXiv papers. Accelerates the entire research workflow with ChatGPT: full-paper summarization, professional translation, polishing, peer review, and review responses.
An open-source tool-augmented conversational language model from Fudan University
A unified interface for instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts…
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
Aligning pretrained language models with instruction data generated by themselves.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
GPT4All: Chat with Local LLMs on Any Device
Instruction Tuning with GPT-4
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
PyTorch implementation of SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
Instruct-tune LLaMA on debate data; Light-weight DebateGPT
骆驼 (Luotuo): Open-Sourced Chinese Language Models. Developed by 陈启源 (Central China Normal University), 李鲁鲁 (SenseTime), and 冷子昂 (SenseTime)
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.