LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath
A collection of open-source datasets for training instruction-following LLMs (ChatGPT, LLaMA, Alpaca)
Papers and Datasets on Instruction Tuning and Following. ✨✨✨
🤯 Lobe Chat - an open-source, modern-design LLMs/AI chat framework. Supports Multi AI Providers( OpenAI / Claude 3 / Gemini / Ollama / Bedrock / Azure / Mistral / Perplexity ), Multi-Modals (Vision…
Code for "FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models (ACL 2024)"
Qwen2 is the large language model series developed by Qwen team, Alibaba Cloud.
The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
A summary of existing representative LLM text datasets.
SuperCLUE: A comprehensive benchmark for Chinese general-purpose large models | A Benchmark for Foundation Models in Chinese
A collection of AWESOME things about mixture-of-experts
A family of open-sourced Mixture-of-Experts (MoE) Large Language Models
GPT4V-level open-source multi-modal model based on Llama3-8B
All the resources you need to get to Senior Engineer and beyond
List of books, blogs, newsletters and people!
⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training
📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM).
✨✨Latest Advances on Multimodal Large Language Models
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. A commercially usable open-source multimodal dialogue model with performance approaching GPT-4o.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Collection of AWESOME vision-language models for vision tasks
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
(AAAI 2023 oral) Original implementation and experiment results of T2G-FORMER
[ICLR 2024 spotlight] Making Pre-trained Language Models Great on Tabular Prediction
(EMNLP 2023 Findings) Text2Tree: Aligning Text Representation to the Label Tree Hierarchy for Imbalanced Medical Classification.
Official release of InternLM2.5 7B base and chat models. 1M context support
A curated collection of ChatGPT prompts for using ChatGPT more effectively.