A collection of interview questions and answers for AIGC, CV, and LLMs interviews, also gathering new ideas, problems, resources, and projects encountered in work and research.
A one-stop data processing system to make data higher-quality, juicier, and more digestible for (multimodal) LLMs! 🍎 🍋 🌽 ➡️ ➡️ 🍸 🍹 🍷
Qwen2-VL is the multimodal large language model series developed by Qwen team, Alibaba Cloud.
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath
A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca)
Papers and Datasets on Instruction Tuning and Following. ✨✨✨
🤯 Lobe Chat - an open-source, modern-design AI chat framework. Supports multiple AI providers (OpenAI / Claude 3 / Gemini / Ollama / Azure / DeepSeek), Knowledge Base (file upload / knowledge manageme…
Code for "FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models (ACL 2024)"
Qwen2.5 is the large language model series developed by Qwen team, Alibaba Cloud.
The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Summarize existing representative LLMs text datasets.
SuperCLUE: A Comprehensive Benchmark for Chinese General-Purpose Foundation Models
A collection of AWESOME things about mixture-of-experts
A family of open-source Mixture-of-Experts (MoE) Large Language Models
GPT4V-level open-source multi-modal model based on Llama3-8B
All the resources you need to get to Senior Engineer and beyond
List of books, blogs, newsletters and people!
⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)
📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM).
✨✨Latest Advances on Multimodal Large Language Models
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Collection of AWESOME vision-language models for vision tasks
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
(AAAI 2023 oral) Original implementation and experiment results of T2G-FORMER
[ICLR 2024 spotlight] Making Pre-trained Language Models Great on Tabular Prediction
(EMNLP 2023 Findings) Text2Tree: Aligning Text Representation to the Label Tree Hierarchy for Imbalanced Medical Classification.