Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
[CVPR'24] DNGaussian: Optimizing Sparse-View 3D Gaussian Radiance Fields with Global-Local Depth Normalization
Use PEFT or Full-parameter to finetune 300+ LLMs or 60+ MLLMs. (Qwen2, GLM4v, Internlm2.5, Yi, Llama3.1, Llava-Video, Internvl2, MiniCPM-V-2.6, Deepseek, Baichuan2, Gemma2, Phi3-Vision, ...)
Happy experimenting with MLLM and LLM models!
An open-source implementation for training LLaVA-NeXT.
Official PyTorch implementation of CODA-LM (https://arxiv.org/abs/2404.10595)
📋 Collection of evaluation code for natural language generation.
LLaVA-NeXT-Image-Llama3-Lora, modified from https://github.com/arielnlee/LLaVA-1.6-ft
Collection of AWESOME vision-language models for vision tasks
Famous Vision Language Models and Their Architectures
A latent text-to-image diffusion model
PyTorch tutorials, examples, and books — a curated collection of up-to-date PyTorch tutorials, examples, and books (updated irregularly)
Materials for the Learn PyTorch for Deep Learning: Zero to Mastery course.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o — an open-source multimodal dialogue model approaching GPT-4o performance
An Open-source Toolkit for LLM Development
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
Official implementation for "Multimodal Chain-of-Thought Reasoning in Language Models" (stay tuned and more will be updated)
Official implementation for "Automatic Chain of Thought Prompting in Large Language Models" (stay tuned & more will be updated)
Official Implementation of "Graph of Thoughts: Solving Elaborate Problems with Large Language Models"
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.