Stars
A collection of industry-practice articles on search, recommendation, advertising, user growth, and related areas (sources: Zhihu, DataFunTalk, and tech WeChat public accounts)
[AAAI 2023] Exploring CLIP for Assessing the Look and Feel of Images
Something like visual-chatgpt; an open-source version of Wenxin Yiyan (文心一言)
Convolutional Neural Networks to predict the aesthetic and technical quality of images.
A comprehensive collection of IQA (image quality assessment) papers
Low Quality Image Detection using Machine Learning
A framework for few-shot evaluation of language models.
Real-time face swap for PC streaming or video calls
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
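As a quick taste of the library's high-level API, here is a minimal text-generation sketch (the checkpoint name is just an example, not something this list prescribes):

```python
# Minimal sketch of the 🤗 Transformers pipeline API.
# "gpt2" is only an example checkpoint; any causal-LM checkpoint works.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
print(generator("Hello, world", max_new_tokens=20)[0]["generated_text"])
```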
A Gradio web UI for Large Language Models.
The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud.
The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Qwen2.5 is the large language model series developed by the Qwen team at Alibaba Cloud.
AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation
Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight
Training LLMs with QLoRA + FSDP
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently.
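A hedged sketch of what that Python API looks like, assuming the high-level `LLM` entry point found in recent releases (class and argument names are assumptions; check the installed version's docs before relying on them):

```python
# Sketch of TensorRT-LLM's high-level Python API (assumed names: LLM,
# SamplingParams); verify against the installed release before use.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")   # builds/loads a TensorRT engine
params = SamplingParams(max_tokens=64, temperature=0.8)
for output in llm.generate(["What does TensorRT-LLM do?"], params):
    print(output.outputs[0].text)
```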
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
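A minimal sketch of attaching a LoRA adapter with PEFT; the target module name is specific to GPT-2 and is an assumption here, not part of the original description:

```python
# Wrap a pretrained model with a LoRA adapter using 🤗 PEFT.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # example checkpoint
config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only adapter weights are trainable
```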
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
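The core idea of that paper: freeze the pretrained weight W and learn a low-rank update ΔW = BA scaled by α/r. A from-scratch PyTorch sketch of the technique (illustrative only, not loralib's actual classes):

```python
# Minimal LoRA linear layer: y = base(x) + (alpha/r) * x Aᵀ Bᵀ,
# with the base weight frozen and only A, B trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)               # freeze pretrained weight
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # B=0 => no-op at init
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```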
Use PEFT or full-parameter training to finetune 350+ LLMs or 100+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, ...)
Official implementation of "Neuralangelo: High-Fidelity Neural Surface Reconstruction" (CVPR 2023)
Firefly: a training toolkit for large models, supporting training of Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models
Bark Voice Cloning and Voice Cloning for Chinese Speech
Code for the paper "Evaluating Large Language Models Trained on Code"
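That paper's headline metric is the unbiased pass@k estimator: with n samples per problem of which c pass, pass@k = 1 − C(n−c, k)/C(n, k). A small sketch of that computation, written to match the formula (the function name here is just illustrative):

```python
# Unbiased pass@k from "Evaluating Large Language Models Trained on Code":
# pass@k = 1 - C(n-c, k) / C(n, k), computed stably as a running product.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """n = total samples, c = correct samples, k = attempt budget."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=200, c=10, k=1))  # 0.05, i.e. the empirical pass rate at k=1
```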