Stars
A PyTorch reimplementation of Influence Functions from the ICML 2017 best paper "Understanding Black-box Predictions via Influence Functions" by Pang Wei Koh and Percy Liang.
Code release for "LogME: Practical Assessment of Pre-trained Models for Transfer Learning" (ICML 2021) and "Ranking and Tuning Pre-trained Models: A New Paradigm for Exploiting Model Hubs" (JMLR 2022)
"A Practical Guide to Open-Source Large Models": quickly deploy open-source LLMs in a Linux environment; a deployment tutorial tailored for users in China
Builds a conversational dataset for LLM training from Bilibili comment-section data
StableSwarmUI: a modular Stable Diffusion web UI, with an emphasis on making power tools easily accessible, high performance, and extensibility.
Z-Bench 1.0 by Zhenfund (真格基金): a Chinese-language LLM prompt test set for non-technical users, developed by an enthusiastic AI-focused team at Zhenfund.
Open-Sora: Democratizing Efficient Video Production for All
OpenDiT: An Easy, Fast and Memory-Efficient System for DiT Training and Inference
Official code for Goldfish model for long video understanding and MiniGPT4-video for short video understanding
🔊 Text-Prompted Generative Audio Model
Assistant tools for attention visualization in deep learning
One minute of voice data is enough to train a good TTS model! (few-shot voice cloning)
Code for "MixMatch: A Holistic Approach to Semi-Supervised Learning"
PseudoLabel 2013, VAT, PI model, Tempens, MeanTeacher, ICT, MixMatch, FixMatch
[CVPR 2024] MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
The official implementation of the paper "ActionCLIP: A New Paradigm for Action Recognition"
An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval"
ChatGLM3 series: open bilingual (Chinese-English) chat LLMs
A series of large language models developed by Baichuan Intelligent Technology
Playing Pokemon Red with Reinforcement Learning
LAVIS - A One-stop Library for Language-Vision Intelligence
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
MMICL: a state-of-the-art VLM from PKU with multi-modal in-context learning ability
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
The official repo of Qwen-VL (通义千问-VL), the chat and pretrained large vision-language model proposed by Alibaba Cloud.
Fine-tune gpt-3.5-turbo on your own data in a few clicks