- Chengdu
- https://zackschen.github.io/
Stars
The official implementation of the CVPR'2024 work Interference-Free Low-Rank Adaptation for Continual Learning
A cross-platform ChatGPT/Gemini UI (Web / PWA / Linux / Win / MacOS). Deploy your own cross-platform ChatGPT/Gemini app with one click.
CoSCL: Cooperation of Small Continual Learners is Stronger than a Big One (ECCV 2022)
A brain-inspired version of generative replay for continual learning with deep neural networks (e.g., class-incremental learning on CIFAR-100; PyTorch code).
A curated list of Continual Learning papers and BibTeX entries
The official code implementation of "Achieving a Better Stability-Plasticity Trade-off via Auxiliary Networks in Continual Learning", published at the Conference on Computer Vision and Pattern Recognition (CVPR) 2023
Official PyTorch implementation of our ICCV 2023 paper "When Prompt-based Incremental Learning Does Not Meet Strong Pretraining"
[ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models
Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas"
robosuite: A Modular Simulation Framework and Benchmark for Robot Learning
Repository for the paper: "TiC-CLIP: Continual Training of CLIP Models".
✌[ICLR 2024] Class Incremental Learning via Likelihood Ratio Based Task Prediction
Benchmarking Knowledge Transfer in Lifelong Robot Learning
Continual Learning of Large Language Models: A Comprehensive Survey
[GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction". An *ultra-simple, user-friendly …
Hunyuan-DiT : A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding
A high-throughput and memory-efficient inference and serving engine for LLMs
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
Qwen2 is the large language model series developed by Qwen team, Alibaba Cloud.
CVPR2022: Meta-attention for ViT-backed Continual Learning
A Framework of Small-scale Large Multimodal Models
ChatGLM-6B: An Open Bilingual Dialogue Language Model
[CVPR2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompting for Multimodal Large Language Models" has been accepted …
PyTorch code for the CVPR'23 paper: "CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning"