Stars
Official PyTorch Implementation of Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think
Official inference repo for FLUX.1 models
[NeurIPS'23] Emergent Correspondence from Image Diffusion
MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation
Open-Sora: Democratizing Efficient Video Production for All
Official implementation of Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs (ICLR 2024).
PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
[CVPR2024] StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On
A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.