
Hi πŸ‘‹, I'm Lu Zeyu


  • πŸ”­ I’m currently working on AIGC and training some interesting transformers.

  • 🌠 Currently, the largest model I have trained from scratch is 3B parameters, using 128 A100 GPUs. I'm looking forward to the opportunity to use more GPUs to train larger models in the future!

  • πŸ“« How to reach me: [email protected]

  • ⚡ I like ACG (animation, comics, and games). I want to help artists, painters, and designers with deep learning. If you have any interesting ideas, please contact me.

πŸ“Š Last week I spent my time on

Other   1 hr 57 mins    β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ   100.00 %

Pinned

  1. FiT (Public)

     [ICML 2024 Spotlight] FiT: Flexible Vision Transformer for Diffusion Model

  2. Inf-imagine/Sentry (Public)

     [NeurIPS 2023] Sentry-Image: Detect Any AI-generated Images

  3. Hierarchical-Diffusion-Autoencoders (Public)

     [WACV 2023] Hierarchical Diffusion Autoencoders and Disentangled Image Manipulation

  4. pi-Tuning (Public, forked from TencentARC/pi-Tuning)

     [ICML 2023] pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation

  5. LLaMA-Pro (Public, forked from TencentARC/LLaMA-Pro)

     [ACL 2024] Progressive LLaMA with Block Expansion