Stars: MAE (10 repositories)

[NeurIPS 2022] Official implementation of the paper "Green Hierarchical Vision Transformer for Masked Image Modeling".

Python · 169 stars · 6 forks · Updated Jan 16, 2023

A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.).

767 stars · 53 forks · Updated Jul 10, 2024

Python · 149 stars · 7 forks · Updated May 25, 2023

[ICLR 2023] Official PyTorch implementation of "What Do Self-Supervised Vision Transformers Learn?"

Jupyter Notebook · 98 stars · 8 forks · Updated Mar 13, 2024

Official implementation of Attentive Mask CLIP (ICCV 2023, https://arxiv.org/abs/2212.08653)

Python · 19 stars · 3 forks · Updated May 29, 2024

34 stars · Updated Apr 13, 2023

CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet

Python · 204 stars · 7 forks · Updated Dec 16, 2022

Code release for "SLIP: Self-supervision meets Language-Image Pre-training"

Python · 743 stars · 67 forks · Updated Feb 9, 2023

MultiMAE: Multi-modal Multi-task Masked Autoencoders, ECCV 2022

Python · 545 stars · 58 forks · Updated Dec 13, 2022

[ECCV 2024] PyTorch implementation of CropMAE, introduced in "Efficient Image Pre-Training with Siamese Cropped Masked Autoencoders".

Jupyter Notebook · 44 stars · 3 forks · Updated Jul 8, 2024