SSL for Image Representation

This repository hosts "SSL for Image Representation", one of the OpenLabs run by Pseudo-Lab.

Introduction page
We meet every Monday at 10 PM in the Pseudo-Lab Discord, Room YL!

Contributors

| idx | Date | Presenter | Review or Resource (Youtube) | Paper / Code |
|---|---|---|---|---|
| 1 | 2023.03.20 | Wongi Park | OT | OT |
| 2 | 2023.03.27 | Wongi Park | Youtube / Resource | SparK (ICLR 2023) / CODE |
| 3 | 2023.04.03 | Jaehyeong Chun | Resource | VICReg (ICLR 2022) / CODE |
| 4 | 2023.04.10 | Haemun Kim | Resource | SSOD (ArXiv 2023) / CODE |
| 5 | 2023.04.17 | Wongi Park | Youtube / Resource | MixMAE (CVPR 2023) |
| 6 | 2023.04.24 | Jaehyeong Chun | Youtube / Resource | DINO (ICCV 2021) / CODE |
| 7 | 2023.05.01 | Haemun Kim | Youtube / Resource | UPL (CVPR 2022) / CODE |
| 8 | 2023.05.08 | Dongryeol Lee | Youtube / Resource | RC-MAE (ICLR 2023) / CODE |
| 9 | 2023.05.22 | Wongi Park | Youtube / Resource | iTPN (CVPR 2023) / CODE |
| 10 | 2023.05.29 | Jaehyeong Chun | Youtube / Resource | iBOT (ICLR 2022) / CODE |
| 11 | 2023.06.05 | Haemun Kim | Youtube / Resource | ARSL (CVPR 2023) / CODE |
| 12 | 2023.06.12 | Dongryeol Lee | Youtube / Resource | (NIPS 2022) |
| 13 | 2023.08.28 | Wongi Park | OT | OT |
| 14 | 2023.09.04 | Wongi Park | Youtube / Resource | CDS (ICCV 2021) / CODE |

Table of Contents

Survey and Analysis

  • [ Analysis ] Unsupervised Deep Embedding for Clustering Analysis. (ICML 2016) [Paper] [CODE]
  • [ Analysis ] Revisiting Self-Supervised Visual Representation Learning (CVPR 2019) [Paper] [CODE]
  • [ Analysis ] What Makes for Good Views for Contrastive Learning? (NIPS 2020) [Paper]
  • [ Analysis ] A critical analysis of self-supervision, or what we can learn from a single image (ICLR 2020) [Paper]
  • [ Analysis ] How Useful is Self-Supervised Pretraining for Visual Tasks? (CVPR 2020) [Paper] [CODE]
  • [ Analysis ] How Well Do Self-Supervised Models Transfer? (CVPR 2021) [Paper]
  • [ Analysis ] Understanding Dimensional Collapse in Contrastive Self-supervised Learning (ICLR 2022) [Paper]
  • [ Analysis ] Revealing the Dark Secrets of Masked Image Modeling (CVPR 2023) [Paper]
  • [ Analysis ] What do Self-Supervised Vision Transformers Learn? (ICLR 2023) [Paper]

Contrastive & Distillation Learning

  • [ TraS ] Transitive Invariance for Self-supervised Visual Representation Learning. (ICCV 2017) [Paper]
  • [ NonID ] Unsupervised Feature Learning via Non-Parametric Instance Discrimination. (CVPR 2018) [Paper] [CODE]
  • [ MoCo ] Momentum Contrast for Unsupervised Visual Representation Learning (CVPR 2020) [Paper] [CODE]
  • [ MoCoV2 ] Improved Baselines with Momentum Contrastive Learning (ArXiv 2020) [Paper] [CODE]
  • [ SimCLR ] A Simple Framework for Contrastive Learning of Visual Representations (ICML 2020) [Paper] [CODE]
  • [ SimCLRv2 ] Big Self-Supervised Models are Strong Semi-Supervised Learners (NIPS 2020) [Paper] [CODE]
  • [ SwAV ] Unsupervised Learning of Visual Features by Contrasting Cluster Assignments (NIPS 2020) [Paper] [CODE]
  • [ Reasoning ] Self-Supervised Relational Reasoning for Representation Learning (NIPS 2020) [Paper] [CODE]
  • [ PIRL ] Self-Supervised Learning of Pretext-Invariant Representations (CVPR 2020) [Paper] [CODE]
  • [ SEED ] SEED: Self-supervised Distillation For Visual Representation (ICLR 2021) [Paper] [CODE]
  • [ SimSiam ] Exploring Simple Siamese Representation Learning. (CVPR 2021) [Paper] [CODE]
  • [ PixPro ] Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning. (CVPR 2021) [Paper] [CODE]
  • [ BYOL ] Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (NIPS 2020) [Paper] [CODE]
  • [ RoCo ] Robust Contrastive Learning Using Negative Samples with Diminished Semantics (NIPS 2021) [Paper] [CODE]
  • [ ImCo ] Improving Contrastive Learning by Visualizing Feature Transformation (ICCV 2021) [Paper] [CODE]
  • [ DINO ] Emerging Properties in Self-Supervised Vision Transformers (ICCV 2021) [Paper] [CODE]
  • [ Barlow Twins ] Barlow Twins: Self-Supervised Learning via Redundancy Reduction (ICML 2021) [Paper] [CODE]
  • [ VICReg ] VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning (ICLR 2022) [Paper] [CODE]
  • [ E-SSL ] E-SSL: Equivariant Contrastive Learning (ICLR 2022) [Paper] [CODE]
  • [ TriBYOL ] TriBYOL: Triplet BYOL for Self-Supervised Representation Learning (ICASSP 2022) [Paper]
  • [ DINOv2 ] DINOv2: Learning Robust Visual Features without Supervision (ArXiv 2023) [Paper] [CODE]
  • [ AVT ] AVT: Unsupervised Learning of Transformation Equivariant Representations by Autoencoding Variational Transformations (CVPR 2019) [Paper] [CODE]
  • [ MoCHI ] MoCHI: Hard Negative Mixing for Contrastive Learning (NIPS 2020) [Paper] [CODE]
  • [ SMDistill ] Unsupervised Representation Transfer for Small Networks: I Believe I Can Distill On-the-Fly (NIPS 2021) [Paper]
  • [ BURN ] BURN: Unsupervised Representation Learning for Binary Networks by Joint Classifier Training (CVPR 2022) [Paper] [CODE]
  • [ DenseCL ] Dense Contrastive Learning for Self-Supervised Visual Pre-Training (CVPR 2021) [Paper] [CODE]
  • [ RINCE ] Robust Contrastive Learning against Noisy Views (CVPR 2022) [Paper] [CODE]
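
As a concrete anchor for the contrastive entries above, the sketch below implements an NT-Xent (InfoNCE) loss in the spirit of SimCLR. It is only an illustrative sketch, not code from any of the listed repositories; the function name `nt_xent_loss` and the default temperature are our own choices.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR-style) contrastive loss for two batches of projections.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = torch.matmul(z, z.t()) / temperature           # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # drop self-similarity from the softmax
    # The positive for row i is the other view of the same image: i+N (or i-N).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

Typical usage is `nt_xent_loss(projector(encoder(view1)), projector(encoder(view2)))`, where the two views come from independent random augmentations of the same batch.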

Masked Autoencoder

  • [ MAE ] Masked Autoencoders Are Scalable Vision Learners (CVPR 2022) [Paper] [CODE]
  • [ MST ] MST: Masked Self-Supervised Transformer for Visual Representation (NIPS 2021) [Paper]
  • [ SimMIM ] SimMIM: A Simple Framework for Masked Image Modeling (CVPR 2022) [Paper] [CODE]
  • [ Adios ] Adversarial Masking for Self-Supervised Learning (ICML 2022) [Paper] [CODE]
  • [ iBOT ] iBOT 🤖: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) [Paper] [CODE]
  • [ BEiT ] BEiT: BERT Pre-Training of Image Transformers (ICLR 2022) [Paper] [CODE]
  • [ DMAE ] Denoising Masked AutoEncoders Help Robust Classification (ICLR 2023) [Paper] [CODE]
  • [ AttnMask ] What to Hide from Your Students: Attention-Guided Masked Image Modeling (ECCV 2022) [Paper] [CODE]
  • [ SparK ] Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling (ICLR 2023) [Paper] [CODE]
  • [ CIM ] Corrupted Image Modeling for Self-Supervised Visual Pre-Training (ICLR 2023) [Paper]
  • [ MixAE ] Mixed Autoencoder for Self-supervised Visual Representation Learning (CVPR 2023) [Paper]
  • [ MixMIM ] MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers (CVPR 2023) [Paper] [CODE]
  • [ DropMAE ] DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks (CVPR 2023) [Paper] [CODE]
  • [ iTPN ] Integrally Pre-Trained Transformer Pyramid Networks. (CVPR 2023) [Paper] [CODE]
  • [ ConMIM ] Masked Image Modeling with Denoising Contrast. (ICLR 2023) [Paper] [CODE]
  • [ MultiMAE ] MultiMAE: Multi-modal Multi-task Masked Autoencoders. (ECCV 2022) [Paper] [CODE]
  • [ LCO ] Learning to cluster in order to transfer across domains and tasks (ICLR 2018) [Paper] [CODE]
  • [ TinyMIM ] TinyMIM: An Empirical Study of Distilling MIM Pre-trained Models. (CVPR 2023) [Paper] [CODE]
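
To make the masked-image-modeling entries above a little more tangible, the following sketch randomly keeps a fraction of patch tokens and records which positions were masked, in the spirit of MAE-style random masking. It is an illustrative toy, not the implementation of any listed paper; `random_masking` and `mask_ratio` are assumed names.

```python
import torch

def random_masking(tokens, mask_ratio=0.75):
    """Randomly hide a fraction of patch tokens (MAE-style masking sketch).

    tokens: (B, L, D) patch embeddings. Returns the visible tokens, a binary
    mask (1 = masked, 0 = visible) in the original order, and the indices
    needed to restore that order after decoding.
    """
    B, L, D = tokens.shape
    len_keep = int(L * (1 - mask_ratio))

    noise = torch.rand(B, L, device=tokens.device)    # one random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)         # lowest scores are kept
    ids_restore = torch.argsort(ids_shuffle, dim=1)

    ids_keep = ids_shuffle[:, :len_keep]
    visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))

    mask = torch.ones(B, L, device=tokens.device)
    mask[:, :len_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)         # back to the original patch order
    return visible, mask, ids_restore
```

A decoder would then reconstruct the masked patches from the visible tokens plus learned mask tokens, with the reconstruction loss computed only on masked positions.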

Image Transformation

  • [ JigsawNet ] Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles. (ECCV 2016) [Paper] [CODE]
  • [ Colorful ] Colorful Image Colorization. (ECCV 2016) [Paper] [CODE]
  • [ Colorfulv2 ] Colorization as a Proxy Task for Visual Understanding. (CVPR 2017) [Paper] [CODE]
  • [ DeepPermNet ] DeepPermNet: Visual Permutation Learning. (CVPR 2017) [Paper] [CODE]
  • [ NAT ] Unsupervised Learning by Predicting Noise. (ICML 2017) [Paper] [CODE]
  • [ OPN ] Unsupervised Representation Learning by Sorting Sequences. (ICCV 2017) [Paper] [CODE]
  • [ Damaged JigsawNet ] Learning Image Representations by Completing Damaged Jigsaw Puzzles. (WACV 2018) [Paper] [CODE]
  • [ Rotation ] Unsupervised Representation Learning by Predicting Image Rotations. (ICLR 2018) [Paper] [CODE]
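
Among the pretext tasks above, rotation prediction is simple enough to sketch end to end: rotate every image by 0/90/180/270 degrees and train a classifier to predict which rotation was applied. The helper below is a hypothetical illustration, not the authors' code.

```python
import torch

def make_rotation_batch(images):
    """Build a rotation-prediction batch (RotNet-style pretext task sketch).

    images: (B, C, H, W). Returns 4*B rotated images and rotation labels
    0..3 for 0, 90, 180 and 270 degrees.
    """
    rotated, labels = [], []
    for k in range(4):                                        # k quarter-turns
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)
```

Training then reduces to ordinary cross-entropy on these four-way labels, with the learned backbone reused for downstream tasks.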

Vision Language Model

  • [ SINC ] SINC: Self-Supervised In-Context Learning for Vision-Language Tasks (ICCV 2023) [Paper]

Domain Generalization

  • [ CDS ] CDS: Cross-Domain Self-supervised Pre-training (ICCV 2021) [Paper] [CODE]
  • [ Deja Vu ] Deja Vu: Continual Model Generalization for Unseen Domains (ICLR 2023) [Paper] [CODE]
  • [ FlexPredict ] Predicting masked tokens in stochastic locations improves masked image modeling (ArXiv 2023) [Paper]

Anomaly Detection

  • [ CutPaste ] CutPaste: Self-Supervised Learning for Anomaly Detection and Localization (CVPR 2021) [Paper] [CODE]
  • [ SPot ] SPot-the-Difference Self-Supervised Pre-training for Anomaly Detection and Segmentation (ECCV 2022) [Paper] [CODE]

Multi-task learning

  • [ MuST ] Multi-Task Self-Training for Learning General Representations (ICCV 2021) [Paper]
  • [ SMART ] SMART: Self-supervised Multi-task pretrAining with contRol Transformers (ICLR 2023) [Paper] [CODE]

Few-shot learning

  • [ Few-shot ] When Does Self-supervision Improve Few-shot Learning? (ECCV 2020) [Paper] [CODE]
  • [ Pareto ] Pareto Self-Supervised Training for Few-Shot Learning (CVPR 2021) [Paper]

Clustering

  • [ JULE ] Joint Unsupervised Learning of Deep Representations and Image Clusters. (CVPR 2016) [Paper] [CODE]
  • [ Deep Cluster ] Deep Clustering for Unsupervised Learning of Visual Features (ECCV 2018) [Paper] [CODE]
  • [ Self Cluster ] Self-labelling via simultaneous clustering and representation learning (ICLR 2020) [Paper] [CODE]
  • [ ClusterFit ] Improving Generalization of Visual Representations (CVPR 2020) [Paper]
  • [ SCAN ] SCAN: Learning to Classify Images without Labels (ECCV 2020) [Paper] [CODE]
  • [ MisMatch ] Mitigating embedding and class assignment mismatch in unsupervised image classification (ECCV 2020) [Paper] [CODE]
  • [ RUC ] Improving Unsupervised Image Clustering With Robust Learning (CVPR 2021) [Paper] [CODE]
  • [ MICE ] MiCE: Mixture of Contrastive Experts for Unsupervised Image Clustering (ICLR 2021) [Paper] [CODE]
  • [ GATCluster ] GATCluster: Self-Supervised Gaussian-Attention Network for Image Clustering (ECCV 2020) [Paper]
  • [ Jigsaw Cluster ] Jigsaw Clustering for Unsupervised Visual Representation Learning (CVPR 2021) [Paper] [CODE]
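
Several of the clustering entries above (DeepCluster in particular) alternate between clustering the current features and training on the resulting pseudo-labels. The loop below sketches one such round at a very high level; it assumes scikit-learn's KMeans, a generic encoder/classifier pair, and a data loader that iterates in a fixed order (shuffle=False), and it is not the code of any listed paper.

```python
import torch
from sklearn.cluster import KMeans

def pseudo_label_round(encoder, classifier, loader, optimizer, n_clusters=100):
    """One DeepCluster-style round (sketch): cluster features, then fit the pseudo-labels."""
    # 1) Extract features for the whole dataset and cluster them.
    encoder.eval()
    with torch.no_grad():
        feats = torch.cat([encoder(x) for x, _ in loader]).cpu().numpy()
    pseudo = torch.as_tensor(
        KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    ).long()

    # 2) Train encoder + classifier to predict the cluster assignments.
    encoder.train()
    criterion = torch.nn.CrossEntropyLoss()
    for i, (x, _) in enumerate(loader):                 # same fixed order as above
        start = i * loader.batch_size
        labels = pseudo[start : start + x.size(0)]
        loss = criterion(classifier(encoder(x)), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In practice the listed papers add reassignment tricks (balanced clusters, restarting empty clusters, Sinkhorn-Knopp assignments in SeLa/SwAV) on top of this basic alternation.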

Blogs and Resources

Datasets
