Stars
Official repository of Evolutionary Optimization of Model Merging Recipes
Collection of AWESOME vision-language models for vision tasks
Code release for "LogME: Practical Assessment of Pre-trained Models for Transfer Learning" (ICML 2021) and "Ranking and Tuning Pre-trained Models: A New Paradigm for Exploiting Model Hubs" (JMLR 2022)
This is a collection of our zero-cost NAS and efficient vision applications.
Weakly-Supervised-Learning, Semantic Segmentation, CVPR 2023
A paper list about collaborative perception.
A Repository for Single- and Multi-modal Speaker Verification, Speaker Recognition and Speaker Diarization
Reading list for research topics in multimodal machine learning
Automated Deep Learning: Neural Architecture Search Is Not the End (a curated list of AutoDL resources and an in-depth analysis)
This is a list of awesome papers about optical flow and related work.
Exploring High-quality Target Domain Information for Unsupervised Domain Adaptive Semantic Segmentation
A collection of AWESOME things about domain adaptation
Code from the paper "DACS: Domain Adaptation via Cross-domain Mixed Sampling"
PyTorch implementation of Contrastive Learning methods
OpenMMLab Self-Supervised Learning Toolbox and Benchmark
Region-aware Contrastive Learning for Semantic Segmentation, ICCV 2021
ICCV2021 (Oral) - Exploring Cross-Image Pixel Contrast for Semantic Segmentation
A curated list of research papers exploring causality in vision. Links to code are included where available.
Semi-supervised Domain Adaptation via Minimax Entropy
Code for "Domain Adaptive Video Segmentation via Temporal Consistency Regularization" (ICCV 2021)
Vehicle-Rear: A New Dataset to Explore Feature Fusion For Vehicle Identification Using Convolutional Neural Networks
Official PyTorch implementation of "Asymmetric Loss For Multi-Label Classification" (ICCV 2021)
Per-Pixel Classification is Not All You Need for Semantic Segmentation (NeurIPS 2021, spotlight)
[ECCV'20 Spotlight] Memory-augmented Dense Predictive Coding for Video Representation Learning. Tengda Han, Weidi Xie, Andrew Zisserman.