Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Updated Oct 30, 2024 - Python
OpenMMLab Pre-training Toolbox and Benchmark
OpenMMLab Self-Supervised Learning Toolbox and Benchmark
PASSL includes self-supervised image algorithms such as SimCLR, MoCo v1/v2, BYOL, CLIP, PixPro, SimSiam, SwAV, BEiT, and MAE, as well as foundational vision models such as Vision Transformer, DeiT, Swin Transformer, CvT, T2T-ViT, MLP-Mixer, XCiT, ConvNeXt, and PVTv2
PyTorch implementation of "data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language" from Meta AI
BE IT Final Year Machine Learning Project
This library provides Python packages for DoubleML / causal machine learning and neural networks, for simulations and case studies.
DepthLens: a demo of DPT BEiT-Large-512, used to estimate the depth of objects in images.
Example application for training Microsoft's pretrained BEiT image transformer model on a new image classification task
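Fine-tuning BEiT on a new classification task typically means attaching a fresh classification head with the new task's label count. A minimal sketch, assuming the Hugging Face `transformers` library and PyTorch are installed; it uses a tiny randomly initialised `BeitConfig` so it runs without downloading weights, whereas in practice you would load pretrained weights, e.g. `BeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224", num_labels=...)`:

```python
import torch
from transformers import BeitConfig, BeitForImageClassification

# Tiny configuration so the demo is fast and self-contained
# (real fine-tuning would start from pretrained weights instead).
config = BeitConfig(
    image_size=32,
    patch_size=4,
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=4,
    intermediate_size=128,
    num_labels=3,  # number of classes in the new task
)
model = BeitForImageClassification(config)

# One training step on a fake batch of two 32x32 RGB images.
pixel_values = torch.randn(2, 3, 32, 32)
labels = torch.tensor([0, 2])
outputs = model(pixel_values=pixel_values, labels=labels)
outputs.loss.backward()  # gradients for an optimizer step
print(outputs.logits.shape)  # torch.Size([2, 3])
```

Passing `labels` makes the model compute a cross-entropy loss internally, so the same forward call serves both training and inference.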
A repository of Agent-Attention models built on the PyTorch framework; it can be used to train models on your own image datasets.
Simple Deep Learning model for training image classifiers using the BEiT method