Stars
Build your neural network easily and quickly (Morvan Python tutorials, in Chinese)
PyTorch Tutorial for Deep Learning Researchers
A Python 3 programming tutorial for beginners.
[ICRA 2022] An open-source framework for cooperative detection. Official implementation for OPV2V.
A Phi-3 book for getting started with Phi-3, a family of open AI models developed by Microsoft. Phi-3 models are the most capable and cost-effective small language models (SLMs) avai…
Materials for the Microsoft-Phi-3-NvidiaNIMWorkshop
Transformer Explained: Learn How LLM Transformer Models Work with Interactive Visualization
Understanding Deep Learning - Simon J.D. Prince
FILM: Frame Interpolation for Large Motion (ECCV 2022).
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use th…
[Embodied-AI-Survey-2024] Paper list and projects for Embodied AI
Open source implementation of CVPR 2020 "Video to Events: Recycling Video Dataset for Event Cameras"
OpenMMLab Detection Toolbox and Benchmark
A curated list of foundation models for vision and language tasks
Awesome Object Detection based on handong1587 github: https://handong1587.github.io/deep_learning/2015/10/09/object-detection.html
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (V…
Utilities intended for use with Llama models.
Implementation of "Recurrent Vision Transformers for Object Detection with Event Cameras". CVPR 2023
This repository contains demos I made with the Transformers library by HuggingFace.
The most complete collection of commonly used tools and reference material for the crypto and blockchain space: cryptocurrencies, OKX, Binance, Gate.io app registration, NFT, DeFi, crypto wallets, Bitcoin, beginner tutorials (continuously updated)
Collection of AWESOME vision-language models for vision tasks
Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation