University of Central Florida - Orlando, FL
in/xinran-tang-b6295624b
Stars
A curated list of resources for using LLMs to develop more competitive grant applications.
This repo contains all the ROS2 driver packages modified at AI4CE lab for working with various robots
A curated list of reinforcement learning with human feedback resources (continually updated)
Topological Semantic Graph Memory for Image Goal Navigation (CoRL 2022 oral)
Code for TIDEE: Novel Room Reorganization using Visuo-Semantic Common Sense Priors
Implementation of paper - YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information
All course files for the Docker Crash Course tutorial on the Net Ninja site & YouTube channel.
[ECCV 2024] The official code of paper "Open-Vocabulary SAM".
Official implementation for the paper "Deep ViT Features as Dense Visual Descriptors".
PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO
[ICCV 2023] Tracking Anything with Decoupled Video Segmentation
CARLsim is an efficient, easy-to-use, GPU-accelerated software framework for simulating large-scale spiking neural network (SNN) models with a high degree of biological detail.
VMamba: Visual State Space Models; code is based on Mamba
Code for the CVPR2021 paper "Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition"
This repository contains the codebase and all the relevant information for our IJCV paper on VPR-Bench: An open-source Visual Place Recognition Evaluation Framework
This repo accompanies the research paper, ARKitScenes - A Diverse Real-World Dataset for 3D Indoor Scene Understanding Using Mobile RGB-D Data and contains the data, scripts to visualize and proces…
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
[ITSC'23] Code for 'Occupancy Prediction-Guided Neural Planner for Autonomous Driving'
Dobb·E: An open-source, general framework for learning household robotic manipulation
An open source framework for research in Embodied-AI from AI2.
[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
Official GitHub Repository for Paper "Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill", ICRA 2024
The official repo for [TPAMI'23] "Vision Transformer with Quadrangle Attention"