
Jetson-Example: Run Depth Anything on NVIDIA Jetson Orin 🚀

This project provides a one-click deployment of Depth Anything, a monocular depth estimation model developed by the University of Hong Kong and ByteDance. The deployment runs on the reComputer J4012 (Jetson Orin NX 16GB, 100 TOPS) and includes a WebUI for converting models to TensorRT and for real-time depth estimation.

All models and the inference engine used in this project come from the official Depth Anything repository.

🔥Features

  • One-click deployment for Depth Anything models.

  • WebUI for model conversion and depth estimation.

  • Support for uploading videos/images or using a local camera.

  • Support for the S, B, and L Depth Anything models with input sizes of 308, 384, 406, and 518.

    🗝️WebUI Features

    • Choose model: Select a Depth Anything variant (S, B, or L, e.g. depth_anything_vits14).
    • Choose input size: Select the desired input size (308, 384, 406, or 518).
    • Grayscale option: Option to use grayscale.
    • Choose source: Select the input source (Video, Image, Camera).
    • Export Model: Automatically download and convert the model from PyTorch (.pth) to TensorRT format.
    • Start Estimation: Begin depth estimation using the selected model and input source.
    • Stop Estimation: Stop the ongoing depth estimation process.

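The grayscale option renders the raw depth map as an 8-bit image. A minimal sketch of that normalization step, assuming the model output is a NumPy array of relative depths (the function name below is illustrative, not part of this project):

```python
import numpy as np

def depth_to_grayscale(depth: np.ndarray) -> np.ndarray:
    """Normalize a relative depth map to an 8-bit grayscale image."""
    d_min, d_max = depth.min(), depth.max()
    if d_max - d_min < 1e-8:                      # flat map: avoid division by zero
        return np.zeros_like(depth, dtype=np.uint8)
    scaled = (depth - d_min) / (d_max - d_min)    # -> [0, 1]
    return (scaled * 255).astype(np.uint8)        # -> [0, 255]

# Example: a tiny 2x2 "depth map"
gray = depth_to_grayscale(np.array([[0.5, 1.0], [1.5, 2.0]]))
print(gray)  # nearest points map toward 0, farthest toward 255
```

The same normalized array can also be passed through a colormap instead of grayscale; only this final rendering step changes, not the estimation itself.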
🥳Getting Started

📜Prerequisites

  • reComputer J4012 (🛒Buy Here)
  • Docker installed on reComputer
  • USB Camera (optional)

🚀Installation

PyPI (recommended)

pip install jetson-examples

Linux (one-line install script)

curl -fsSL https://raw.githubusercontent.com/Seeed-Projects/jetson-examples/main/install.sh | sh

GitHub (for developers)

git clone https://github.com/Seeed-Projects/jetson-examples
cd jetson-examples
pip install .

📋Usage

  1. Run the example:

    reComputer run depth-anything
  2. Open a web browser and go to http://{reComputer ip}:5000. Use the WebUI to select the model, input size, and source.

  3. Click on Export Model to download and convert the model.

  4. Click on Start Estimation to begin the depth estimation process.

  5. View the real-time depth estimation results on the WebUI.
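Step 2 assumes the WebUI listens on port 5000. A quick way to confirm the server is reachable from another machine on the network, assuming that default port (the helper names below are illustrative, not part of jetson-examples):

```python
import urllib.request
import urllib.error

def webui_url(ip: str, port: int = 5000) -> str:
    """Build the WebUI address for a reComputer at the given IP."""
    return f"http://{ip}:{port}"

def is_webui_up(ip: str, timeout: float = 2.0) -> bool:
    """Return True if the WebUI answers an HTTP request within the timeout."""
    try:
        with urllib.request.urlopen(webui_url(ip), timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    ip = "192.168.1.50"  # replace with your reComputer's IP
    print(webui_url(ip), "reachable:", is_webui_up(ip))
```

If the check fails, verify that the container from `reComputer run depth-anything` is still running and that your machine is on the same network as the Jetson.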

⛏️Applications

  • Security: Enhance surveillance systems with depth perception.

  • Autonomous Driving: Improve environmental sensing for autonomous vehicles.

  • Underwater Scenes: Apply depth estimation in underwater exploration.

  • Indoor Scenes: Use depth estimation for indoor navigation and analysis.

🔧Further Development

🙏🏻Contributing

We welcome contributions from the community. Please fork the repository and create a pull request with your changes.

✅License

This project is licensed under the MIT License.

🏷️Acknowledgements