
Any tips importing a custom TensorRT Model for Nvidia Detection? #9161

Answered by meriley
meriley asked this question in Question

I believe I finally got something working, using YOLOv8 even!
It's not refined by any means, but it compiles and plugs into Frigate with no errors.

  1. Create a new project with Roboflow.
  2. Upload images for detection and label them accordingly.
  3. Download the dataset export locally (don't bother with Roboflow's hosted training).
  4. Train the model. (I believe training writes a ton of data to the image, so you need to delete the image after every run; it's ~14 GB after training. I haven't investigated where it writes, so I haven't exposed a volume for cleanup yet.)
echo '#!/bin/bash

NAME=rex

# Remove Old Generated Model
rm -rf /workspace/models/$NAME

# Train Model
yolo train data=/workspace/$NAME/data.yaml project=/w…
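
The snippet above is cut off after the training step. As a rough sketch of the full train-then-export flow, assuming the Ultralytics `yolo` CLI and the `/workspace` layout from the snippet (the `epochs`/`imgsz` values, `yolov8n.pt` base weights, and output paths are illustrative placeholders, not values from the original post):

```shell
#!/bin/bash
# Hedged sketch: train YOLOv8 from a Roboflow dataset export, then export
# the best checkpoint to a TensorRT engine. NAME and all paths are
# placeholders under an assumed /workspace mount.
NAME=rex
WORKSPACE=/workspace

if command -v yolo >/dev/null 2>&1; then
    # Remove the previously generated model so runs don't mix outputs.
    rm -rf "$WORKSPACE/models/$NAME"

    # Train from the dataset export (data.yaml describes classes and paths).
    yolo train data="$WORKSPACE/$NAME/data.yaml" model=yolov8n.pt \
         project="$WORKSPACE/models" name="$NAME" epochs=100 imgsz=320

    # Export the best checkpoint to a TensorRT engine (.engine). Run this on
    # the same GPU that will do inference, since engines are device-specific.
    yolo export model="$WORKSPACE/models/$NAME/weights/best.pt" format=engine
    status=exported
else
    # Ultralytics CLI not installed on this machine; skip gracefully.
    status=skipped
fi
echo "$status"
```

Note that TensorRT engines are built for a specific GPU and TensorRT version, which is likely why this needs to be compiled on the target machine rather than downloaded pre-built.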

Replies: 3 comments 4 replies

NickM-27
Jan 1, 2024
Collaborator Sponsor

Answer selected by NickM-27