
Commit

Add support for selecting a specific GPU to use when converting TRT m…
NateMeyer committed Sep 21, 2023
1 parent 5d30944 commit 0d6bb67
Showing 2 changed files with 26 additions and 0 deletions.
@@ -43,6 +43,15 @@ if [[ -z ${MODEL_CONVERT} ]]; then
    exit 0
fi

# Setup ENV to select GPU for conversion
if [ ! -z ${TRT_MODEL_PREP_DEVICE+x} ]; then
    if [ ! -z ${CUDA_VISIBLE_DEVICES+x} ]; then
        PREVIOUS_CVD="$CUDA_VISIBLE_DEVICES"
        unset CUDA_VISIBLE_DEVICES
    fi
    export CUDA_VISIBLE_DEVICES="$TRT_MODEL_PREP_DEVICE"
fi

# On Jetpack 4.6, the nvidia container runtime will mount several host nvidia libraries into the
# container which should not be present in the image - if they are, TRT model generation will
# fail or produce invalid models. Thus we must request the user to install them on the host in
@@ -87,5 +96,14 @@ do
    echo "Generated ${model}.trt in $(($(date +%s)-start)) seconds"
done

# Restore ENV after conversion
if [ ! -z ${TRT_MODEL_PREP_DEVICE+x} ]; then
    unset CUDA_VISIBLE_DEVICES
    if [ ! -z ${PREVIOUS_CVD+x} ]; then
        export CUDA_VISIBLE_DEVICES="$PREVIOUS_CVD"
    fi
fi

# Print which models exist in output folder
echo "Available tensorrt models:"
cd ${OUTPUT_FOLDER} && ls *.trt;
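
The GPU selection above relies on the `${VAR+x}` parameter expansion, which distinguishes an unset variable from one set to an empty string. A standalone sketch of the idiom (illustrative only, not part of this commit):

```bash
#!/bin/bash
# ${VAR+x} expands to "x" when VAR is set (even to "") and to nothing when it
# is unset, so "[ ! -z ${VAR+x} ]" succeeds only when VAR has been set.
unset DEMO_VAR
if [ ! -z ${DEMO_VAR+x} ]; then echo "set"; else echo "unset"; fi  # prints "unset"
DEMO_VAR=""
if [ ! -z ${DEMO_VAR+x} ]; then echo "set"; else echo "unset"; fi  # prints "set"
```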
8 changes: 8 additions & 0 deletions docs/docs/configuration/object_detectors.md
@@ -239,6 +239,14 @@ frigate:
    - USE_FP16=false
```

If you have multiple GPUs passed through to Frigate, you can specify which one to use for the model conversion. By default the conversion script uses the first visible GPU; however, in systems with mixed GPU models you may not want to use that index for object detection. Add the `TRT_MODEL_PREP_DEVICE` environment variable to select a specific GPU.

```yml
frigate:
  environment:
    - TRT_MODEL_PREP_DEVICE=0 # Optionally, select which GPU is used for model optimization
```
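
If you are unsure which index corresponds to which card, you can list the GPUs visible inside the container; the printed index is the value to use for `TRT_MODEL_PREP_DEVICE`. A quick check, assuming the container is named `frigate`:

```bash
# Lists each GPU with its index, e.g. "GPU 0: NVIDIA GeForce RTX 3060 (UUID: ...)"
docker exec frigate nvidia-smi -L
```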

### Configuration Parameters

The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration.md#nvidia-gpu) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
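
For example, a configuration that pins two detectors to different GPUs could look like the following sketch (the detector names `tensorrt0` and `tensorrt1` are arbitrary labels; `type` and `device` are the parameters described above):

```yml
detectors:
  tensorrt0:
    type: tensorrt
    device: 0 # GPU index as shown by nvidia-smi in the container
  tensorrt1:
    type: tensorrt
    device: 1
```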
