[Detector Support]: No Event Recordings from Cameras on NVIDIA RTX 3060 with TensorRT, Despite Visible Tracking Boxes #10548
Describe the problem you are having
I've been a Frigate user for some time and recently deployed it on a machine with an NVIDIA RTX 3060, using the frigate:stable-tensorrt image via docker-compose on Portainer. It worked well after installation, so I switched to using the NVIDIA GPU as the detector. Since then, nothing gets recorded. If I disable the NVIDIA GPU as the detector, recording resumes. My setup has 3 cameras; my configuration and screenshots are included below. In short: when the detector configuration below is active, everything functions correctly except that no event recordings are made.
I've also attempted to utilize the configuration specified below, but it resulted in the same issue:
Version
0.13.2-6476F8A

Frigate config file
mqtt:
host: 192.168.0.26
port: 1883
database:
path: /db/frigate.db
ffmpeg:
hwaccel_args: preset-nvidia-h264
output_args:
record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -c:a aac
ui:
use_experimental: true
detectors:
tensorrt:
type: tensorrt
device: 0 #This is the default, select the first GPU
model:
path: /config/model_cache/tensorrt/8.5.3/yolov7-320.trt
input_tensor: nchw
input_pixel_format: rgb
width: 320
height: 320
snapshots:
# Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
# This value can be set via MQTT and will be updated in startup based on retained value
enabled: true
# Optional: print a timestamp on the snapshots (default: shown below)
timestamp: false
# Optional: draw bounding box on the snapshots (default: shown below)
bounding_box: false
# Optional: crop the snapshot (default: shown below)
crop: false
# Optional: height to resize the snapshot to (default: original size)
#height: 175
# Optional: Restrict snapshots to objects that entered any of the listed zones (default: no required zones)
#required_zones: []
# Optional: Camera override for retention settings (default: global values)
retain:
# Required: Default retention days (default: shown below)
default: 7
# Optional: Per object retention days
objects:
person: 14
birdseye:
enabled: true
mode: continuous
cameras:
# Required: name of the camera
Annke79:
mqtt:
timestamp: false
bounding_box: false
crop: true
quality: 100
height: 500
# Required: ffmpeg settings for the camera
ffmpeg:
# Required: A list of input streams for the camera. See documentation for more information.
#hwaccel_args: preset-nvidia-h265
inputs:
# Required: the path to the stream
# NOTE: path may include environment variables, which must begin with 'FRIGATE_' and be referenced in {}
- path: rtsp:https://user:[email protected]:554/Streaming/Channels/102
roles:
- detect
- rtmp
- path: rtsp:https://user:[email protected]:554/Streaming/Channels/101
roles:
- record
#fps: 9
#width: 1920
#height: 1080
motion:
mask: []
objects:
track:
- person
- bicycle
- car
- motorcycle
- bus
- bird
- cat
- dog
record:
enabled: true
retain:
days: 15
mode: motion
events:
pre_capture: 5
post_capture: 60
#objects:
# - person
retain:
default: 15
mode: active_objects
objects:
person: 15
bicycle: 15
car: 10
motorcycle: 10
bus: 7
bird: 7
cat: 7
dog: 7
cell phone: 15
detect:
stationary:
interval: 50 #interval is defined as the frequency for running detection on stationary objects.
threshold: 50 #threshold is the number of frames an object needs to remain relatively still before it is considered stationary.
# Optional: width of the frame for the input with the detect role (default: shown below)
width: 1200
# Optional: height of the frame for the input with the detect role (default: shown below)
height: 636
# Optional: desired fps for your camera for the input with the detect role (default: shown below)
# NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
fps: 10
# Optional: enables detection for the camera (default: True)
# This value can be set via MQTT and will be updated in startup based on retained value
enabled: true
# Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
#max_disappeared: 25
# Optional: Configuration for stationary object tracking
#stationary:
# Optional: Frequency for confirming stationary objects (default: shown below)
# When set to 0, object detection will not confirm stationary objects until movement is detected.
# If set to 10, object detection will run to confirm the object still exists on every 10th frame.
#interval: 0
# Optional: Number of frames without a position change for an object to be considered stationary (default: 10x the frame rate or 10s)
#threshold: 50
# Optional: Define a maximum number of frames for tracking a stationary object (default: not set, track forever)
# This can help with false positives for objects that should only be stationary for a limited amount of time.
# It can also be used to disable stationary object tracking. For example, you may want to set a value for person, but leave
# car at the default.
# WARNING: Setting these values overrides default behavior and disables stationary object tracking.
# There are very few situations where you would want it disabled. It is NOT recommended to
# copy these values from the example config into your config unless you know they are needed.
#max_frames:
# Optional: Default for all object types (default: not set, track forever)
# default: 3000
# Optional: Object specific values
#objects:
# person: 1000
dahua81:
mqtt:
timestamp: false
bounding_box: false
crop: true
quality: 100
height: 500
# Required: ffmpeg settings for the camera
ffmpeg:
# Required: A list of input streams for the camera. See documentation for more information.
inputs:
# Required: the path to the stream
# NOTE: path may include environment variables, which must begin with 'FRIGATE_' and be referenced in {}
- path: rtsp:https://user:[email protected]:554/cam/realmonitor?channel=1&subtype=2
roles:
- detect
- rtmp
- path: rtsp:https://user:[email protected]:554/live
roles:
- record
#fps: 9
#width: 1920
#height: 1080
motion:
mask:
- 1430,85,1439,31,1859,34,1865,86
objects:
track:
- person
- bicycle
- car
- motorcycle
- bus
- bird
- cat
- dog
- cell phone
record:
enabled: true
retain:
days: 15
mode: motion
events:
pre_capture: 5
post_capture: 60
#objects:
# - person
retain:
default: 15
mode: active_objects
objects:
person: 15
bicycle: 15
car: 10
motorcycle: 10
bus: 7
bird: 7
cat: 7
dog: 7
cell phone: 15
detect:
stationary:
interval: 50 #interval is defined as the frequency for running detection on stationary objects.
threshold: 50 #threshold is the number of frames an object needs to remain relatively still before it is considered stationary.
# Optional: width of the frame for the input with the detect role (default: shown below)
width: 1920
# Optional: height of the frame for the input with the detect role (default: shown below)
height: 1080
# Optional: desired fps for your camera for the input with the detect role (default: shown below)
# NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
fps: 6
# Optional: enables detection for the camera (default: True)
# This value can be set via MQTT and will be updated in startup based on retained value
enabled: true
# Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
#max_disappeared: 25
# Optional: Configuration for stationary object tracking
#stationary:
# Optional: Frequency for confirming stationary objects (default: shown below)
# When set to 0, object detection will not confirm stationary objects until movement is detected.
# If set to 10, object detection will run to confirm the object still exists on every 10th frame.
#interval: 0
# Optional: Number of frames without a position change for an object to be considered stationary (default: 10x the frame rate or 10s)
#threshold: 50
# Optional: Define a maximum number of frames for tracking a stationary object (default: not set, track forever)
# This can help with false positives for objects that should only be stationary for a limited amount of time.
# It can also be used to disable stationary object tracking. For example, you may want to set a value for person, but leave
# car at the default.
# WARNING: Setting these values overrides default behavior and disables stationary object tracking.
# There are very few situations where you would want it disabled. It is NOT recommended to
# copy these values from the example config into your config unless you know they are needed.
#max_frames:
# Optional: Default for all object types (default: not set, track forever)
# default: 3000
# Optional: Object specific values
#objects:
# person: 1000
dahua82:
mqtt:
timestamp: false
bounding_box: false
crop: true
quality: 100
height: 500
# Required: ffmpeg settings for the camera
ffmpeg:
# Required: A list of input streams for the camera. See documentation for more information.
inputs:
# Required: the path to the stream
# NOTE: path may include environment variables, which must begin with 'FRIGATE_' and be referenced in {}
- path: rtsp:https://user:[email protected]:554/cam/realmonitor?channel=1&subtype=2
roles:
- detect
- rtmp
- path: rtsp:https://user:[email protected]:554/live
roles:
- record
#fps: 9
#width: 1920
#height: 1080
motion:
mask:
- 1430,85,1439,31,1859,34,1865,86
objects:
track:
- person
- bicycle
- car
- motorcycle
- bus
- bird
- cat
- dog
record:
enabled: true
retain:
days: 15
mode: motion
events:
pre_capture: 5
post_capture: 60
#objects:
# - person
retain:
default: 15
mode: active_objects
objects:
person: 15
bicycle: 15
car: 10
motorcycle: 10
bus: 7
bird: 7
cat: 7
dog: 7
cell phone: 15
detect:
stationary:
interval: 50 #interval is defined as the frequency for running detection on stationary objects.
threshold: 50 #threshold is the number of frames an object needs to remain relatively still before it is considered stationary.
# Optional: width of the frame for the input with the detect role (default: shown below)
width: 1920
# Optional: height of the frame for the input with the detect role (default: shown below)
height: 1080
# Optional: desired fps for your camera for the input with the detect role (default: shown below)
# NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
fps: 6
# Optional: enables detection for the camera (default: True)
# This value can be set via MQTT and will be updated in startup based on retained value
enabled: true
# Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
#max_disappeared: 25
# Optional: Configuration for stationary object tracking
#stationary:
# Optional: Frequency for confirming stationary objects (default: shown below)
# When set to 0, object detection will not confirm stationary objects until movement is detected.
# If set to 10, object detection will run to confirm the object still exists on every 10th frame.
#interval: 0
# Optional: Number of frames without a position change for an object to be considered stationary (default: 10x the frame rate or 10s)
#threshold: 50
# Optional: Define a maximum number of frames for tracking a stationary object (default: not set, track forever)
# This can help with false positives for objects that should only be stationary for a limited amount of time.
# It can also be used to disable stationary object tracking. For example, you may want to set a value for person, but leave
# car at the default.
# WARNING: Setting these values overrides default behavior and disables stationary object tracking.
# There are very few situations where you would want it disabled. It is NOT recommended to
# copy these values from the example config into your config unless you know they are needed.
#max_frames:
# Optional: Default for all object types (default: not set, track forever)
# default: 3000
# Optional: Object specific values
#objects:

docker-compose file or Docker CLI command
version: "3.9"
services:
frigate:
container_name: frigate
privileged: true
restart: always
image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
deploy: # <------------- Add this section
resources:
reservations:
devices:
- driver: nvidia
count: 1 # number of GPUs
capabilities: [gpu]
shm_size: "256mb" # update for your cameras based on calculation
# devices:
# - /dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
volumes:
- /etc/localtime:/etc/localtime:ro
- /home/user/frigate/config.yml:/config/config.yml
- /mnt/frigate/:/media/frigate
- /home/user/frigate:/db
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
target: /tmp/cache
tmpfs:
size: 1000000000
ports:
- "5000:5000"
- "1935:1935" #RTMP feeds
environment:
FRIGATE_RTSP_PASSWORD: "pass"

Relevant log output
2024-03-19 18:26:04.844576461 [INFO] Preparing Frigate...
2024-03-19 18:26:04.880034160 [INFO] Starting Frigate...
2024-03-19 18:26:06.359606385 [2024-03-19 18:26:06] frigate.app INFO : Starting Frigate (0.13.2-6476f8a)
2024-03-19 18:26:06.532674532 [2024-03-19 18:26:06] peewee_migrate.logs INFO : Starting migrations
2024-03-19 18:26:06.536964430 [2024-03-19 18:26:06] peewee_migrate.logs INFO : There is nothing to migrate
2024-03-19 18:26:06.541475235 [2024-03-19 18:26:06] frigate.app INFO : Recording process started: 738
2024-03-19 18:26:06.543122744 [2024-03-19 18:26:06] frigate.app INFO : go2rtc process pid: 98
2024-03-19 18:26:06.563512183 [2024-03-19 18:26:06] frigate.app INFO : Output process started: 750
2024-03-19 18:26:06.623539940 [2024-03-19 18:26:06] detector.tensorrt INFO : Starting detection process: 748
2024-03-19 18:26:06.706786533 [2024-03-19 18:26:06] frigate.app INFO : Camera processor started for Annke79: 770
2024-03-19 18:26:06.706883067 [2024-03-19 18:26:06] frigate.app INFO : Camera processor started for dahua81: 772
2024-03-19 18:26:06.714366245 [2024-03-19 18:26:06] frigate.app INFO : Camera processor started for dahua82: 773
2024-03-19 18:26:06.714412863 [2024-03-19 18:26:06] frigate.app INFO : Capture process started for Annke79: 775
2024-03-19 18:26:06.714447532 [2024-03-19 18:26:06] frigate.app INFO : Capture process started for dahua81: 779
2024-03-19 18:26:06.714496532 [2024-03-19 18:26:06] frigate.app INFO : Capture process started for dahua82: 783
2024-03-19 18:26:07.076479864 [2024-03-19 18:26:07] frigate.detectors.plugins.tensorrt INFO : Loaded engine size: 72 MiB
2024-03-19 18:26:07.934026282 [2024-03-19 18:26:07] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +854, GPU +360, now: CPU 1366, GPU 687 (MiB)
2024-03-19 18:26:08.098608069 [2024-03-19 18:26:08] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuDNN: CPU +126, GPU +58, now: CPU 1492, GPU 745 (MiB)
2024-03-19 18:26:08.111658921 [2024-03-19 18:26:08] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +70, now: CPU 0, GPU 70 (MiB)
2024-03-19 18:26:08.111742369 [2024-03-19 18:26:08] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 1420, GPU 737 (MiB)
2024-03-19 18:26:08.111788735 [2024-03-19 18:26:08] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 1420, GPU 745 (MiB)
2024-03-19 18:26:08.111888258 [2024-03-19 18:26:08] frigate.detectors.plugins.tensorrt INFO : [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +15, now: CPU 0, GPU 85 (MiB)
2024-03-19 18:26:08.111891332 [2024-03-19 18:26:08] frigate.detectors.plugins.tensorrt WARNING : CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
2024-03-19 18:26:04.845712879 [INFO] Preparing new go2rtc config...
2024-03-19 18:26:05.253807905 [INFO] Starting go2rtc...
2024-03-19 18:26:05.315844059 18:26:05.315 INF go2rtc version 1.8.4 linux/amd64
2024-03-19 18:26:05.316099480 18:26:05.316 INF [rtsp] listen addr=:8554
2024-03-19 18:26:05.316111608 18:26:05.316 INF [api] listen addr=:1984
2024-03-19 18:26:05.316461317 18:26:05.316 INF [webrtc] listen addr=:8555
2024-03-19 18:26:14.851935154 [INFO] Starting go2rtc healthcheck service...

Operating system
Debian

Install method
Docker Compose

Coral version
CPU (no coral)

Any other information that may be helpful
Screenshots attached for: NVIDIA-SMI, Docker Portainer, PC, Dahua81, Dahua82, ANNKE79, VAINFO, System, and the model path.

Debug mode: In debug mode, when a car or a person passes by on the street, I can see the green and red boxes tracking the person and the car, respectively. However, no recordings are captured.
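For reference, here is a minimal sketch of the record block I would expect to keep event footage regardless of object activity, based on the Frigate 0.13 retention options (`all`, `motion`, `active_objects`). With `mode: active_objects`, event segments are only kept while a detected object is moving, so if detection fails entirely, no event footage survives; `mode: all` is a useful diagnostic fallback:

```yaml
record:
  enabled: true
  retain:
    days: 15
    mode: motion          # continuous recordings: keep only segments with motion
  events:
    pre_capture: 5
    post_capture: 60
    retain:
      default: 15
      mode: all           # diagnostic: keep every segment in the event window,
                          # not just those with an actively moving object
```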
Replies: 1 comment · 4 replies
Your understanding of the boxes is not quite correct: a red box means motion, and a green box is the region (area) sent to the object detector. A detected object gets a separate box of its own, with a label, score, and area. Most likely, objects are not being detected. Your driver is out of date; as the docs say, driver 535 or newer is required. Once you update the driver, the model will need to be deleted and regenerated.
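The driver check and model regeneration mentioned above can be sketched as follows (my own sketch, not from the thread; the engine path is the one from the config above, and the 535 minimum is per the Frigate 0.13 TensorRT docs):

```shell
# Query the installed NVIDIA driver version; Frigate 0.13's TensorRT detector
# requires driver 535 or newer. Falls back to "unknown" on machines without nvidia-smi.
driver="$(command -v nvidia-smi >/dev/null 2>&1 \
  && nvidia-smi --query-gpu=driver_version --format=csv,noheader \
  || echo unknown)"
echo "NVIDIA driver: ${driver}"

# After upgrading the driver, delete the cached TensorRT engine so Frigate
# rebuilds it against the new driver on the next container start:
# rm /config/model_cache/tensorrt/8.5.3/yolov7-320.trt
```

After removing the cached `.trt` file, restarting the container triggers the model generation step again.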