Just wanted to give a shout out to the developers, Frigate is EXCELLENT!!
I started installing this on Tuesday, having spent literally a couple of months trying to get other NVRs to do what I wanted. Today's Thursday, and I've got it set up with all the bells and whistles I wanted. A VAST improvement over the alternatives.
By implementing an algorithm that "chases" an object and refines the detection as it does so, Frigate is architecturally FAR superior to other systems. Also it's Linux, which is what a production server should be (if not UNIX).
I have it installed in an Ubuntu 22.04 VM on top of Proxmox 7.4, with a Tesla P4 GPU passed through. With the yolov7x-320.trt model, inference time is about 20 ms. I could probably get that down with a smaller model, but if my math is right, I'm good for 10 cameras at 5 FPS. I think that's enough - I only have 4 configured, with three more to install.
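For what it's worth, the back-of-envelope math: a single detector runs inferences serially, so it has a budget of 1000 ms of inference work per second of wall time. A quick sketch (20 ms is my measured inference time; 5 is the per-camera detect fps):

```shell
# One detector processes frames serially, so total detections per second
# must fit within 1000 ms of inference time per second of wall time.
INFERENCE_MS=20   # measured yolov7x-320.trt inference time
DETECT_FPS=5      # detect fps per camera
MAX_CAMERAS=$(( 1000 / (INFERENCE_MS * DETECT_FPS) ))
echo "max cameras: $MAX_CAMERAS"   # prints "max cameras: 10"
```

So 10 cameras at 5 FPS exactly saturates the detector - anything more and detections start to queue.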
Learning points:
Waste of time buying cameras with built-in AI - it's nowhere near as flexible as TensorRT (or Coral, I should imagine, though I haven't tried it). The Amcrest IP8M-2669-AI cameras I have were a few bucks more expensive than the non-AI version, and that turned out to be wasted money.
To use TensorRT, the Docker image has to have the -tensorrt tag, and it's much larger than the non-TensorRT image. I discovered this through Google hits on this board; I didn't see it in the docs.
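For anyone else hunting for it, the TensorRT build is a separate image tag. A minimal sketch of pulling and running it (the tag name is what I used; the volume paths here are illustrative, so check the install docs for the full set of options):

```shell
# Pull the TensorRT-enabled image; note the -tensorrt suffix on the tag.
docker pull ghcr.io/blakeblackshear/frigate:stable-tensorrt

# Run with the GPU exposed to the container (paths are examples only).
docker run -d \
  --name frigate \
  --gpus all \
  --shm-size=256m \
  -v /path/to/config:/config \
  -v /path/to/storage:/media/frigate \
  -p 5000:5000 \
  ghcr.io/blakeblackshear/frigate:stable-tensorrt
```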
When building the models, if running in a Proxmox (or other KVM-based) VM, the processor type has to be set to "host" rather than the default "generic kvm" type; otherwise the AVX instruction set isn't exposed to the guest. The model script will complain about this, but it's easy to miss. I spent a few hours down that rabbit hole, but it's an easy fix (assuming your processor supports AVX to begin with!)
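If it helps, the Proxmox-side fix is one command on the host (the VM ID of 100 is just an example), and you can confirm AVX is visible inside the guest afterwards:

```shell
# On the Proxmox host: set the VM's CPU type to "host" so the guest
# sees the physical CPU's feature flags, including AVX. (100 = example VM ID.)
qm set 100 --cpu host

# Inside the guest, after a restart: confirm AVX flags are now exposed.
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u
```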
If using a Maxwell or Pascal card (at least the Quadro M4000 and Tesla P4), you need to pass the "-e USE_FP16=False" flag to the TensorRT model build script, because those cards don't support FP16. Another rabbit hole: the script won't fail, but the resulting models won't work. Same for the AVX issue - the build will complete but the models won't be usable.
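For reference, the flag goes on the model-build container, not on Frigate itself. A sketch of the build invocation from memory of the docs I followed (the NVIDIA image tag, mount path, and script name may have changed, so verify against the current TensorRT detector docs):

```shell
# Build the .trt models with FP16 disabled for Maxwell/Pascal cards.
# On a card without FP16 support, the default build still "succeeds",
# but the resulting models silently fail at runtime.
mkdir -p trt-models
docker run --gpus=all --rm -it \
  -v "$(pwd)/trt-models:/tensorrt_models" \
  -e USE_FP16=False \
  nvcr.io/nvidia/tensorrt:22.07-py3 \
  /tensorrt_models/tensorrt_models.sh
```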
If using a Tesla P4, just building the models will overheat the card if it has no airflow over it. I've got mine in an HP DL360 G9 and didn't put the lid on, so it had no airflow from the chassis fans and overheated at 93°C. It's currently running happily at 64°C. It is, however, about twice as fast as the Maxwell.
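An easy way to catch this before the card throttles or shuts down is to watch the temperature during the build with nvidia-smi:

```shell
# Poll the GPU temperature (in °C) every 5 seconds during the model build.
watch -n 5 'nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader'
```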
Just a few points that will hopefully help someone installing for the first time.
And a reiteration - THANK YOU for developing this, it is fantastic in comparison with others.