Replies: 5 comments 14 replies
-
There are 3 main sources of resource usage in Frigate, in no particular order: object detection, decoding video, and motion detection.

Object detection

Frigate supports a number of different object detection accelerators, from the Google Coral to Intel and Nvidia iGPUs / GPUs. Frigate runs motion detection, and once motion is detected, Frigate runs object detection on the motion area(s) (referred to as regions). The object detector has a model which expects a specific image size (for example, the default Coral model expects 320x320 images). For this reason, the resolution of the camera does not affect the inference speed. So, if you have an inference speed of 10 milliseconds, that means you can run (1000 ms in a second / 10 milliseconds per inference) 100 object detection inferences per second. Frigate often runs multiple inferences on a single camera frame, for instance when motion happens in two places at the same time, or when an object was partially detected at the edge of a region and a larger region should be used. The default fps for object detection is 5, which is generally good for most cases, but some cameras may need closer to 10 if there are frequent fast-moving objects.

Decoding video

Frigate must decode each camera's video stream in order to capture frames for motion + object detection. The docs highly recommend running detection on a sub stream of the camera, given that regions are resized and in most cases there is no benefit to running object detection at higher resolutions. In my personal opinion, 1280x720 is a good starting point; some cameras may need to go a bit higher if you are trying to detect small objects on that camera and it is not working consistently. Using ffmpeg, Frigate supports hardware decoding on recent Nvidia GPUs as well as Intel and AMD iGPUs / GPUs. Those GPUs will have different specs showing how many concurrent stream decodes they support, and at what resolutions.

Motion detection

Frigate runs motion detection to intelligently know when and where to run object detection. Motion detection runs only on the CPU and is directly affected by the resolution of the detect stream, which is another reason it is recommended to run a sub stream for detect.
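The inference arithmetic above can be sketched out as a quick capacity check. This is only a rough sizing tool, not Frigate's actual scheduling logic, and the 16-camera figure is an illustrative assumption; remember that multi-region frames can consume several inferences each, so you want real headroom above the nominal demand.

```python
# Rough detection-capacity estimate based on the arithmetic above.
def inferences_per_second(inference_ms: float) -> float:
    """Detections per second a single detector can sustain."""
    return 1000.0 / inference_ms

def nominal_demand(num_cameras: int, detect_fps: int = 5) -> int:
    """Frames per second sent toward the detector, before the
    extra inferences Frigate runs for multi-region frames."""
    return num_cameras * detect_fps

capacity = inferences_per_second(10)  # e.g. a Coral at 10 ms -> 100/s
demand = nominal_demand(16)           # hypothetical 16 cameras at 5 fps -> 80/s
print(capacity, demand, capacity > demand)
```

In practice not every camera has motion at once, so actual demand is usually well below the nominal figure; the nominal number is a conservative ceiling.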
-
@JShimane As @NickM-27 explains and your last question indicates, the answer depends not only on the number of cameras you intend to use, but also their resolution, fps, and how long you want to save the recordings. I have 8 cameras. For detection, I use their substreams at 704x480 at 5 fps. For recording, I use their main streams, which are a combination of 2688x1520, 2048x1536, and 1920x1080 at 20 fps, 25 fps, and 30 fps. I am running Frigate on a mini PC with an Intel N100 CPU, 16 GB of RAM, 2 USB Corals, and a 10 TB hard drive. This PC also runs Home Assistant in a VM. When I check
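The substream-for-detect / main-stream-for-record split described above maps directly onto Frigate's camera config. A minimal sketch, assuming a hypothetical camera named `front_door` and placeholder RTSP paths (your camera's actual stream URLs will differ):

```yaml
cameras:
  front_door:                        # hypothetical camera name
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@10.0.0.10:554/sub   # low-res substream
          roles:
            - detect                  # motion + object detection only
        - path: rtsp://user:pass@10.0.0.10:554/main  # full-res main stream
          roles:
            - record                  # recordings keep full quality
    detect:
      width: 704
      height: 480
      fps: 5
```

This way the CPU only ever decodes the small stream for analysis, while recordings are copied from the main stream without re-encoding.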
-
I have 30 cameras of varying qualities/ages, encodings (H.265 and H.264), and styles (panoramic). I ended up having to upgrade a bit of hardware to keep up. I ended up buying a dual Xeon 4208 server (a Dell R440 refurb), and Frigate seems to use about 40 GB of my 62 GB of RAM. I had to add a video card since the Xeons didn't support hardware decoding of the video, and I had to return my first one as it didn't have enough RAM. That's probably the biggest thing to think about: you're going to want to store all the clips in RAM while they're being processed, and you're going to want a video card to decode the videos, so you're going to want a TON of RAM and video RAM; that seems to be the most important spec. I ended up with an NVIDIA T1000 8GB. I'm not using much GPU (that seems to stay around 10%), but my server could only take a single-slot, relatively small card, and that seemed to be my best value. (I also didn't know exactly how much GPU power I would need; turns out not much.)

I did run my 30 cameras successfully on one Google Coral USB, but I had a dual Edge TPU (the Mini PCIe one) on order for a while, and it came in a few weeks ago (along with an adapter that supports dual Edge TPUs). While the USB had no problem keeping up with my cameras, when I switched over to the dual Edge TPU I had DRASTIC drops in memory usage, as Frigate was able to pull the videos out of cache that much faster. My inference rate went from 20 ms to about 8 ms on average, which seems insignificant, but that seemed to lead to a several-GB drop in memory usage. In fact, before getting the dual Edge TPU I felt my 64 GB of RAM wasn't enough and thought I couldn't add more cameras, but now I feel I can definitely squeeze some more in.

I've been reporting my issues with my larger-than-normal number of cameras to Nick, and he has been absolutely amazing at getting them fixed for me! But I will note, the fixes are all in the dev branch... so you're going to need to run that until 0.13 comes out.
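The 20 ms to 8 ms improvement described above is easy to put in context with the same throughput arithmetic from earlier in the thread. A sketch, assuming 30 cameras at Frigate's default 5 fps detect rate (the "nominal" figure is a worst case; real load is lower because cameras rarely all see motion at once, which is why the USB Coral still kept up):

```python
# Detector throughput at the two inference speeds mentioned above.
def throughput(inference_ms: float) -> float:
    """Max inferences per second at a given per-inference latency."""
    return 1000.0 / inference_ms

nominal = 30 * 5                 # 150 detections/s if all 30 cameras
                                 # saw constant motion simultaneously
usb_coral = throughput(20)       # 50/s  -> relies on motion being sparse
dual_edge = throughput(8)        # 125/s -> much closer to worst-case load
print(nominal, usb_coral, dual_edge)
```

The faster the detector drains frames, the less time decoded video sits buffered in RAM, which lines up with the memory drop reported above.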
There are a bunch of multithreading improvements, DB performance improvements, etc. in the latest dev builds that are, IMO, critical to an implementation like the one you describe.

For storage, I'm currently keeping 4 TB of video. I keep my database on a separate hard drive from the recordings. In fact, I'd recommend keeping the recordings on their own drive. I seem to be currently producing about 35 GB/hour of recordings, but I've been meaning to try to adjust motion sensitivity on some cameras, as they end up recording 24/7. Well, that, and my 2 LPR cameras that I just got recently are also set to record 24/7.

If I was doing this again, I'd probably choose a different server that can handle multiple video cards. If you have any other questions, let me know.
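The storage figures above translate into a simple retention estimate. A back-of-envelope sketch using the ~35 GB/hour and 4 TB numbers from this comment (write rates vary with motion and bitrate, so treat the result as a rough floor):

```python
# Retention estimate from a storage budget and a recording write rate.
def retention_hours(storage_gb: float, gb_per_hour: float) -> float:
    """Hours of footage that fit before the oldest must be deleted."""
    return storage_gb / gb_per_hour

hours = retention_hours(4000, 35)  # 4 TB at ~35 GB/hour
days = hours / 24                  # roughly 4-5 days of retention
print(round(hours, 1), round(days, 1))
```

Running the same numbers backwards tells you how much disk a target retention needs: for example, a week at 35 GB/hour would need roughly 6 TB.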
-
Also make sure your cache is really on tmpfs; until a recording is saved, you really need it stored in RAM. Portainer and other such "docker supervisors" can cause the tmpfs not to be stored in RAM. I experienced that at one point before I realized what was going on. From my understanding, ffmpeg should never be writing to anything but RAM.
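One way to avoid the supervisor pitfall described above is to declare the tmpfs mount explicitly when launching the container. A sketch, assuming a plain `docker run` and Frigate's documented `/tmp/cache` path; the 1 GB size is an illustrative assumption you should tune to your camera count and bitrates:

```shell
# Mount Frigate's recording cache as a genuine tmpfs (size in bytes).
docker run -d \
  --name frigate \
  --mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000 \
  ghcr.io/blakeblackshear/frigate:stable

# Verify inside the running container that the cache really is tmpfs:
docker exec frigate df -h /tmp/cache
```

If the `df` output shows a filesystem type other than tmpfs (or the host disk's device), the cache is hitting disk and writes are not RAM-backed.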
-
Hello! I am looking to use Frigate for around 60 cameras, with detection. I'm hoping to figure out what kind of hardware that would require. It's preferred that I use a single central server.
Is there any sort of rule/suggestions for hardware per camera?
i.e., TPU/cameras, CPU/cameras, GPU/cameras
Or is there any way to figure that out based on FPS and resolution per camera?
Thank you!