0.12.0 Release #4055

Merged
merged 295 commits into master on Apr 8, 2023

Conversation

@blakeblackshear (Owner) commented Oct 9, 2022

For this release, the goals are to focus on stability, diagnostics, and device compatibility. No promises, but these are the guardrails.

Docs preview: https://deploy-preview-4055--frigate-docs.netlify.app/

Stability

  • Proper daylight saving time fix by switching to storing files with unix timestamps rather than YYYY-HH folders (clients should specify a timezone in API calls to avoid any assumptions about timezone on the backend; see the sketch after this list)
  • Ensure recording segments have reasonable durations to avoid corruption
  • Incorporate go2rtc to get rid of common RTMP-related issues and provide new live view options
  • Ensure Frigate doesn't fill up storage and crash
  • Hwaccel presets so defaults can be updated without requiring users to change config files
  • Attempt to replace the VOD module with a direct m3u8 playlist builder
  • Make MQTT optional and more gracefully handle connection failures and reconnects
  • Detect if segments stop being written to tmp when record is enabled and restart the ffmpeg process responsible for recording
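As a sketch of the underlying DST problem (illustrative Python, not Frigate's implementation): local wall-clock folder names repeat when clocks fall back, while unix timestamps stay unique and can be rendered in whatever timezone the client specifies.

```python
# Illustrative only -- not Frigate's implementation.
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/Chicago")

# When clocks fall back, 01:30 local time occurs twice.
first = datetime(2022, 11, 6, 6, 30, tzinfo=timezone.utc)  # 01:30 CDT
second = first + timedelta(hours=1)                        # 01:30 CST

for t in (first, second):
    local = t.astimezone(tz)
    # Wall-clock folder name vs. unix timestamp
    print(local.strftime("%Y-%m-%d/%H-%M"), "->", int(t.timestamp()))

# Both rows print the folder name 2022-11-06/01-30 (a collision), but the
# timestamps 1667716200 and 1667719800 remain distinct; a client can always
# re-render a timestamp in the timezone it passes to the API.
```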

Diagnostics/Troubleshooting

  • CPU/GPU stats per process per camera
  • Storage information (per camera too)
  • Logs in the UI
  • Logs from nginx/ffmpeg/go2rtc etc. more easily available
  • Simple pattern matching for secret scrubbing in logs, etc.
  • GitHub action to scrub secrets in issues
  • Ability to execute ffprobe/vainfo etc. in the container and get output for diagnosis (see the example after this list)
  • Replace green screen with a helpful placeholder message
  • Error out on duplicate keys in the config file (config.yml vs debug config #4213)
  • More helpful output from config validation errors for things like extra keys
  • Show YAML config in the UI to reduce posts with JSON config
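For example, fetching ffprobe output from the new diagnostics endpoint might look like the following; the endpoint path, port, and parameter name here are assumptions for illustration, not the documented API.

```python
# Hypothetical sketch -- endpoint path, port, and parameter name are
# assumptions for illustration, not the documented API.
import requests

FRIGATE = "http://frigate.local:5000"  # assumed host and port

resp = requests.get(
    f"{FRIGATE}/api/ffprobe",                  # hypothetical path
    params={"paths": "rtsp://camera/stream"},  # hypothetical parameter
    timeout=10,
)
resp.raise_for_status()
print(resp.text)  # raw ffprobe output for diagnosis
```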

Device support

  • TensorRT
  • OpenVINO

Known issues

  • BASE_PATH not being properly replaced in the Monaco editor
  • When watching the MSE live view on Android, if you scroll down past the video and then back up, playback is broken
  • iOS does not support MSE; instead of loading forever, it should show an error message
  • Recordings playback when selecting an hour starts playback ~10 minutes before the hour (in America/Chicago, possibly others)

netlify bot commented Oct 9, 2022

Deploy Preview for frigate-docs ready!

🔨 Latest commit 0e61ea7
🔍 Latest deploy log https://app.netlify.com/sites/frigate-docs/deploys/643161a6ad0c5d000821847b
😎 Deploy Preview https://deploy-preview-4055--frigate-docs.netlify.app

@pdecat (Contributor) commented Oct 9, 2022

Proper daylight saving time fix by switching to storing files with unix timestamps rather than YYYY-HH folders (clients should specify a timezone in API calls to avoid any assumptions about timezone on the backend)

What about keeping YYYY-HH folders but using UTC instead to avoid DST issues?

@blakeblackshear (Owner Author)

What about keeping YYYY-HH folders but using UTC instead to avoid DST issues?

That could work too
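A minimal sketch of that alternative (the folder pattern here is illustrative):

```python
# Sketch: derive the hour folder from UTC so DST never shifts,
# skips, or duplicates a folder name.
from datetime import datetime, timezone

folder = datetime.now(timezone.utc).strftime("%Y-%m-%d/%H")
print(folder)  # e.g. 2022-10-09/14, unambiguous regardless of local DST
```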

@Cold-Lemonade

I like Frigate's feature where the camera's stream fills the computer screen when you click on the image of the camera's live view. This feature would really be great if it would switch over to the camera's stream identified as "rtmp" in the config file when it does this. For example, I use the low res stream for "detect", but use the high res stream for "rtmp". I would like to see the high res stream when I click on the live image to fill the screen. Would this be possible given that you're planning to incorporate go2rtc?

@blakeblackshear (Owner Author)

I like Frigate's feature where the camera's stream fills the computer screen when you click on the image of the camera's live view. This feature would really be great if it would switch over to the camera's stream identified as "rtmp" in the config file when it does this. For example, I use the low res stream for "detect", but use the high res stream for "rtmp". I would like to see the high res stream when I click on the live image to fill the screen. Would this be possible given that you're planning to incorporate go2rtc?

Yes. The current live view requires decoding the video stream which would require lots of CPU. go2rtc would allow a direct passthrough of the video to the frontend so the higher resolution can be used.

@Cold-Lemonade

Yes. The current live view requires decoding the video stream which would require lots of CPU. go2rtc would allow a direct passthrough of the video to the frontend so the higher resolution can be used.

Given that go2rtc allows for direct passthrough of the video to the frontend, could the "camera" tab in the Frigate UI show low-res live feeds instead of the snapshots it currently shows?

@blakeblackshear (Owner Author)

Given that go2rtc allows for direct passthrough of the video to the frontend, could the "camera" tab in the Frigate UI show low-res live feeds instead of the snapshots it currently shows?

That's technically already possible without go2rtc. There are existing feature requests for that.

@thinkloop

  • Proper daylight saving time fix by switching to storing files with unix timestamps rather than YYYY-HH folders (clients should specify a timezone in API calls to avoid any assumptions about timezone on the backend)

Would this mean that all the files would be in one folder?

What about keeping YYYY-HH folders but using UTC instead to avoid DST issues?

Personally I would much prefer this solution; unix timestamps are annoying to work with as a human.

@blakeblackshear (Owner Author)

Would this mean that all the files would be in one folder?

Not necessarily.

Personally I would much prefer this solution; unix timestamps are annoying to work with as a human.

This will probably be just as easy, but why do you need to interact with the segment data? I think of them as internal binary blob storage for the database, not intended to be used directly.

@thinkloop commented Oct 13, 2022

but why do you need to interact with the segment data? I think of them as internal binary blob storage for the database, not intended to be used directly.

That's always the dream, but in the end we still rely on our laptop fans to discover errant processes 😛. In my case I concatenated segments to make a timelapse, which ended up being exceptionally easy to accomplish given how nicely organized everything in Frigate is.
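For anyone wanting to do something similar, a rough sketch of that kind of concatenation (the recordings path and 60x speed factor are illustrative assumptions):

```python
# Rough sketch of a segment timelapse using ffmpeg's concat demuxer.
# The recordings path and 60x speed factor are illustrative assumptions.
import pathlib
import subprocess

segments = sorted(pathlib.Path("/media/frigate/recordings").rglob("*.mp4"))
with open("list.txt", "w") as f:
    for seg in segments:
        f.write(f"file '{seg}'\n")  # concat demuxer playlist format

subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0", "-i", "list.txt",
     "-vf", "setpts=PTS/60", "-an", "timelapse.mp4"],  # 60x speed, no audio
    check=True,
)
```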

@rsnodgrass commented Oct 14, 2022

With go2rtc being integrated directly, does this mean we can stop using the go2rtc add-on within Home Assistant? If Frigate is running on a host other than HA, how would the WebRTC streams be exposed via HA when external to the network? Thanks for continually making Frigate better!!!

@NickM-27 (Sponsor Collaborator)

With go2rtc being integrated directly, does this mean we can stop using the go2rtc add-on within Home Assistant? If Frigate is running on a host other than HA, how would the WebRTC streams be exposed via HA when external to the network? Thanks for continually making Frigate better!!!

It depends what all you're using go2rtc for. In this initial implementation, not all go2rtc features will necessarily be available. There will also be some caveats with things like WebRTC, where you may need to run Frigate with host networking.

@rsnodgrass

It depends what all you're using go2rtc for. In this initial implementation, not all go2rtc features will necessarily be available. There will also be some caveats with things like WebRTC, where you may need to run Frigate with host networking.

Basically the bare minimum, just to get fast streams viewable via Home Assistant without lag on startup. Streaming the RTMP feeds directly through HA from a separate Frigate host is slow. I planned to use go2rtc to provide a WebRTC stream that is much faster via the Frigate Lovelace card/integration.

@LordNex commented Oct 23, 2022

It depends what all you're using go2rtc for. In this initial implementation, not all go2rtc features will necessarily be available. There will also be some caveats with things like WebRTC, where you may need to run Frigate with host networking.

Basically the bare minimum, just to get fast streams viewable via Home Assistant without lag on startup. Streaming the RTMP feeds directly through HA from a separate Frigate host is slow. I planned to use go2rtc to provide a WebRTC stream that is much faster via the Frigate Lovelace card/integration.

Too bad it doesn't work well with UI generators like Dwain's Dashboard. On this last rebuild of HA I went with it just for ease of setup. Boy, I wish I hadn't and had just hard-coded the yaml. Granted, DD is nice and looks good, but it's very prone to bugs and breaks with updates, can't work with some outside plugins like the WebRTC card, and lacks any real follow-through development that would allow it to work with WebRTC.

I tried adding it, and it broke the setup to where I could not remove it except by manually editing the file.

@LordNex commented Oct 23, 2022

Can't wait to see TensorRT. I would love to have Frigate on my 4gig Jetson Nano with the Coral TPU attached to be able to get the best out of the object detection while also having a GPU to encode and decode streams.

@NickM-27 (Sponsor Collaborator)

Can't wait to see TensorRT. I would love to have Frigate on my 4gig Jetson Nano with the Coral TPU attached to be able to get the best out of the object detection while also having a GPU to encode and decode streams.

I'm confused; that setup wouldn't use TensorRT unless you mean using both that and a Coral.

What you're describing should be possible today.

@kdill00 commented Oct 24, 2022

Yeah, you should already be able to accomplish what you stated. TensorRT is for using the GPU for detection. Jetsons have an NVDEC chip in them. You should be able to follow the NVIDIA hwaccel docs to get where you want. If you have performance issues, apply the settings -threads 1 -surfaces 10 in this line: -c:v h264_cuvid -threads 1 -surfaces 10

This will limit the decoding hardware to the minimum memory needed for a successful decode. (According to NVIDIA docs, 8 surfaces is the minimum needed for a good decode, so you can probably get away with less if needed; play with it.) I don't know if the one you have has DLA cores or not, but if there are multiple GPUs displayed when you run nvidia-smi, you need to add -gpus "corresponding GPU number". So your hwaccel should look like -c:v h264_cuvid -gpus "1" -threads 1 -surfaces 10

The -gpus setting is not needed if there is only a single GPU or the one you want to use is GPU 0. If it doesn't work, let me know, as there is another way to engage NVIDIA hw decoding with ffmpeg. It just consumes more memory on the GPU and isn't ideal unless all work stays on the GPU, which with Frigate it currently doesn't.

The other setting NVIDIA explicitly recommends when decoding with ffmpeg is -vsync 0 before the hwaccel args, to prevent NVDEC from accidentally duplicating frames when it shouldn't. I have not really seen much of a difference either way with that setting, but it is stated it should always be used when decoding if possible.
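Putting that advice together, a sketch of the assembled decode arguments (the -gpus value and stream URL are placeholders; "-f null -" discards output, so running the printed command only smoke-tests that NVDEC decoding works):

```python
# Sketch of the assembled NVDEC decode arguments described above.
# The -gpus value and stream URL are placeholders.
hwaccel_args = [
    "-vsync", "0",         # keep NVDEC from duplicating frames
    "-c:v", "h264_cuvid",  # NVDEC h264 decoder
    "-gpus", "1",          # only needed with multiple GPUs
    "-threads", "1",
    "-surfaces", "10",     # close to the minimum decode surfaces
]
cmd = ["ffmpeg", *hwaccel_args, "-i", "rtsp://camera/stream", "-f", "null", "-"]
print(" ".join(cmd))
```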

@NickM-27 (Sponsor Collaborator)

@user897943 all of these are potential future improvements, but as the release goals at the top show, the focus for 0.12 is to have Frigate be more stable: in this case, meaning it does not crash and continues to record even when storage is almost full.

@NickM-27 (Sponsor Collaborator)

Awesome but what strategy will be used to manage?

As was implemented in #3942, if Frigate detects that there is not enough space for 1 hour of recordings, it will delete recordings from oldest to newest until there is space for 2 hours of recordings, and it will continue this cycle. If a user fills their storage with unrelated files and Frigate has no more recordings to delete, it will no longer crash when it is unable to move recordings to the recordings drive.
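A rough paraphrase of that hysteresis (not the code from #3942; the recordings path and bytes-per-hour estimate are assumptions):

```python
# Paraphrase of the cleanup hysteresis described above -- not the code
# from #3942. Path and bytes-per-hour estimate are assumptions.
import os
import shutil

RECORDINGS = "/media/frigate/recordings"  # assumed mount point
BYTES_PER_HOUR = 2 * 1024**3              # assumed recording rate

def free_bytes() -> int:
    return shutil.disk_usage(RECORDINGS).free

def oldest_first() -> list[str]:
    files = [os.path.join(root, name)
             for root, _, names in os.walk(RECORDINGS) for name in names]
    return sorted(files, key=os.path.getmtime)

# Trigger below 1 hour of headroom; delete oldest-first until 2 hours exist.
if free_bytes() < BYTES_PER_HOUR:
    for segment in oldest_first():
        os.remove(segment)
        if free_bytes() >= 2 * BYTES_PER_HOUR:
            break
```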

@felalex commented Nov 2, 2022

Hi everyone,

I see that it's planned for the next release to support GPU inferencing with TensorRT.
I've been wondering whether it's also planned to support using both a GPU (TensorRT) and a Coral for inferencing at the same time.
Something like:
detectors:
  coral:
    type: edgetpu
    device: usb
  cuda:
    type: tensorrt

If so, then it will probably require different models per detector type (I presume no one will want a different model for different instances of the same detector type).

So then the config will probably have to look like:

detectors:
  coral:
    type: edgetpu
    device: usb
    model: <optional model config>
  cuda:
    type: tensorrt
    model:

Is that something being considered?

@blakeblackshear (Owner Author)

I've been wondering whether it's also planned to support using both a GPU (TensorRT) and a Coral for inferencing at the same time

Yes. A mixed set of detectors is already supported.

@NateMeyer (Contributor)

I've been wondering whether it's also planned to support using both a GPU (TensorRT) and a Coral for inferencing at the same time

Yes. A mixed set of detectors is already supported.

For a mixed set of detectors, I think the model configuration will be detector-framework-specific. Do we need to tweak the model config to account for this?

@blakeblackshear (Owner Author)

For a mixed set of detectors, I think the model configuration will be detector-framework-specific. Do we need to tweak the model config to account for this?

I think that will be necessary, yeah. Perhaps the model config should now be nested under the detector.
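A sketch of what that nesting could look like (keys, paths, and the fallback behavior here are hypothetical, not Frigate's actual schema):

```python
# Hypothetical shape of per-detector model config -- keys, paths, and
# the fallback behavior are assumptions, not Frigate's actual schema.
config = {
    "detectors": {
        "coral": {
            "type": "edgetpu",
            "device": "usb",
            "model": {"path": "/edgetpu_model.tflite"},
        },
        "cuda": {
            "type": "tensorrt",
            "model": {"path": "/trt_model.engine"},
        },
    }
}

for name, detector in config["detectors"].items():
    # Each detector resolves its own model; a global model could remain
    # as a fallback for detectors that omit the key.
    model = detector.get("model", {})
    print(f"{name}: {detector['type']} -> {model.get('path', '<global default>')}")
```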

@LordNex commented Nov 13, 2022

Yeah, you should already be able to accomplish what you stated. TensorRT is for using the GPU for detection. Jetsons have an NVDEC chip in them. You should be able to follow the NVIDIA hwaccel docs to get where you want. If you have performance issues, apply the settings -threads 1 -surfaces 10 in this line: -c:v h264_cuvid -threads 1 -surfaces 10

This will limit the decoding hardware to the minimum memory needed for a successful decode. (According to NVIDIA docs, 8 surfaces is the minimum needed for a good decode, so you can probably get away with less if needed; play with it.) I don't know if the one you have has DLA cores or not, but if there are multiple GPUs displayed when you run nvidia-smi, you need to add -gpus "corresponding GPU number". So your hwaccel should look like -c:v h264_cuvid -gpus "1" -threads 1 -surfaces 10

The -gpus setting is not needed if there is only a single GPU or the one you want to use is GPU 0. If it doesn't work, let me know, as there is another way to engage NVIDIA hw decoding with ffmpeg. It just consumes more memory on the GPU and isn't ideal unless all work stays on the GPU, which with Frigate it currently doesn't.

The other setting NVIDIA explicitly recommends when decoding with ffmpeg is -vsync 0 before the hwaccel args, to prevent NVDEC from accidentally duplicating frames when it shouldn't. I have not really seen much of a difference either way with that setting, but it is stated it should always be used when decoding if possible.

Thank you for such a detailed explanation. I'm going to give it a try here soon (I have to disassemble the case to get to the SD card). I'll start from scratch with the Jetson Nano SDK image and try to add from there. Mine is the 4 GB Nano that was out prior to the big chip shortage, and I know it has a buttload of CUDA cores. But I'm not sure about DLA cores; I'll have to check. I don't remember it showing multiple GPUs when running jtop, though.

Currently I've been having pretty good success just utilizing my 40-core PowerEdge R620 and the TPU attached to a passed-through USB adapter. I'm then leveraging DoubleTake and CompreFace on my Home Assistant install for recognition and final verification. So all I'm looking for Frigate to do is the heavy NVR load (RTMP, go2rtc, or decoding/encoding of stream feeds to Home Assistant and devices), utilize the TPU for detection of person, face, or car, and send those to DoubleTake via MQTT for processing.

Ultimately I would love to see the added ability, probably in DoubleTake, to take URLs and "scrape" images with the corresponding information and present that as well. Sources such as the sex offender registry, Department of Corrections, and social media should let us identify who is at the door with as much information as possible before we open it. I know networking and IT like the back of my hand, but I wouldn't consider myself a programmer by any means, although I'm trying to learn.

@bagobones

Personally I would much prefer this solution; unix timestamps are annoying to work with as a human.

This will probably be just as easy, but why do you need to interact with the segment data? I think of them as internal binary blob storage for the database, not intended to be used directly.

As per some of my feature requests, I GREATLY prefer self-documenting files that don't depend on the application OR the DB to function. Having a logical / human-readable / searchable file system means I could back up a day's worth of videos without needing Frigate to view them or know when they were made, and if the DB corrupts or something I can still go back through historic videos with file system search and a video player.

@invisible999

Thank you for such a detailed explanation. I'm going to give it a try here soon (I have to disassemble the case to get to the SD card). I'll start from scratch with the Jetson Nano SDK image and try to add from there. Mine is the 4 GB Nano that was out prior to the big chip shortage, and I know it has a buttload of CUDA cores. But I'm not sure about DLA cores; I'll have to check. I don't remember it showing multiple GPUs when running jtop, though.

Replied to you in another thread; please share the outcome. I also have a Nano 4GB sitting waiting for good use, and GPU-accelerated object detection with HA would be the perfect use for it.

* Add ffprobe endpoint
* Get ffprobe for multiple inputs
* Copy ffprobe in output
* Fix bad if statement
* Return full output of ffprobe process
* Return full output of ffprobe process
* Make ffprobe button show dialog with output and option to copy
* Add driver names to consts
* Add driver env var name
* Setup general tracking for GPU stats
* Catch RPi args as well
* Add util to get radeontop results
* Add real amd GPU stats
* Fix missed arg
* pass config
* Use only the values
* Fix vram
* Add nvidia gpu stats
* Use nvidia stats
* Add chart for gpu stats
* Format AMD with space between percent
* Get correct nvidia %
* Start to add support for intel GPU stats
* Block out RPi as util is not currently available
* Formatting
* Fix mypy
* Strip for float conversion
* Strip for float conversion
* Fix percent formatting
* Remove name from gpu map
* Add tests and fix AMD formatting
* Add nvidia gpu stats test
* Formatting
* Add intel_gpu_top for testing
* Formatting
* Handle case where hwaccel is not setup
* Formatting
* Check to remove none
* Don't use set
* Cleanup and fix types
* Handle case where args is list
* Fix mypy
* Cast to str
* Fix type checking
* Return none instead of empty
* Fix organization
* Make keys consistent
* Make gpu match style
* Get support for vainfo
* Add vainfo endpoint
* Set vainfo output in error correctly
* Remove duplicate function
* Fix errors
* Do cpu & gpu work asynchronously
* Fix async
* Fix event loop
* Fix crash
* Fix naming
* Send empty data for gpu if error occurs
* Show error if gpu stats could not be retrieved
* Fix mypy
* Fix test
* Don't use json for vainfo
* Fix cross references
* Strip unicode still
* await vainfo response
* Add gpu deps
* Formatting
* remove comments
* Use empty string
* Add vainfo back in
mweinelt and others added 27 commits March 3, 2023 17:44
It supports the same entrypoints, given that tflite is a small cut-out
of the big tensorflow picture.

This patch was created for downstream usage in nixpkgs, where we don't
have the tflite python package, but do have the full tensorflow package.
* Set end time for download event
* Set the value

* docs: adds note about dynamic config
* less technical verbiage
* removes `dynamic configuration` verbiage
* list all replaceable values

I believe that we should use the defined rtsp_cam_sub, not test_cam_sub
I believe that it should be RTSP there

* Fix timezone issues with strftime
* Fix timezone adjustment
* Fix bug

* Make note that snapshots are required for Frigate+
* Fix spacing

* Fixed extension of config file
Using frigate.yml as the config file for the HA addon gives a validation error; the same contents in frigate.yaml work.
* More accurate description of config file handling.
* Update docs/docs/configuration/index.md
Co-authored-by: Nicolas Mowen <[email protected]>

* Point to specific tag of go2rtc docs
* Point to go2rtc 1.2.0 docs
* Point to go2rtc 1.2.0 docs
* Update camera_specific.md

* Comment out timezone as it should not be set to None if copied
* Use "" for ffmpeg: so it does not appear as a comment
* Add example to timezone setting

I always forget that for the logs to appear there, they should be sent not to stderr but to stdout.

* Update Unifi specific configuration
Provided more specific detail on what modifications are required to the Unifi camera rtsps links: change to rtspx to remove authentication and remove the ?enableSrtp to function over TCP. Provided a sample configuration for a Unifi camera.
* Update docs/docs/configuration/camera_specific.md
* Update docs/docs/configuration/camera_specific.md
Co-authored-by: Nicolas Mowen <[email protected]>
@blakeblackshear marked this pull request as ready for review April 8, 2023 12:45
@blakeblackshear merged commit da3e197 into master Apr 8, 2023