I use almost every option available to avoid false positives: min_area, max_area, masks, and a custom threshold (never below the 0.7 default). I'm also going to use min_ratio and max_ratio for edge cases, but I would love a minimum duration, or a minimum number of frames in a zone, to avoid false positives.
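For reference, here is roughly what those existing filters look like in a zone config (camera and zone names, plus all values, are illustrative, not my actual setup):

```yaml
cameras:
  front:
    zones:
      yellow_zone:
        coordinates: 0,461,3,0,1919,0,1919,843
        filters:
          person:
            min_area: 2000     # reject tiny boxes (insects, birds)
            max_area: 100000   # reject implausibly large boxes
            min_ratio: 0.3     # width/height bounds for odd-shaped boxes
            max_ratio: 1.2
            threshold: 0.75    # kept above the 0.7 default
```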
From time to time, because of a bird, an insect, or a shadow cast by car lights or the sun, I get a false positive for person detection. Or, when there is a person in an adjacent zone and the bounding box briefly extends (for just one frame), the bottom point used for the detection enters the zone and triggers my alarm automation.
I use -stream_loop on the clips to debug them, and I see the person box in the zone for less than a second, or for a single frame above the detection threshold. I know from the docs that the threshold is based on the median of the history of scores (padded to 3 values), so it's often not caused by a "single" frame.
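To illustrate why a single frame usually isn't enough, here is a small sketch of that median-of-scores behavior. This is my approximation of what the docs describe, not Frigate's actual code, and the zero-padding of short histories is an assumption:

```python
from statistics import median

def compute_score(history, pad_to=3):
    """Approximate the score Frigate compares against `threshold`:
    the median of the recent detection scores, padded to 3 values.
    Zero-padding for short histories is an assumption on my part."""
    scores = list(history)[-pad_to:]
    while len(scores) < pad_to:
        scores.append(0.0)  # assumed padding value
    return median(scores)

# One spurious high-confidence frame is damped by the median...
print(compute_score([0.9]))            # median of [0.9, 0.0, 0.0] -> 0.0
# ...but a short run of good frames passes a 0.7 threshold:
print(compute_score([0.6, 0.9, 0.8]))  # -> 0.8
```

So the median already filters out a lone noisy frame; my problem is a short run of genuinely confident detections whose box only clips the zone briefly.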
Similarly to what's done for "Stationary Objects", I'd love the ability to also set a minimum number of frames before an object is considered detected in a zone.
This is the kind of case I have.
Despite min_ratio / max_ratio, when the subject entered the field of view, the detection landed in the yellow zone for a single frame. By the time the detection score rose above 0.7 a few frames later, the ratio was fine, but the object was already being tracked in the yellow zone.
https://docs.frigate.video/configuration/stationary_objects
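Borrowing from that stationary-objects mechanism (where a frame count decides when an object is considered stationary), a zone-level equivalent could look something like this. The option name min_frames is purely hypothetical; no such option exists today:

```yaml
cameras:
  front:
    zones:
      yellow_zone:
        coordinates: 0,461,3,0,1919,0,1919,843
        # hypothetical: only consider an object "in zone" once its
        # bottom point has been inside for N consecutive frames
        min_frames: 5
```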
I only found a mention of this in this topic:
#3583 (comment)