Summary/recap idea #54
Comments
That would be amazing; I've never seen that concept before.
Having a daily recap like that served on a port, like we have for the realtime detection, would be amazing. You could just have a little recap on your dashboard whenever you wanted to check it and see everything from the last day.
Another thing to throw in the mix that might be cool to have is something that zmNinja (https://pliablepixels.github.io/index.html), an app for ZoneMinder, has: a summary timeline of detected events. Watch this video starting at 1:15: https://youtu.be/prtA_mv68Ok?t=75 I bet it wouldn't be too hard to set something like that up in the Home Assistant component.
Yeah. Something similar to that could be the future of the panel in Home Assistant. I already have all the event data I would need in the database.
It could be useful to overlay information from other Home Assistant entities too: motion sensors, etc.
Oh yeah! Great idea.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I think I may have found a way to do this with background subtraction. I've created an idea discussion about it (#1288), but I'll elaborate on this specific use. A background image is initialized from the parts of the image that don't change (are under a given threshold). Subsequent images are compared to this background mask, and motion is extracted as separate foreground frames in which pixels under the threshold are transparent. When creating the composite, you use the background as a starting point and then superimpose the additional motion images only where the pixels were considered foreground.
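A rough illustration of the compositing idea described above, in plain NumPy with a fixed threshold standing in for a real background model (a sketch; the function name and threshold value are hypothetical, not from Frigate):

```python
import numpy as np

def composite_motion(background, frames, threshold=25):
    """Superimpose moving regions from a sequence of frames onto a static
    background. A pixel counts as foreground when its absolute difference
    from the background exceeds `threshold`."""
    composite = background.astype(np.int16).copy()
    for frame in frames:
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        foreground = diff > threshold  # boolean mask of "moving" pixels
        composite[foreground] = frame[foreground]
    return composite.astype(np.uint8)

# Tiny grayscale example: a flat background with one bright blob per frame.
bg = np.zeros((4, 4), dtype=np.uint8)
f1 = bg.copy(); f1[0, 0] = 200  # blob in frame 1
f2 = bg.copy(); f2[3, 3] = 150  # blob in frame 2
out = composite_motion(bg, [f1, f2])
print(out[0, 0], out[3, 3])  # both blobs survive in the composite: 200 150
```

A real implementation would use a running background model (e.g. OpenCV's MOG2) rather than a single static frame, as the snippet later in this thread does.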
Hey all, newcomer to Frigate here (from ZoneMinder). Has this enhancement request been resolved? The one feature I'm missing from Frigate (vs. ZoneMinder/zmNinja) is a UI that can play a clip and highlight where there was motion (not object detection) in the clip. Currently the only way I can think of doing it is to look at the motion sensor for the camera in HA, note when the motion was, and then navigate to media and look at the recording at the right place. A bit disjointed. Am I missing something?
It depends on your recording modes. If you have motion set as the 24/7 recording retention mode, then the recordings will only be those that have motion. There's a separate option for clips that would work similarly if set to motion. There isn't a way to view motion boxes, but I'm not sure why that matters, since Frigate by default only saves the parts of clips that have an active object (motion of a detected object).
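For reference, a motion-only retention setup along those lines might look like this in the Frigate config (a sketch; the camera name is a placeholder, and the exact schema has changed across releases, so check the current docs):

```yaml
cameras:
  back_yard:
    record:
      enabled: true
      retain:
        days: 7
        mode: motion   # keep only segments that contained motion
```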
Pasting from #3174 as it's a duplicate: This tech comes up on Reddit every now and then, showing a video of many motion events on a single camera feed with the time they occurred hovering over the action: https://reddit.com/r/interestingasfuck/comments/ufavbx/security_camera_superimposes_all_the_footage_from/ It would be very nice if you could have a daytime and a nighttime summary or GIF auto-generated for a camera feed. This way you could have an HA tab to see yesterday's daytime and nighttime summaries.
Might be related or not, but I just implemented a poor man's version of a daily summary. It creates a roughly 1000x sped-up video of the day's events and then sends it via an HA notification.
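The speed-up step in a "poor man's" summary like that can be as simple as keeping every Nth source frame (a minimal sketch; the function name is hypothetical, and the 1000x figure above would correspond to speedup=1000):

```python
def timelapse_indices(total_frames, speedup):
    """Indices of the frames to keep for an approximate `speedup`x
    time-lapse: every `speedup`-th frame of the source."""
    return list(range(0, total_frames, speedup))

# A day of 5 fps footage is 432000 frames; at 1000x that leaves 432 frames,
# i.e. roughly a 1.5 minute summary when played back at 5 fps.
idx = timelapse_indices(432_000, 1000)
print(len(idx))  # 432
```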
I wrote this in object_processing but realized Frigate utilizes a lot of multithreading. Perhaps someone more familiar with this codebase could expand it out into a multithreaded job? Then it would be very easy to add boxes & labels to objects. It's based on an answer on Stack Overflow, and a handful of repos on GitHub use some derivative of that answer. Untested:

import datetime

import cv2
import numpy as np

def get_todays_recordings(self, camera):
    # get recordings for today
    today = datetime.date.today()
    tomorrow = today + datetime.timedelta(days=1)
    return Recordings.select().where(
        (Recordings.camera == camera)
        # peewee: `is not None` is evaluated by Python, not translated to SQL
        & (Recordings.object_name.is_null(False))
        & (Recordings.start_time >= today)
        & (Recordings.start_time < tomorrow)
    )

def create_daily_summary(self, camera):
    # uses opencv to combine frames into a single video
    kernel_clean = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    kernel_fill = np.ones((20, 20), np.uint8)
    fgbg = cv2.createBackgroundSubtractorMOG2()
    # sort recordings
    recordings = self.get_todays_recordings(camera)
    # get first frame
    first_frame = cv2.cvtColor(
        self.frame_manager.get(
            f"{camera}{recordings[0].start_time.timestamp()}",
            self.config.cameras[camera].frame_shape_yuv,
        ),
        cv2.COLOR_YUV2BGR_I420,
    )
    last_frame = cv2.cvtColor(
        self.frame_manager.get(
            f"{camera}{recordings[-1].end_time.timestamp()}",
            self.config.cameras[camera].frame_shape_yuv,
        ),
        cv2.COLOR_YUV2BGR_I420,
    )
    height, width, _ = first_frame.shape
    fps = self.config.cameras[camera].fps
    output_size = (width, height)
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    video_writer = cv2.VideoWriter(
        f"{camera}-{recordings[0].start_time.date()}.mp4",
        fourcc,
        fps,
        output_size,
    )
    video_writer.write(first_frame)
    # iterate through each recording
    frame_b = first_frame.copy()
    for recording in recordings:
        # get the start and end times for the recording
        start_time = recording.start_time.timestamp()
        end_time = recording.end_time.timestamp()
        # get the start and end frames for the recording
        foreground = cv2.cvtColor(
            self.frame_manager.get(
                f"{camera}{start_time}",
                self.config.cameras[camera].frame_shape_yuv,
            ),
            cv2.COLOR_YUV2BGR_I420,
        )
        end_frame = cv2.cvtColor(
            self.frame_manager.get(
                f"{camera}{end_time}",
                self.config.cameras[camera].frame_shape_yuv,
            ),
            cv2.COLOR_YUV2BGR_I420,
        )
        # uses cv2.bitwise_and(...) on the foreground and bg, a bitwise_not
        # on the frame mask, and bitwise_and on frame_b, then merges frame_b
        # with the foreground
        for frame in self.frame_manager.get_range(
            f"{camera}{start_time}", f"{camera}{end_time}"
        ):
            # get foreground objects from new frame
            frame_mask = fgbg.apply(frame)
            # clean noise
            frame_mask = cv2.morphologyEx(frame_mask, cv2.MORPH_OPEN, kernel_clean)
            # fill up foreground mask better
            frame_mask = cv2.morphologyEx(frame_mask, cv2.MORPH_CLOSE, kernel_fill)
            # remove grey (shadow) areas; alternatively set detectShadows=False
            # in the extractor, which I learned later. However, removing shadows
            # sometimes causes gaps in the primary foreground object, and I
            # found this to produce better results.
            indices = frame_mask > 100
            frame_mask[indices] = 255
            # get only foreground pixels from the new frame
            foreground_a = cv2.bitwise_and(frame, frame, mask=frame_mask)
            # clear out parts of the blended frame where foreground will be added
            frame_mask_inv = cv2.bitwise_not(frame_mask)
            modified_frame_b = cv2.bitwise_and(frame_b, frame_b, mask=frame_mask_inv)
            merged_frame = cv2.add(modified_frame_b, foreground_a)
            video_writer.write(merged_frame)
    video_writer.release()
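The mask-merge step at the end of that inner loop can be expressed in plain NumPy, which may help when reasoning about it without OpenCV (a sketch on tiny synthetic arrays; variable names mirror the snippet above):

```python
import numpy as np

# frame_b: the running composite; frame: current frame; frame_mask: 0/255 uint8
frame_b = np.full((2, 2, 3), 50, dtype=np.uint8)
frame = np.full((2, 2, 3), 200, dtype=np.uint8)
frame_mask = np.zeros((2, 2), dtype=np.uint8)
frame_mask[0, 0] = 255  # one foreground pixel

# cv2.bitwise_and(frame, frame, mask=frame_mask): keep frame only where masked
fg = np.where(frame_mask[..., None] == 255, frame, 0)
# bitwise_and with the inverted mask: keep the composite everywhere else
bg = np.where(frame_mask[..., None] == 255, 0, frame_b)
# cv2.add: the two regions never overlap, so a plain sum is equivalent here
merged = fg + bg
print(merged[0, 0, 0], merged[1, 1, 0])  # 200 50
```

Because the mask and its inverse partition the image, each output pixel comes from exactly one source, so no clipping or saturation is involved in the final add.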
Is the basic demonstration (the Reddit one) still possible, manually or automatically, on a daily basis in an upcoming Frigate release?
Something like this would be incredible. https://v.redd.it/flfjtitwfyd31