
Summary/recap idea #54

Open
blakeblackshear opened this issue Aug 2, 2019 · 16 comments
Labels
enhancement New feature or request pinned

Comments

@blakeblackshear
Owner

Something like this would be incredible. https://v.redd.it/flfjtitwfyd31

@steveneighbour

That would be amazing; I've never seen that concept before.

@scstraus

scstraus commented Aug 6, 2019

Having a daily recap like that available on a port, like we have for the realtime detection, would be amazing. You could just have a little recap running on your dashboard whenever you wanted to check it and see everything from the last day.

@blakeblackshear
Owner Author

https://towardsdatascience.com/build-a-motion-heatmap-videousing-opencv-with-python-fd806e8a2340
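For context, the linked article builds a motion heatmap by accumulating background-subtraction masks across a video's frames and overlaying the accumulated intensity as a colormap. A minimal sketch of that general approach (the input path, threshold, and blend weights are illustrative placeholders, not values from the article):

import cv2
import numpy as np

# Accumulate background-subtraction masks over a video, then overlay
# the accumulated motion intensity on the last frame as a colormap.
cap = cv2.VideoCapture("input.mp4")  # placeholder path
fgbg = cv2.createBackgroundSubtractorMOG2()
accumulator = None
last_frame = None

while True:
    ret, frame = cap.read()
    if not ret:
        break
    mask = fgbg.apply(frame)
    # ignore weak responses and shadows before accumulating
    _, mask = cv2.threshold(mask, 127, 1, cv2.THRESH_BINARY)
    if accumulator is None:
        accumulator = np.zeros(mask.shape, dtype=np.float64)
    accumulator += mask
    last_frame = frame
cap.release()

# scale the accumulated motion to 0-255 and colorize it
heat = cv2.normalize(accumulator, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
heat_color = cv2.applyColorMap(heat, cv2.COLORMAP_JET)
# blend the heatmap over the last frame of the video
overlay = cv2.addWeighted(last_frame, 0.6, heat_color, 0.4, 0)
cv2.imwrite("heatmap.png", overlay)

Normalizing before colorizing keeps the overlay comparable across videos of different lengths.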

@hawkeye217
Collaborator

hawkeye217 commented Dec 21, 2020

Another thing to throw into the mix that might be cool to have is something ZMNinja (https://pliablepixels.github.io/index.html), an app for ZoneMinder, has: a summary timeline of detected events.

Watch this video starting at 1:15:

https://youtu.be/prtA_mv68Ok?t=75

I bet it wouldn't be too hard to set something like that up in the Home Assistant component.

@blakeblackshear
Owner Author

Yeah, something similar to that could be the future of the panel in Home Assistant. I already have all the event data I would need in the database.

@blakeblackshear
Owner Author

It could be useful to overlay information from other Home Assistant entities too: motion sensors, etc.

@hawkeye217
Collaborator

Oh yeah! Great idea.

@gpete
Contributor

gpete commented Jun 25, 2021

I think I may have found a way to do this with background subtraction. I've created an idea discussion about it (#1288), but I'll elaborate on this specific use. A background image is initialized from the parts of the image that don't change (that are under a given threshold). Subsequent images are compared to this background mask, and motion is extracted as separate foreground frames where pixels under the threshold are transparent. When creating the composite, you use the background as a starting point and then superimpose the additional motion images only where the pixels were considered foreground.
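A minimal sketch of that compositing step with OpenCV, assuming BGR frames and a fixed per-pixel difference threshold (the function name and threshold value are illustrative, not from Frigate or #1288):

import cv2

def composite_motion(background, frames, threshold=30):
    """Superimpose motion from `frames` onto a static `background`.

    Pixels that differ from the background by less than `threshold`
    are treated as transparent; the rest are copied over the composite.
    """
    result = background.copy()
    bg_gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # foreground = pixels that changed relative to the background
        diff = cv2.absdiff(gray, bg_gray)
        _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
        # keep the composite where nothing changed, the new frame where it did
        result = cv2.bitwise_and(result, result, mask=cv2.bitwise_not(mask))
        result = cv2.add(result, cv2.bitwise_and(frame, frame, mask=mask))
    return result

Per-pixel differencing is the simplest form of this; OpenCV's background subtractors (e.g. MOG2) would adapt the background over time instead of keeping it fixed.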

@asantaga

hey all, newcomer to Frigate here (from ZM).

Has this ER been resolved? The one feature I'm missing from Frigate (vs. ZoneMinder/zmNinja) is a UI that can show a clip and highlight where there was motion (not object detection) in the clip. Currently the only way I can think of doing it is to look at the motion sensor for the camera in HA, note when the motion occurred, and then navigate to the media browser and look at the media at the right place. A bit disjointed.

Am I missing something?

@NickM-27
Sponsor Collaborator

(replying to @asantaga above)

It depends on your recording modes. If you have motion set as the retention mode for 24/7 recording, then the only recordings kept will be those that have motion.

There's a separate option for clips that, if set to motion, would work similarly.

There isn't a way to view motion boxes, but I'm not sure why that matters, since Frigate by default only saves the parts of clips that have an active object (motion of a detected object).

@poldim

poldim commented Jun 4, 2022

Pasting from #3174 as it's a duplicate

This tech comes up on Reddit every now and then, showing a video of many motion events on a single camera feed with the time they occurred hovering over the action: https://reddit.com/r/interestingasfuck/comments/ufavbx/security_camera_superimposes_all_the_footage_from/
This user appears to have coded something in Python to do it: https://github.com/Askill/Video-Summary

It would be very nice if a daytime and a nighttime summary or GIF could be auto-generated for a camera feed. That way you could have an HA tab to see yesterday's daytime and nighttime summaries.

luoj1 pushed a commit to luoj1/frigate that referenced this issue Apr 29, 2023
@savikko

savikko commented May 22, 2023

Might be related or not, but I just implemented a poor man's version of a daily summary:
https://gist.github.com/savikko/eb765bf99841f59c4def45e2d00272d3

It creates a roughly 1000x sped-up video of the day's events and then sends it via an HA notification.
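The speed-up itself can be done with ffmpeg's setpts filter; a minimal sketch of that step, with placeholder paths and speed factor (see the gist for the actual implementation):

import subprocess

# Speed a day's concatenated recording up ~1000x by rewriting frame
# timestamps, then let Home Assistant pick the output file up for a
# notification. Paths are placeholders.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "day.mp4",           # concatenated events for the day
        "-vf", "setpts=PTS/1000",  # 1000x speed-up
        "-an",                     # drop audio
        "summary.mp4",
    ],
    check=True,
)

setpts rewrites frame timestamps, so the player simply plays the same frames 1000x faster; dropping audio avoids having to resample it.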

@NickM-27 added the enhancement New feature or request label Aug 18, 2023
@AskAlice

AskAlice commented Nov 13, 2023

I wrote this in object_processing but realized Frigate utilizes a lot of multithreading. Perhaps someone more familiar with this codebase could expand it into a multithreaded job? Then it would be very easy to add boxes and labels to objects.

It's based on this answer on Stack Overflow, and a handful of repos on GitHub use some derivative of it:
https://stackoverflow.com/questions/56429091/combining-features-of-all-videos-using-python

untested

import datetime

import cv2
import numpy as np

def get_todays_recordings(self, camera):
    # get all of today's recordings for this camera that have a detected object
    today = datetime.date.today()
    tomorrow = today + datetime.timedelta(days=1)
    return Recordings.select().where(
        (Recordings.camera == camera)
        # peewee can't translate Python's `is not None`; use .is_null(False)
        & (Recordings.object_name.is_null(False))
        & (Recordings.start_time >= today)
        & (Recordings.start_time < tomorrow)
    )

def create_daily_summary(self, camera):
    # uses OpenCV background subtraction to blend the day's motion into a single video
    kernel_clean = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    kernel_fill = np.ones((20, 20), np.uint8)
    fgbg = cv2.createBackgroundSubtractorMOG2()
    recordings = self.get_todays_recordings(camera)
    # use the first frame of the day as the static background
    first_frame = cv2.cvtColor(
        self.frame_manager.get(
            f"{camera}{recordings[0].start_time.timestamp()}",
            self.config.cameras[camera].frame_shape_yuv,
        ),
        cv2.COLOR_YUV2BGR_I420,
    )
    height, width, _ = first_frame.shape
    fps = self.config.cameras[camera].fps
    output_size = (width, height)
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    video_writer = cv2.VideoWriter(
        f"{camera}-{recordings[0].start_time.date()}.mp4",
        fourcc,
        fps,
        output_size,
    )
    video_writer.write(first_frame)
    # blend each recording's motion onto the background, one frame at a time
    frame_b = first_frame.copy()
    for recording in recordings:
        start_time = recording.start_time.timestamp()
        end_time = recording.end_time.timestamp()
        for frame in self.frame_manager.get_range(
            f"{camera}{start_time}", f"{camera}{end_time}"
        ):
            # get foreground objects from the new frame
            frame_mask = fgbg.apply(frame)
            # clean noise
            frame_mask = cv2.morphologyEx(frame_mask, cv2.MORPH_OPEN, kernel_clean)
            # fill holes in the foreground mask
            frame_mask = cv2.morphologyEx(frame_mask, cv2.MORPH_CLOSE, kernel_fill)
            # remove grey (shadow) areas; setting detectShadows=False in the
            # extractor also works, but sometimes leaves gaps in the primary
            # foreground object, so I found thresholding here produced better results
            frame_mask[frame_mask > 100] = 255
            # keep only the foreground pixels of the new frame
            foreground_a = cv2.bitwise_and(frame, frame, mask=frame_mask)
            # clear out the parts of the blended frame where foreground will be added
            frame_mask_inv = cv2.bitwise_not(frame_mask)
            modified_frame_b = cv2.bitwise_and(frame_b, frame_b, mask=frame_mask_inv)
            # (to accumulate trails across events, frame_b could also be
            # updated to merged_frame here)
            merged_frame = cv2.add(modified_frame_b, foreground_a)
            video_writer.write(merged_frame)
    video_writer.release()
@mabed-fr

Is the basic demonstration from the Reddit video still possible, either manually or automatically on a daily basis, in an upcoming Frigate release?
