
[FR] Face recognition #260

Open · corgan2222 opened this issue Oct 22, 2020 · 52 comments
Labels: enhancement (New feature or request), pinned

@corgan2222

Is there any chance of implementing face recognition? At the moment I have Shinobi running for that, but it's really outdated and lacking features. Having face recognition built into Frigate would be awesome!

Thanks!

https://missinglink.ai/guides/tensorflow/tensorflow-face-recognition-three-quick-tutorials/
https://github.com/search?q=Face+Recognition+Tensorflow

@blakeblackshear
Owner

It's possible, but probably not going to be a focus in the near future.

@mattheys
Contributor

How about sending the detected person image from MQTT to a second project like https://github.com/JanLoebel/face_recognition, using something like Node-RED to glue it all together? I only suggest it because I'm going to look at doing the same thing. I haven't evaluated that project yet, so there may be better ones out there.
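
For illustration, a rough Python sketch of that glue (instead of Node-RED). It assumes Frigate publishes the best "person" snapshot as JPEG bytes on frigate/<camera>/person/snapshot; the face service URL, its upload field, and the broker address are hypothetical placeholders, not the actual JanLoebel/face_recognition API.

import json
import requests
import paho.mqtt.client as mqtt

FACE_API = "http://localhost:8080/faces/detect"  # hypothetical face service endpoint

def on_connect(client, userdata, flags, rc):
    client.subscribe("frigate/front/person/snapshot")

def on_message(client, userdata, msg):
    # msg.payload is the JPEG snapshot Frigate published
    files = {"image": ("snapshot.jpg", msg.payload, "image/jpeg")}
    resp = requests.post(FACE_API, files=files)
    print(json.dumps(resp.json(), indent=2))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)  # your MQTT broker
client.loop_forever()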

@corgan2222
Author

This might work, but I'm using face recognition to open our front door. Even with Shinobi handling this directly, the system sometimes needs several seconds, and I'm afraid that adding one more step would make the delay even worse.

Actually I want to have only one tool for all the work (motion, people, faces); that's why I asked in the first place. ;)

@Buzztiger

Frigate already accepts custom models, and there are several TFLite ones for facial recognition. IMHO, if you are able to cross-train a model with your faces, this should already work with the current code: instead of person or car, you would have the people you trained as object labels. Since I assume you don't want to open the door to just any human face, some training would be needed in any case, and given the limitations of e.g. the Coral detector, it would have to be carried out on another, preferably GPU-powered, platform anyway.

@corgan2222
Author

I have never trained my own custom model, but I like the idea. Maybe I'll give it a try.
Any tips on where to start?

The nice thing about Shinobi is that YOLO, face recognition, and FFmpeg all run on the GPU inside an Unraid Docker container. But I also have a Google Coral.

@Buzztiger

A good starting point would actually be the coral.ai homepage itself. Have a look at https://coral.ai/examples/. There are also examples of cross-training for object recognition, to get everything set up and to familiarise yourself with the topic.

For the face recognition part I had some success with this tutorial, which is for TensorFlow (GPU/CPU) and would need to be converted to run on the Coral (TFLite format). It's been a while since I looked into this, but it seems people have gotten MobileFaceNet to run on the Coral, so it's possible.
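
As a rough sketch of that conversion step, full-integer quantization in Python looks something like this (assuming a TensorFlow SavedModel and a 160x160 input as used by many FaceNet variants; all paths are placeholders):

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Feed ~100 realistic inputs so the converter can calibrate int8 ranges.
    # Replace the random arrays with real face crops for a usable model.
    for _ in range(100):
        yield [np.random.rand(1, 160, 160, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("face_model.tflite", "wb") as f:
    f.write(converter.convert())

# The quantized model then still has to be compiled for the Edge TPU:
#   edgetpu_compiler face_model.tflite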

@corgan2222
Author

Thanks a lot! Will give this a try on the weekend.

@ekos2001

ekos2001 commented Feb 4, 2021

@corgan2222 Did you have a chance to try to train your own model for face recognition? Can you share how to do it?

@RealKanashii

Also interested. But I have never really tried to train a net, probably for lack of time. :(

@corgan2222
Author

@corgan2222 Did you have a chance to try to train your own model for face recognition? Can you share how to do it?

No, I looked into it but never got around to trying it. :(

@corgan2222
Author

Still very interested in face recognition. Is there any chance on getting this kind of feature?

@blakeblackshear
Owner

Definitely. I will be looking at this with custom models soon.

@corgan2222
Author

Sounds great!
I found a really nice tutorial that explains how to easily create a TFLite Edge TPU model:
https://teachablemachine.withgoogle.com

I now have a trained model, but I'm not sure about the existing models. Do I have to combine it with the existing one to keep detecting persons/cars etc.?

@blakeblackshear
Owner

Yes

@corgan2222
Author

corgan2222 commented Mar 3, 2021

I tried this last night, but I'm stuck and have no clue how to solve it.

What I have done so far (Windows 10, WSL2):

  • created a model with 6 different image sets of 50-150 images each;
    on the website the model runs fine with a webcam

  • downloaded the TensorFlow Lite Edge TPU model

  • installed rtsp-simple-server

  • compiled ffmpeg

  • rendered some test videos at 1 fps from the saved images
    ffmpeg -framerate 1 -pattern_type glob -i '*.jpg' -c:v libx264 -r 30 front.mp4

  • sent the video to the RTSP server with ffmpeg
    ffmpeg -re -stream_loop -1 -i .\front.mp4 -c copy -f rtsp rtsp:https://localhost:8559/mystream

  • installed Frigate with Docker Compose on my Windows machine and connected it to the RTSP server;
    with the default model everything runs fine, even on CPU

[screenshot: Frigate running with the default model]

Then I changed the Docker Compose file to mount the new model and the labelmap.
Frigate comes up, shows some images, and then crashes. I have no clue what the error message means.

docker-compose

version: '3.9'
services:
  frigate:
    container_name: frigate
    privileged: true # this may not be necessary for all setups
    restart: unless-stopped
    image: blakeblackshear/frigate:stable-amd64
    devices:
      - /dev/bus/usb:/dev/bus/usb
      #- /dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./config/config.yml:/config/config.yml:ro
      - ./data:/media/frigate
      - ./model_edgetpu.tflite:/model_edgetpu.tflite
      - ./labelmap.txt:/labelmap.txt

      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - '5000:5000'
      - '1935:1935' # RTMP feeds
    environment:
      FRIGATE_RTSP_PASSWORD: 'password'

Config

mqtt:
  host: 192.168.2.120
  topic_prefix: frigate
  user: broker
  client_id: frigate_test

detectors:
  coral:
    type: edgetpu
    device: usb

ffmpeg:
  # Optional: global ffmpeg args (default: shown below)
  global_args: -hide_banner -loglevel warning
  # Optional: global hwaccel args (default: shown below)
  # NOTE: See hardware acceleration docs for your specific device
  hwaccel_args: []
  # Optional: global input args (default: shown below)
  input_args: -avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1
  # Optional: global output args
  output_args:
    # Optional: output args for detect streams (default: shown below)
    detect: -f rawvideo -pix_fmt yuv420p
    # Optional: output args for record streams (default: shown below)
    record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an
    # Optional: output args for clips streams (default: shown below)
    clips: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an
    # Optional: output args for rtmp streams (default: shown below)
    rtmp: -c copy -f flv

cameras:
  front:
    ffmpeg:
      inputs:
        #- path: rtsp:https://192.168.2.91:80/live/stream
        - path: rtsp:https://192.168.2.30:8559/mystream
          roles:
            - detect
            #- rtmp
    width: 640
    height: 480
    fps: 20
    #mask: poly,501,90,399,469,4,470,13,171,114,155,221,89,292,63,422,63,462,86

    # Optional: camera level motion config
    motion:
      # Optional: motion mask
      # NOTE: see docs for more detailed info on creating masks
      mask: 
        - 507,0,507,72,397,97,287,43,145,105,115,0

    # Optional: Camera level detect settings
    detect:
      # Optional: enables detection for the camera (default: True)
      # This value can be set via MQTT and will be updated in startup based on retained value
      enabled: True
      # Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
      max_disappeared: 25
      

    # Optional: save clips configuration
    clips:
      # Required: enables clips for the camera (default: shown below)
      # This value can be set via MQTT and will be updated in startup based on retained value
      enabled: false
      # Optional: Number of seconds before the event to include in the clips (default: shown below)
      pre_capture: 5
      # Optional: Number of seconds after the event to include in the clips (default: shown below)
      post_capture: 5
      # Optional: Objects to save clips for. (default: all tracked objects)
      objects:
        #- person
        - Stefan
        - Steffi
        - Ursel
        - Leonas
        - DHL
      # Optional: Restrict clips to objects that entered any of the listed zones (default: no required zones)
      required_zones: []
      # Optional: Camera override for retention settings (default: global values)
      retain:
        # Required: Default retention days (default: shown below)
        default: 10
        # Optional: Per object retention days
        objects:
          person: 15

    # Optional: 24/7 recording configuration
    record:
      # Optional: Enable recording (default: global setting)
      enabled: false
      # Optional: Number of days to retain (default: global setting)
      retain_days: 2          

    # Optional: RTMP re-stream configuration
    rtmp:
      # Required: Enable the live stream (default: True)
      enabled: false      

    # Optional: Configuration for the jpg snapshots published via MQTT
    mqtt:
      # Optional: Enable publishing snapshot via mqtt for camera (default: shown below)
      # NOTE: Only applies to publishing image data to MQTT via 'frigate/<camera_name>/<object_name>/snapshot'.
      # All other messages will still be published.
      enabled: false
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: True
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: True
      # Optional: crop the snapshot (default: shown below)
      crop: false
      # Optional: height to resize the snapshot to (default: shown below)
      height: 640
      # Optional: Restrict mqtt messages to objects that entered any of the listed zones (default: no required zones)
      required_zones: []


    # Optional: Camera level object filters config.
    objects:
      track:
        - person
        #- dog
      # Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
      # Checks based on the bottom center of the bounding box of the object. 
      # NOTE: This mask is COMBINED with the object type specific mask below
      #mask: 0,0,1000,0,1000,200,0,200
      # filters:
      #   person:
      #     min_area: 5000
      #     max_area: 100000
      #     min_score: 0.5
      #     threshold: 0.7
      #     # Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
      #     # Checks based on the bottom center of the bounding box of the object
      #     mask: 507,0,507,72,397,97,287,43,145,105,115,0
       

    # Optional: Configuration for the jpg snapshots written to the clips directory for each event
    snapshots:
      # Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
      # This value can be set via MQTT and will be updated in startup based on retained value
      enabled: true
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: true
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: False
      # Optional: crop the snapshot (default: shown below)
      crop: False
      # Optional: height to resize the snapshot to (default: original size)
      height: 640
      # Optional: Restrict snapshots to objects that entered any of the listed zones (default: no required zones)
      required_zones: []
      # Optional: Camera override for retention settings (default: global values)
      retain:
        # Required: Default retention days (default: shown below)
        default: 10
        # Optional: Per object retention days
        objects:
          person: 15  

  
# Optional: Global ffmpeg args
# "ffmpeg" + global_args + input_args + "-i" + input + output_args
ffmpeg:
  # Optional: global ffmpeg args (default: shown below)
  global_args:
    - -hide_banner
    - -loglevel
    - panic
  # Optional: global hwaccel args (default: shown below)
  # NOTE: See hardware acceleration docs for your specific device
  hwaccel_args: []
  # Optional: global input args (default: shown below)
  input_args:
    - -avoid_negative_ts
    - make_zero
    - -fflags
    - nobuffer
    - -flags
    - low_delay
    - -strict
    - experimental
    - -fflags
    - +genpts+discardcorrupt
    - -rtsp_transport
    - tcp
    - -stimeout
    - '5000000'
    - -use_wallclock_as_timestamps
    - '1'
  # Optional: global output args (default: shown below)
  # output_args:
  #   - -f
  #   - rawvideo
  #   - -pix_fmt
  #   - yuv420p  


Docker Log

  • Starting nginx nginx
    ...done.
    peewee_migrate INFO : Starting migrations
    peewee_migrate INFO : There is nothing to migrate
    frigate.mqtt INFO : MQTT connected
    detector.coral INFO : Starting detection process: 92
    frigate.edgetpu INFO : Attempting to load TPU as usb
    Process detector:coral:
    frigate.edgetpu INFO : No EdgeTPU detected.
    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 152, in load_delegate
        delegate = Delegate(library, options)
      File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 111, in __init__
        raise ValueError(capture.message)
    ValueError

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
        self.run()
      File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
        self._target(*self._args, **self._kwargs)
      File "/opt/frigate/frigate/edgetpu.py", line 124, in run_detector
        object_detector = LocalObjectDetector(tf_device=tf_device, num_threads=num_threads)
      File "/opt/frigate/frigate/edgetpu.py", line 63, in __init__
        edge_tpu_delegate = load_delegate('libedgetpu.so.1.0', device_config)
      File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 154, in load_delegate
        raise ValueError('Failed to load delegate from {}\n{}'.format(
    ValueError: Failed to load delegate from libedgetpu.so.1.0

    frigate.app INFO : Camera processor started for front: 97
    frigate.app INFO : Capture process started for front: 98
    frigate.watchdog INFO : Detection appears to have stopped. Exiting frigate...
    frigate.app INFO : Stopping...
    frigate.record INFO : Exiting recording maintenance...
    frigate.events INFO : Exiting event processor...
    frigate.object_processing INFO : Exiting object processor...
    frigate.events INFO : Exiting event cleanup...
    frigate.watchdog INFO : Exiting watchdog...
    frigate.stats INFO : Exiting watchdog...
    peewee.sqliteq INFO : writer received shutdown request, exiting.
    root INFO : Waiting for detection process to exit gracefully...
    frigate.video INFO : front: exiting subprocess

(the same startup log and traceback repeat on each restart)

@corgan2222
Author

converted_tflite_quantized.zip
converted_tflite.zip
converted_edgetpu(1).zip

Here are the models for TFLite.

@blakeblackshear
Owner

Custom models must match the input and output shapes that frigate expects. You need to compare your model to the one frigate uses by default to see what is different.
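
For anyone trying this, a quick way to do that comparison is to load both .tflite files (the CPU variants, since the Edge TPU build needs the delegate just to load) and print their tensor details in Python; the file names below are placeholders:

from tflite_runtime.interpreter import Interpreter

# Compare the default Frigate model against the custom one; adjust the paths.
for path in ("cpu_model.tflite", "converted_tflite.tflite"):
    interp = Interpreter(model_path=path)
    print(path)
    for detail in interp.get_input_details():
        print("  input :", detail["shape"], detail["dtype"])
    for detail in interp.get_output_details():
        print("  output:", detail["shape"], detail["dtype"])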

@stale

stale bot commented Apr 2, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@fabiopbx

@corgan2222

frigate.edgetpu INFO : No EdgeTPU detected.

I had that error when trying this: Frigate doesn't appear to detect/find the Coral TPU. I don't have one, but the error went away as soon as I told Frigate to use the CPU (detector type cpu instead of edgetpu).

@ronschaeffer
Contributor

converted_tflite_quantized.zip
converted_tflite.zip
converted_edgetpu(1).zip

Here are the models for TFLite.

Did you ever get this working? I'm interested in doing something similar. Thanks.

@corgan2222
Author

I haven't tried this again since that post, mainly because of a lack of time. I hope that @blakeblackshear implements some kind of face recognition support at some point.

@MicheleRG

MicheleRG commented May 20, 2021

An interesting project based on Frigate is https://github.com/jakowenko/double-take . I don't know if @blakeblackshear is aware of it. I'm testing it on a Jetson Nano.
P.S. 1: Frigate is a great product, my compliments, @blakeblackshear. Thanks.
P.S. 2: I think https://github.com/ageitgey/face_recognition would be a great expansion for Frigate.

@corgan2222
Author

P.S. 2: I think https://github.com/ageitgey/face_recognition would be a great expansion for Frigate.

100%

@mattheys
Contributor

I had a quick look at how double-take works. I was going to do something similar myself with Azure Face Recognition and Node-RED, but decided not to bother in the end, as we would both have the same issue: the snapshot that Frigate sends may not be optimized for facial recognition. Frigate sends you the first and then subsequently higher-percentage "person" detections.

Quite often part of my porch door hides a person's face, or the capture from my garage camera is from too far away and would need to wait until they got closer.

It's obviously better than nothing, but it would be better to have something that could listen for the person-detected event and then do its own face processing on the video feed rather than the snapshot, so it can get the best image of the face.

@guix77

guix77 commented May 21, 2021

Personally, I use the person detection to trigger a Home Assistant integration that calls Deepstack Face, which is GPU accelerated. With the current shortage of Edge TPUs, I'll keep face recognition outside of Frigate and will try to help on the GPU side of things as Frigate stands now.

@blakeblackshear
Owner

The benefit of doing this directly in frigate will be efficiency for the same reasons I combine motion detection and object detection in the same process.

With something like double take, the image is converted from raw pixel data to jpeg, transmitted over the network, decoded from jpeg to raw pixel data again, copied into a new memory location, and then face recognition is done. When this is added directly to frigate, the face recognition will leverage the already decoded raw pixel data and cut out all that extra processing.

I will also be able to add faces to the object detection models to ensure I get the best face snapshot. Once I recognize a face, I will be able to connect it to the tracked object and the event itself. This allows me to stop trying to recognize faces for already identified person objects and use multiple consecutive frames to combine scores and get better accuracy.
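
To put a rough number on that overhead, here is a toy Python benchmark (frame size and loop count are arbitrary assumptions) that times the JPEG round-trip an external tool pays for every snapshot:

import time
import cv2
import numpy as np

frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # stand-in for a decoded camera frame

start = time.perf_counter()
for _ in range(100):
    ok, jpeg = cv2.imencode(".jpg", frame)          # what gets done before sending over the network
    decoded = cv2.imdecode(jpeg, cv2.IMREAD_COLOR)  # what the receiver has to undo again
elapsed_ms = (time.perf_counter() - start) * 1000 / 100
print(f"jpeg round-trip: {elapsed_ms:.2f} ms per frame")
# In-process face recognition would read `frame` directly and skip all of this.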

@corgan2222
Author

You are absolutely correct, @blakeblackshear.

Does this mean you are going to implement face recognition in Frigate at some point? That would be awesome!
If you need beta testers, just yell! :)

@Oddsodz

Oddsodz commented May 30, 2021

Same here. I am desperate for face recognition that works. I need it for our indoor home camera setup; the use case is to keep an eye on a very elderly man with many health issues. Having face recognition would cut out a fudge ton of alerts from the home cameras as they see us walking about.

I am happy to beta test.

@ukrolelo

Ohhh, I am so looking forward to that. Maybe I'd try automated attendance for our employee access: no entrance password, no NFC tags, no fingerprint. Just blink and smile xD

@maxi1134

This would be amazing!

I am currently using Facebox and Node-RED on top of Frigate for face recognition when it detects a person.
But this is very hit and miss, since it doesn't look for a face, but for a person!

@jumping2000

Hi all! Any news about face recognition in Frigate?

@blakeblackshear
Owner

Most people are using it in combination with Double Take for face recognition.

@maxi1134

@blakeblackshear What about face detection?

That would be useful for a few use cases on my end!

Such as triggering a Double Take scan when a face is detected in a certain zone, for facial recognition that lets you in by itself.
(Using Node-RED for the automation in this case.)

@blakeblackshear
Owner

That will be implemented directly in frigate when I roll out a custom model in a future release.

@maxi1134

Can we hope to see this in 10.0?

@jumping2000

Most people are using it in combination with Double Take for face recognition.

Yes, I know. But as you told us in a previous comment, "The benefit of doing this directly in frigate will be efficiency" :-)

@blakeblackshear
Owner

Can we hope to see this in 10.0?

No, but I am aiming to get it out this year.

@corgan2222
Author

corgan2222 commented Nov 14, 2021

Most people are using it in combination with Double Take for face recognition.

@blakeblackshear Yes, you are right: most people, including myself, are using Double Take for face recognition.
I wrote the Unraid Docker templates for double-take and facebox myself to run them on an Unraid server.
But I think people use this because of the lack of a proper, usable alternative. At the moment it runs alongside Shinobi.

Double Take is only a workaround for a proper real-time solution. The main problem with Double Take is the significant time delay along the whole chain:

frigate -> people detection -> mqtt -> double-take -> [face-recognition engine] -> double-take -> HA -> door opener

It takes too long, around 5-10 s even on proper server hardware (incl. Google Coral and GPU support), which is impractical for real-time needs, in my case a door opener based on face recognition.
For this reason I still have Shinobi running, which detects faces in 1-2 s.

So it would be awesome to be able to hook some kind of face recognition into the Frigate person detection stream.

@blakeblackshear
Owner

Totally agree. I was just saying double take is the best option at the moment.

@ozett

ozett commented Nov 14, 2021

It looks to me like double-take processes Frigate's images, and the face recognition itself is done by different third-party "detectors".
In my experience with those detectors:

  • CompreFace does a good job of identifying learned, and therefore 'known', faces.
  • Deepstack is very good at recognizing faces in images (even small, hard-to-see faces), but not that good at identifying them.

There is room for more detectors via double-take.
There is room for lightweight custom-model integrations via double-take and Deepstack.

But having all of this (superb face detection, superb face recognition, custom-model detection) first-hand in Frigate would at least reduce the moving parts in this architecture. Let's wait and see the next release with these features... 👍

---edit:

For this reason, I have still Shinobi running, which detects faces in 1-2secs.

@corgan2222: do you have some more information about your implementation? I would be interested to compare it with my custom solution and my running double-take install.

@corgan2222
Author

corgan2222 commented Nov 18, 2021

For this reason, I have still Shinobi running, which detects faces in 1-2secs.

@corgan2222: do you have some more information about your implementation? I would be interested to compare it with my custom solution and my running double-take install.

That's pretty simple.

In HA, create MQTT sensors:

shinobi package:

###############################
#
# shinobi
###############################

sensor:

#trigger
- platform: mqtt
  name: front_Person
  state_topic: shinobi/kxZcRuJLJY/front/trigger
  value_template: "{{ value_json.details.matrices[0].id }}"
  icon: mdi:human-male-height 

#confidence
- platform: mqtt
  name: front_confidence
  state_topic: shinobi/kxZcRuJLJY/front/trigger
  value_template: "{{ value_json.details.matrices[0].confidence }}"
  icon: mdi:human-male-height 

- platform: mqtt
  name: cam.front.Motion_confidence
  state_topic: shinobi/kxZcRuJLJY/front/trigger
  value_template: >-
     {% if value_json.details.reason == 'motion'  %}
      {{ value_json.details.confidence | round(0) }}
     {% endif %}    
  icon: mdi:incognito
  payload_not_available: "0"  

- platform: mqtt
  name: cam.front.Motion_reason
  state_topic: shinobi/kxZcRuJLJY/front/trigger
  value_template: >-
    {% if value_json.details.reason == 'motion'  %}
      {{ value_json.details.reason }}
    {% endif %}    
  payload_not_available: ""
  icon: mdi:alert-circle

- platform: mqtt
  name: cam.front.Motion_region
  state_topic: shinobi/kxZcRuJLJY/front/trigger
  value_template: >-
    {% if value_json.details.reason == 'motion'  %}
      {{ value_json.details.name }}
    {% endif %}    
  payload_not_available: ""
  icon: mdi:drag-variant

- platform: mqtt
  name: cam.front.Motion_time
  state_topic: shinobi/kxZcRuJLJY/front/trigger
  value_template: >-
    {% if value_json.details.reason == 'motion'  %}
      {{ value_json.currentTimestamp }}
    {% endif %}   
  icon: mdi:av-timer
  device_class: timestamp




binary_sensor:

- platform: mqtt
  name: Front_Motion_Stefan
  state_topic: shinobi/kxZcRuJLJY/front/trigger
  payload_on: motion
  payload_off: no motion
  value_template: >-
    {% set shinobi = namespace(person_detected=false) %}
    {% for matrix in value_json.details.matrices %}
      {% if value_json.details.matrices[loop.index0].tag == 'Stefan'  %}
        {% set shinobi.person_detected = true %}
      {% endif %}
    {% endfor %}
    {% if shinobi.person_detected %}
        motion
    {% else %}
        no motion
    {% endif %}
  off_delay: 5
  expire_after: 10
  device_class: motion

- platform: mqtt
  name: Front_Motion_Stefan_delay
  state_topic: shinobi/kxZcRuJLJY/front/trigger
  payload_on: motion
  payload_off: no motion
  value_template: >-
    {% set shinobi = namespace(person_detected=false) %}
    {% for matrix in value_json.details.matrices %}
      {% if value_json.details.matrices[loop.index0].tag == 'Stefan'  %}
        {% set shinobi.person_detected = true %}
      {% endif %}
    {% endfor %}
    {% if shinobi.person_detected %}
        motion
    {% else %}
        no motion
    {% endif %}
  expire_after: 60
  off_delay: 60
  device_class: motion

automation:

- id: '1602485767351'
  alias: Face recognition - ESP Opener
  description: ''
  trigger:
  - platform: state
    entity_id: binary_sensor.front_motion_stefan
    to: 'on'
  - platform: state
    entity_id: binary_sensor.front_motion_steffi
    to: 'on'
  condition: []
  action:
  - type: turn_on
    device_id: 22c7bc60925d40cb8b32e026ecc7f7f2
    entity_id: switch.esp32_frontdoor_opener
    domain: switch
  - service: camera.snapshot
    data:
      entity_id: camera.front_person
      filename: /config/www/snapshots/door_open/door_open_{{ now().strftime("%Y%m%d-%H%M")}}.jpg
  - service: notify.slack
    data_template:
      message: '{{ trigger.to_state.name }} Known Face detected.  Open Door 
                Confidence: {{ states("sensor.front_confidence")}}                
                <https://URL/lovelace/esp|Homeassistant> | <https://URL/lovelace/cams|Image> | <https://URL/jXavSrgGOX4bH0BA9P6Oo4uIrQpbE7/mjpeg/kxZcRuJLJY/front |Stream> '
      title: '{{ trigger.to_state.name }} - Confidence: {{ states("sensor.front_confidence")}} '
      data:
        file:
          path: /config/www/snapshots/door_open/door_open_{{ now().strftime("%Y%m%d-%H%M")}}.jpg
  mode: single

Second security layer with ESPresense:

automation

- id: 'aut_face_rec-ble_front'
  alias: Face recognition and then BLE Tracker Stefan
  description: 'Open home door if face is detected and Bluetooth presence is confirmed'
  trigger:
  - platform: state
    entity_id: binary_sensor.Front_Motion_Stefan_delay
    to: 'on'
  condition:
    condition: and
    conditions:
      - condition: state
        entity_id: sensor.espresense_stefan_iphone
        state: "front"
      - condition: template
        value_template: '{{ (as_timestamp(now()) - as_timestamp(states.automation.face_recognition_and_then_ble_tracker.attributes.last_triggered | default(0)) | int > 30)}}'    
  action:
  - delay: '00:00:10'
  - service: switch.turn_on
    entity_id: switch.switch_smartlock_open
  - service: notify.slack
    data_template:
      message: 'Stefan detected with Face and iPhone. Open both doors!'

System resource comparison with only one camera:

[graph: system resource usage]

@ozett

ozett commented Nov 24, 2021

@corgan2222 Many thanks for this detailed information. 🤝
I have now started to look into it and will see whether I can use it in my environment.
(edit: a booster for presence detection, cool! https://github.com/ESPresense/ad-espresense-ips)

edit: wow, I discovered your two-factor auth.. great! That's something to rebuild at my doors!
message: 'Stefan detected with Face and iPhone. Open both doors!'

@Hukuma1

Hukuma1 commented Feb 23, 2022

Just got Frigate up and running and went down the rabbit hole of setting up Double Take with Deepstack as well. The facial recognition works pretty well if the camera source is big enough to capture faces. But if your cameras are set up high and in corners to cover the maximum field of view, sometimes with limited lighting (e.g. evening/dusk), the faces are very hard to read, let alone analyze and identify.

Is there any way to do body-type detection to aid in identifying the person? Or would this be too strenuous on the hardware? I have a small household and figured we could train our faces, but also our body types. I just want the "perfect" presence detection setup and thought facial recognition would be it, until I realized the input images the cameras are sending are probably too small to get fast and accurate results. To expand the box would mean to expand what you're scanning, e.g. to the body, right?

I'd even settle for MAN/WOMAN classification instead of just PERSON right now. Or is this already possible? Or is this basically the custom model training feature that may eventually arrive in Frigate? ;)

@ozett

ozett commented Feb 23, 2022

Man/woman could be differentiated via face detection -> male/female,
but good images of faces are needed, rather than just a person outline.

@j0rd

j0rd commented Apr 12, 2022

Facenet on Coral Dev Board github
https://github.com/terry-voth/facenet-on-coral-dev-board

This is a good starting place for others looking to get this directly into Frigate, as the code is also Python and it has some training folders. Currently it only distinguishes Drake vs. Matthew McConaughey. The author says it's not performant though, so some tweaks or rewrites are in order to make it more efficient.

Since I assume most people are interested in training for their family and some common neighbors / regular visitors, the amount of training data people will need won't be much.

Essentially you'd need to run face detection and store those images on your Frigate instance, then at some point go and label & group all the ones you want to train for, and finally train the model once you have enough data.
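
As a toy illustration of that first collect-and-store step, here is a Python sketch using OpenCV's bundled Haar cascade as a stand-in face detector (the RTSP URL and output directory are placeholders; a real setup would use Frigate's own detections instead):

import os
import time
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture("rtsp://192.168.2.30:8559/mystream")
os.makedirs("faces", exist_ok=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for i, (x, y, w, h) in enumerate(cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)):
        # Save each detected face crop with a timestamp for later labeling & grouping
        cv2.imwrite(f"faces/{int(time.time())}_{i}.jpg", frame[y:y+h, x:x+w])

cap.release()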

After that, I assume you'd want to be able to ignore or alert on particular face labels. Some might want to ignore family, for instance... except maybe on the front door camera.

Unfortunately there's a lot of UX/UI work that needs to get done for this if you want it in Frigate... but it might be abstracted into a more generic labeling & learning interface for training other models in the future.

@dennisoderwald

@blakeblackshear Have you worked on this again, or is there a timeline? :)

@NickM-27
Sponsor Collaborator

NickM-27 commented Sep 6, 2022

@blakeblackshear Have you worked on this again, or is there a timeline? :)

Frigate+ has a face label, so it will be able to recognize that a face is present. Among other uses, Double Take could be set to only fire on face, so it doesn't waste time trying to recognize a face when a person is not facing the camera.

To be clear, what is currently in Frigate+ is a general face label, not recognition of specific faces.

@felipecrs
Contributor

@NickM-27 I started experimenting with Double Take today with Frigate. I have some comments/suggestions:

  1. It would be really great if Frigate itself (not Frigate+) had support for such a face label, or even support for sub-labeling person with face when a face is detected by Frigate. That would reduce the amount of processing done by Double Take and its detectors on images that may not even contain a face.
  2. It would also be really helpful if frigate/<camera_name>/<object_name>/snapshot (MQTT) could be published more often, perhaps as soon as a higher-confidence frame is found, rather than only after the object is no longer detected.
    • With this, Double Take would not need to keep polling the latest and snapshot images.
  3. An option to tweak the camera MQTT config per object, as I don't need to publish higher-quality images for anything other than person.
  4. An option to set a global MQTT camera config, so that I don't need to set it per camera.

Probably not everything is worth implementing, but I just wanted to share my thoughts.

@NickM-27
Sponsor Collaborator

@NickM-27 I started experimenting with Double Take today with Frigate. I have some comments/suggestions:

  1. It would be really great if Frigate itself (not Frigate+) had support for such a face label.

I don't understand the distinction here. Something has to be able to detect face. That label is not in the COCO dataset, so it must come from some other model, like Frigate+.

Or even support for sub-labeling person with face when a face is detected by Frigate. That's to reduce the amount of processing done by Double Take and its detectors on images that may not even contain a face.

No need for a face sub-label; this data is already held in the attributes field in the DB and in the MQTT payload.

  2. It would also be really helpful if frigate/<camera_name>/<object_name>/snapshot (MQTT) could be published more often, perhaps as soon as a higher-confidence frame is found, rather than only after the object is no longer detected.

This is controlled by the best_image_timeout field.

@felipecrs
Contributor

Got it, it all makes sense. Thank you.

@mtthidoteu

It would be good if the face label were published somehow, so that we could use Frigate's face detection to find the faces, and Double Take would not need to check whether there is already a face in the image!

@NickM-27
Sponsor Collaborator

It is published on MQTT and in the events. I have an automation that does this already.
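
For reference, a minimal Python sketch of such an automation (the payload layout, "after" with a "current_attributes" list, matches recent Frigate versions, but treat the field names as assumptions for your install; the broker address is a placeholder):

import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    after = event.get("after", {})
    attrs = [a.get("label") for a in after.get("current_attributes", [])]
    if after.get("label") == "person" and "face" in attrs:
        # A face is visible on this tracked person: trigger Double Take, a light, etc.
        print(f"face visible on {after.get('camera')}")

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("frigate/events")
client.loop_forever()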
