[FR] Face recognition #260
It's possible, but probably not going to be a focus in the near future. |
How about sending the detected person image from MQTT to a second project like https://github.com/JanLoebel/face_recognition using something like NodeRed to glue it all together? Only suggest it because I'm going to look at doing the same thing. Haven't evaluated that project yet so there may be better ones out there. |
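The glue step can be sketched in a few lines. This is a hypothetical example: the `frigate/events` MQTT payload layout follows Frigate's events topic, but the snapshot URL pattern and the downstream face-recognition endpoint are assumptions you would need to adapt to your setup:

```python
import json

# Hypothetical glue: given a Frigate MQTT event payload, build the snapshot
# URL to forward to a face-recognition service. The /api/<camera>/person/
# latest.jpg path is an assumption based on Frigate's snapshot API.
def build_forward_request(event_payload, frigate_base):
    event = json.loads(event_payload)
    after = event.get("after", {})
    if after.get("label") != "person":
        return None  # only forward person detections
    camera = after["camera"]
    return {
        "camera": camera,
        "snapshot_url": f"{frigate_base}/api/{camera}/person/latest.jpg",
    }
```

A Node-RED flow would do the same thing: subscribe to `frigate/events`, filter on `label == "person"`, then HTTP-POST the snapshot to the recognition service.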
This may work, but I'm using face recognition to open our front door. Even with Shinobi handling this directly, the system sometimes needs several seconds; I'm afraid that adding one more step would make the delay worse. Ideally I want only one tool for all the work (motion, people, faces), which is why I asked in the first place. ;)
So Frigate already accepts custom models, and there are several TFLite ones for facial recognition. IMHO, if you are able to cross-train a model with your faces, this should already work with the current code: instead of person or car, you would have the persons you trained as object labels. Since I assume you don't want to open the door to just any human face, some training would be needed in any case, and given the limitations of e.g. the Coral detector, it would need to be carried out on another, preferably GPU-powered, platform anyway.
I have never trained my own custom model, but I like the idea. Maybe I'll give it a try. The nice thing about Shinobi is that YOLO, face recognition, and FFmpeg all run on the GPU inside an unRAID Docker container. But I also have a Google Coral.
A good starting point would actually be the coral.ai homepage itself. Have a look at https://coral.ai/examples/. There are also examples of cross-training for object recognition, to get everything set up and familiarise yourself with the topic. For the face recognition part I had some success with this tutorial, which is for TensorFlow (GPU/CPU) and would need to be converted to run on the Coral (TFLite format). It's been a while since I looked into this, but it seems people got MobileFaceNet to run on the Coral, so it's possible.
Thanks a lot! Will give this a try on the weekend. |
@corgan2222 Did you have a chance to try to train your own model for face recognition? Can you share how to do it? |
Also interested. But I never really tried to train a net, probably because of a lack of time. :(
No, I didn't try again after looking into it. :(
Still very interested in face recognition. Is there any chance on getting this kind of feature? |
Definitely. I will be looking at this with custom models soon. |
Sounds great! I now have a trained model, but I'm not sure about the existing models. Do I have to combine it with the existing one to keep detecting persons/cars etc.?
Yes |
I tried last night, but I'm stuck and have no clue how to solve it. Here's what I've done so far:
Then I changed the docker-compose file to mount the new model and the labelmap.
docker-compose
Config
Docker Log
converted_tflite_quantized.zip: here are the models for TFLite.
Custom models must match the input and output shapes that frigate expects. You need to compare your model to the one frigate uses by default to see what is different. |
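A quick way to compare is to inspect both models' tensor shapes. A minimal sketch follows; the expected input shape here is an assumption based on the 300x300 SSD-style models, so check your actual default model rather than trusting this value:

```python
# Sketch: verify a custom .tflite model's input shape against what the
# detector expects. EXPECTED_INPUT is an assumed value (300x300 SSD-style
# model); read the real shape from the default model you are replacing.
EXPECTED_INPUT = (1, 300, 300, 3)

def input_shape_matches(candidate_shape, expected=EXPECTED_INPUT):
    # Compare as tuples so lists and numpy arrays both work.
    return tuple(candidate_shape) == tuple(expected)

# With tflite_runtime installed, you could read the shapes like this:
# from tflite_runtime.interpreter import Interpreter
# interp = Interpreter(model_path="custom_model.tflite")
# interp.allocate_tensors()
# print(interp.get_input_details()[0]["shape"])        # e.g. [1, 300, 300, 3]
# print([d["shape"] for d in interp.get_output_details()])
```

Run the same inspection on the default model and on your custom one; any mismatch in input or output shapes is the first thing to fix.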
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
frigate.edgetpu INFO : No EdgeTPU detected. I had this error when trying this; it didn't appear to detect/find the Coral TPU. I don't have one, but it went away as soon as I told it to use the CPU.
Did you ever get this working. I'm interested to do something similar. Thanks |
I have not tried this again since this post, mainly because of a lack of time. I hope that @blakeblackshear implements some kind of face recognition support someday.
An interesting project based on Frigate is https://github.com/jakowenko/double-take . I don't know if blakeblackshear is aware. I'm testing it on a Jetson Nano.
100% |
I had a quick look at how Double Take works. I was going to do something similar myself with Azure Face Recognition and Node-RED, but decided not to bother in the end, as we would both have the same issue: the snapshot that Frigate sends may not be optimised for facial recognition. Frigate sends you the first and subsequent higher-percentage "person" detections. Quite often part of my porch door hides a person's face, or the capture is from too far away on my garage camera and I would need to wait until they got closer. It's obviously better than nothing, but it would be better to have something that could listen for the person-detected event and then do its own face processing on the video feed rather than the snapshot, so it can get the best image for faces.
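The trigger side of that idea is simple; the hard part is grabbing better frames afterwards. A minimal sketch of the trigger, assuming Frigate's `frigate/events` MQTT topic and a paho-mqtt client (payload layout assumed from those event messages):

```python
import json

def should_process_faces(payload):
    """Return True for Frigate events that should kick off face processing
    on the live feed (payload layout assumed from frigate/events messages)."""
    event = json.loads(payload)
    after = event.get("after", {})
    return after.get("label") == "person" and event.get("type") in ("new", "update")

# Wiring it up (sketch; needs paho-mqtt, a broker, and your own
# start_face_capture() that pulls frames from the camera stream):
# import paho.mqtt.client as mqtt
# client = mqtt.Client()
# client.on_message = lambda c, u, m: start_face_capture() if should_process_faces(m.payload) else None
# client.connect("mqtt.local")
# client.subscribe("frigate/events")
# client.loop_forever()
```

From there the face-capture step would read the RTSP feed directly and keep sampling frames until a usable face appears, instead of relying on the single snapshot.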
Personally I use the person motion detection to trigger a Home Assistant integration that calls Deepstack Face which is GPU accelerated. With the current shortage of Edge TPUs I'll keep FR outside of Frigate and will try to help on the GPU side of things as Frigate is now. |
The benefit of doing this directly in frigate will be efficiency for the same reasons I combine motion detection and object detection in the same process. With something like double take, the image is converted from raw pixel data to jpeg, transmitted over the network, decoded from jpeg to raw pixel data again, copied into a new memory location, and then face recognition is done. When this is added directly to frigate, the face recognition will leverage the already decoded raw pixel data and cut out all that extra processing. I will also be able to add faces to the object detection models to ensure I get the best face snapshot. Once I recognize a face, I will be able to connect it to the tracked object and the event itself. This allows me to stop trying to recognize faces for already identified person objects and use multiple consecutive frames to combine scores and get better accuracy. |
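The multi-frame idea is easy to illustrate. A toy sketch of combining per-frame recognition confidences; the averaging scheme here is hypothetical, not Frigate's actual implementation:

```python
from collections import defaultdict

def combine_face_scores(frame_results):
    """Average each identity's confidence across consecutive frames so one
    blurry frame doesn't decide the label. frame_results is a list of
    {name: confidence} dicts, one per frame."""
    totals, counts = defaultdict(float), defaultdict(int)
    for scores in frame_results:
        for name, conf in scores.items():
            totals[name] += conf
            counts[name] += 1
    averaged = {name: totals[name] / counts[name] for name in totals}
    best = max(averaged, key=averaged.get)
    return best, averaged

# Three consecutive frames of the same tracked person object:
frames = [
    {"stefan": 0.62, "steffi": 0.30},
    {"stefan": 0.71},
    {"stefan": 0.55, "steffi": 0.41},
]
best, avg = combine_face_scores(frames)
```

Once a tracked object has a confident identity, recognition can stop for that object, which is exactly the efficiency win described above.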
You are absolutely correct, @blakeblackshear. Does this mean you are going to implement face recognition in Frigate at some point? That would be awesome!
Same. I am desperate for face recognition that works. I need it for our indoor home camera setup; the use case is to keep an eye on a very elderly man with many health issues. Having face recognition would cut out a huge number of alerts from the home cameras as they see us walking about. I am happy to beta test.
Ohhh, I am so looking forward to that. Maybe I would try automated attendance for our employee access: no entrance password, no NFC tags, no fingerprint. Just blink and smile. xD
This would be amazing! I am currently using Facebox and node-red on top of Frigate for face recognition when it detects a person. |
hi to all! any news about face recognition in Frigate? |
Most people are using it in combination with double take for face recognition |
@blakeblackshear What about face detection? That would be useful for a few use cases on my end, such as triggering a Double Take scan when a face is detected in a certain zone, for facial recognition that lets you in by itself.
That will be implemented directly in frigate when I roll out a custom model in a future release. |
Can we hope to see this in 10.0? |
Yes, I know. But as you told us in a previous comment, "The benefit of doing this directly in frigate will be efficiency" :-) |
No, but I am aiming to get it out this year. |
@blakeblackshear Yes, you are right: most people, including myself, use Double Take for face recognition. But Double Take is only a workaround for a proper real-time solution. The main problem with Double Take is the significant time delay. The whole chain, Frigate -> person detection -> MQTT -> Double Take -> [face-recognition engine] -> Double Take -> HA -> door opener, takes too long: around 5-10 s (on proper server hardware incl. Google Coral and GPU support), which is impractical for real-time needs, in my case a door opener based on face recognition. So it would be awesome to have some way of hooking face recognition into Frigate's person detection stream.
Totally agree. I was just saying double take is the best option at the moment. |
It looks to me like Double Take processes Frigate's images, and face recognition is done with various third-party "detectors".
There is room for more detectors via Double Take, but having all of it (face detection, face recognition, custom-model detection) first-hand in Frigate would at least reduce the number of parts in this architecture. Let's wait and see the next release with these features... 👍
edit: @corgan2222: do you have some more information about your implementation? I would be interested to compare it with my custom solution and my running Double Take install.
That's pretty simple.
In HA, create MQTT sensors (shinobi package):

###############################
#
# shinobi
###############################
sensor:
#trigger
- platform: mqtt
name: front_Person
state_topic: shinobi/kxZcRuJLJY/front/trigger
value_template: "{{ value_json.details.matrices[0].id }}"
icon: mdi:human-male-height
#confidence
- platform: mqtt
name: front_confidence
state_topic: shinobi/kxZcRuJLJY/front/trigger
value_template: "{{ value_json.details.matrices[0].confidence }}"
icon: mdi:human-male-height
- platform: mqtt
name: cam.front.Motion_confidence
state_topic: shinobi/kxZcRuJLJY/front/trigger
value_template: >-
{% if value_json.details.reason == 'motion' %}
{{ value_json.details.confidence | round(0) }}
{% endif %}
icon: mdi:incognito
payload_not_available: "0"
- platform: mqtt
name: cam.front.Motion_reason
state_topic: shinobi/kxZcRuJLJY/front/trigger
value_template: >-
{% if value_json.details.reason == 'motion' %}
{{ value_json.details.reason }}
{% endif %}
payload_not_available: ""
icon: mdi:alert-circle
- platform: mqtt
name: cam.front.Motion_region
state_topic: shinobi/kxZcRuJLJY/front/trigger
value_template: >-
{% if value_json.details.reason == 'motion' %}
{{ value_json.details.name }}
{% endif %}
payload_not_available: ""
icon: mdi:drag-variant
- platform: mqtt
name: cam.front.Motion_time
state_topic: shinobi/kxZcRuJLJY/front/trigger
value_template: >-
{% if value_json.details.reason == 'motion' %}
{{ value_json.currentTimestamp }}
{% endif %}
icon: mdi:av-timer
device_class: timestamp
binary_sensor:
- platform: mqtt
name: Front_Motion_Stefan
state_topic: shinobi/kxZcRuJLJY/front/trigger
payload_on: motion
payload_off: no motion
value_template: >-
{% set shinobi = namespace(person_detected=false) %}
{% for matrix in value_json.details.matrices %}
{% if value_json.details.matrices[loop.index0].tag == 'Stefan' %}
{% set shinobi.person_detected = true %}
{% endif %}
{% endfor %}
{% if shinobi.person_detected %}
motion
{% else %}
no motion
{% endif %}
off_delay: 5
expire_after: 10
device_class: motion
- platform: mqtt
name: Front_Motion_Stefan_delay
state_topic: shinobi/kxZcRuJLJY/front/trigger
payload_on: motion
payload_off: no motion
value_template: >-
{% set shinobi = namespace(person_detected=false) %}
{% for matrix in value_json.details.matrices %}
{% if value_json.details.matrices[loop.index0].tag == 'Stefan' %}
{% set shinobi.person_detected = true %}
{% endif %}
{% endfor %}
{% if shinobi.person_detected %}
motion
{% else %}
no motion
{% endif %}
expire_after: 60
off_delay: 60
    device_class: motion

automation:
  - id: '1602485767351'
alias: Face recognition - ESP Opener
description: ''
trigger:
- platform: state
entity_id: binary_sensor.front_motion_stefan
to: 'on'
- platform: state
entity_id: binary_sensor.front_motion_steffi
to: 'on'
condition: []
action:
- type: turn_on
device_id: 22c7bc60925d40cb8b32e026ecc7f7f2
entity_id: switch.esp32_frontdoor_opener
domain: switch
- service: camera.snapshot
data:
entity_id: camera.front_person
filename: /config/www/snapshots/door_open/door_open_{{ now().strftime("%Y%m%d-%H%M")}}.jpg
- service: notify.slack
data_template:
message: '{{ trigger.to_state.name }} Known Face detected. Open Door
Confidence: {{ states("sensor.front_confidence")}}
<https://URL/lovelace/esp|Homeassistant> | <https://URL/lovelace/cams|Image> | <https://URL/jXavSrgGOX4bH0BA9P6Oo4uIrQpbE7/mjpeg/kxZcRuJLJY/front |Stream> '
title: '{{ trigger.to_state.name }} - Confidence: {{ states("sensor.front_confidence")}} '
data:
file:
path: /config/www/snapshots/door_open/door_open_{{ now().strftime("%Y%m%d-%H%M")}}.jpg
  mode: single

Second security layer with ESPresense:

automation:
  - id: 'aut_face_rec-ble_front'
alias: Face recognition and then BLE Tracker Stefan
description: 'Open the home door if a face is detected and Bluetooth is present'
trigger:
- platform: state
entity_id: binary_sensor.Front_Motion_Stefan_delay
to: 'on'
condition:
condition: and
conditions:
- condition: state
entity_id: sensor.espresense_stefan_iphone
state: "front"
- condition: template
value_template: '{{ (as_timestamp(now()) - as_timestamp(states.automation.face_recognition_and_then_ble_tracker.attributes.last_triggered | default(0)) | int > 30)}}'
action:
- delay: '00:00:10'
- service: switch.turn_on
entity_id: switch.switch_smartlock_open
- service: notify.slack
data_template:
message: 'Stefan detected with Face and iPhone. Open both doors!'
System resource comparison with only one camera:
@corgan2222 Many thanks for this detailed information. 🤝 edit: wow, I discovered your two-factor auth... great! That's something to rebuild at my doors!
Just got Frigate up and running and went down the rabbit hole of setting up Double Take with DeepStack as well. The facial recognition works pretty well, if the camera source is big enough to capture faces. But if your cameras are set up higher and in corners to cover the maximum field of view, sometimes with limited lighting (e.g. evening/dusk), the faces are very hard to read, let alone analyze and identify. Is there any way to do body-type detection to aid in identifying the person, or would this be too strenuous on the hardware? I have a small household and figured we could train on our faces, but also on body types. I just want the "perfect" presence detection setup and thought facial recognition would be it, until I realized the input images the cameras are sending are probably too small to get fast and accurate results. To expand the box would mean to expand what you're scanning, e.g. the body, right? I'd even settle for MAN/WOMAN classification instead of just PERSON right now. Or is this already possible? Or is this basically the custom model training feature that may eventually arrive in Frigate? ;)
Man/woman could be differentiated via face detection -> male/female.
Facenet on Coral Dev Board (GitHub) is a good starting place for others looking to get this directly into Frigate, as the code is also Python and it has some training folders. Currently it only distinguishes Drake vs. Matthew McConaughey. The author says it's not performant, though, so some tweaks or rewrites are in order to make it more efficient. Since I assume most people are interested in training for their family and some common neighbors / regular visitors, the amount of training data people will need won't be much. Essentially you'd need to run face detection and store those images on your Frigate instance, then at some point label and group the ones you want to train for, and train the model once you have enough data. After that, I assume you want to be able to ignore or alert on particular face labels; some might want to ignore family, for instance, except maybe on the front door camera. There's a lot of UX/UI work that needs to be done for this, unfortunately, if you want it in Frigate, but it might be abstracted into a more generic labeling and learning interface for training other models in the future.
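The label-and-train loop described above ultimately reduces to comparing face embeddings against stored references. A minimal cosine-similarity matcher, assuming you already have embeddings (e.g. from a FaceNet-style model); the threshold value is an arbitrary placeholder:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(embedding, known_faces, threshold=0.6):
    """Return the best-matching label above threshold, else None.
    known_faces maps labels to reference embeddings; the 0.6 threshold
    is a placeholder you would tune for your model."""
    best_label, best_sim = None, threshold
    for label, ref in known_faces.items():
        sim = cosine_similarity(embedding, ref)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label
```

Labels that come back as None would be the "unknown face" alerts; known labels could be ignored or alerted on per camera, as described above.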
@blakeblackshear Have you worked on this again, or is there a timeline? :)
Frigate+ has a face label. To be clear, what is currently in Frigate+ is a general face label, not specific face recognition.
@NickM-27 I started experimenting with Double Take today with Frigate. I have some comments/suggestions:
Probably not everything is worth implementing, but I just wanted to share my thoughts. |
I don't understand the distinction here. Something has to be able to detect
No need for a face sub label, this data is already held in the
This is controlled by the best_image_timeout field |
Got it, it all makes sense. Thank you. |
It would be good if the face label was published somehow, so that we could use Frigate's facial detection to detect the faces, and double take would not need to check if there is already a face on the image! |
It is published on mqtt and in the events. I have an automation that does this already |
Is there any chance of implementing face recognition?
At the moment I have Shinobi running for that, but it's really outdated and lacks features.
Having face recognition built into Frigate would be awesome!
Thanks
https://missinglink.ai/guides/tensorflow/tensorflow-face-recognition-three-quick-tutorials/
https://github.com/search?q=Face+Recognition+Tensorflow