This repository contains a set of experiments for liveness detection.
Why do you need an algorithm that can perform liveness detection? Suppose you have a face recognition system and a user deliberately tries to circumvent it. Such a user could hold up a photo of another person, or even a photo or video on their smartphone, to the camera responsible for performing face recognition.
In those situations it is entirely possible for the face held up to the camera to be correctly recognized, ultimately allowing an unauthorized user to bypass your face recognition system!
**Real person** vs. **fake person** (example images)
There are a number of approaches to liveness detection, including:
- Texture analysis, including computing Local Binary Patterns (LBPs) over face regions and using an SVM to classify the faces as real or spoofed (see the first sketch after this list).
- Frequency analysis, such as examining the Fourier domain of the face (second sketch below).
- Variable focusing analysis, such as examining the variation of pixel values between two consecutive frames.
- Heuristic-based algorithms, including eye movement, lip movement, and blink detection. These algorithms attempt to track eye movement and blinks to ensure the user is not holding up a photo of another person (since a photo will not blink or move its lips).
- Optical flow algorithms, namely examining the differences and properties of optical flow generated from 3D objects and 2D planes (third sketch below).
- 3D face shape, similar to what is used in Apple's iPhone face recognition system, enabling the face recognition system to distinguish between real faces and printouts/photos/images of another person.
- Deep learning approaches: building a network adapted for liveness detection, or using transfer learning.
- Combinations of the above, enabling a face recognition system engineer to pick and choose the liveness detection models appropriate for their particular application.
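As an illustration of the texture-analysis approach, here is a minimal sketch using scikit-image and scikit-learn; the LBP parameters (P=8, R=1, "uniform") and the linear SVM are illustrative assumptions, not the exact recipe used by the scripts in this repository.

```python
# Sketch: LBP histogram per grayscale face crop + linear SVM (illustrative).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face, P=8, R=1):
    # "uniform" LBP produces P + 2 distinct codes; the normalized
    # histogram of those codes is the texture descriptor
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# faces: list of grayscale face crops, labels: 1 = real, 0 = spoofed
# X = np.array([lbp_histogram(f) for f in faces])
# clf = SVC(kernel="linear").fit(X, labels)
```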
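For the frequency-analysis idea, one possible sketch is to measure how much spectral energy lies outside a low-frequency disc; the radius and the ratio statistic are illustrative choices, not a published recipe.

```python
# Sketch: high-frequency energy ratio of a face crop's Fourier spectrum.
import numpy as np

def high_frequency_ratio(gray_face, radius=10):
    magnitude = np.abs(np.fft.fftshift(np.fft.fft2(gray_face)))
    h, w = magnitude.shape
    yy, xx = np.ogrid[:h, :w]
    low_mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    # Printed or replayed faces tend to lose fine (high-frequency) detail
    return magnitude[~low_mask].sum() / magnitude.sum()
```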
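And for the optical-flow approach, a sketch using OpenCV's dense Farneback flow; the flatness statistic (standard deviation of flow magnitudes) is an illustrative proxy for the 2D-plane-vs-3D-object distinction.

```python
# Sketch: dense optical flow between two consecutive grayscale frames.
import cv2
import numpy as np

def flow_magnitude_std(prev_gray, gray):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Flow from a flat 2-D replay tends to be more uniform than
    # flow generated by a moving 3-D face
    return float(np.std(mag))
```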
- run `1_extractFaces.py` - extracts the faces from the test images (see the face-detection sketch after this list)
- Feature extraction:
  - run `2_computeHaralickFeatures.py` - extracts the Haralick features; see https://iab-rubric.org/papers/BTAS16-Anti-Spoofing.pdf for more details (sketch below)
  - run `2_computeBoWFeatures.py --dictionarySize 512 --descriptorType SIFT` - extracts Bag of Words features (expects two parameters: descriptor type and dictionary size; sketch below)
  - run `2_computeHofFeatures.py` - extracts the HoG features
  - run `2_computeLBPFeatures.py` - extracts the LBP features
- Test features:
  - run `3_testFeatures.py` - computes the accuracy for each feature/classifier pair (Nearest Neighbors, SVM, SGD, Naive Bayes, Decision Trees, AdaBoost, Gradient Boosting, Random Forest, Extremely Randomized Trees); a sketch of the loop follows this list
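The face-extraction step likely amounts to detecting and cropping faces; here is a minimal sketch with OpenCV's bundled Haar cascade (the actual detector used by `1_extractFaces.py` is an assumption).

```python
# Sketch: detect and crop faces with OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_faces(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [img[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```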
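For the Haralick step, the mahotas library provides the standard 13 GLCM-based texture features; averaging over the four co-occurrence directions is a common choice, though whether `2_computeHaralickFeatures.py` does exactly this is an assumption.

```python
# Sketch: 13 Haralick texture features, averaged over the 4 GLCM directions.
import mahotas

def haralick_features(gray_face):
    # mahotas returns a 4x13 matrix (one row per co-occurrence direction);
    # gray_face must be an integer-valued (e.g. uint8) image
    return mahotas.features.haralick(gray_face).mean(axis=0)
```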
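The Bag of Words step can be sketched as SIFT descriptors quantized against a K-means vocabulary; the parameters mirror the CLI flags above (`--descriptorType SIFT`, `--dictionarySize 512`), but the clustering details are assumptions. Note that `cv2.SIFT_create` requires OpenCV >= 4.4 (earlier versions expose SIFT via `cv2.xfeatures2d`).

```python
# Sketch: Bag-of-Words descriptor from SIFT features + K-means vocabulary.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

sift = cv2.SIFT_create()

def build_vocabulary(gray_images, dictionary_size=512):
    descriptors = []
    for img in gray_images:
        _, desc = sift.detectAndCompute(img, None)
        if desc is not None:
            descriptors.append(desc)
    return MiniBatchKMeans(n_clusters=dictionary_size).fit(np.vstack(descriptors))

def bow_histogram(gray_image, vocabulary):
    _, desc = sift.detectAndCompute(gray_image, None)
    words = vocabulary.predict(desc)
    hist = np.bincount(words, minlength=vocabulary.n_clusters)
    return hist / max(hist.sum(), 1)  # normalized word-frequency histogram
```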
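Finally, a hedged sketch of what the feature test could look like: fitting each scikit-learn classifier named above on one feature set and reporting hold-out accuracy (the split and default hyperparameters are illustrative).

```python
# Sketch: hold-out accuracy for each classifier on one feature set.
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, ExtraTreesClassifier)

CLASSIFIERS = {
    "Nearest Neighbors": KNeighborsClassifier(),
    "SVM": SVC(),
    "SGD": SGDClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(),
    "AdaBoost": AdaBoostClassifier(),
    "Gradient Boosting": GradientBoostingClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Extremely Randomized Trees": ExtraTreesClassifier(),
}

def evaluate(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=42)
    for name, clf in CLASSIFIERS.items():
        print(f"{name}: {clf.fit(X_tr, y_tr).score(X_te, y_te):.3f}")
```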
- Create a net from scratch:
  - run `extract_faces.py` - detects the faces in each image of the dataset and saves them to the `face_dataset` folder
  - run `train_livenessNet.py` - trains the network on the face images extracted before, classifying them as real or spoofed (fake); see the sketch after this list
  - run `test_livenessNet.py` - tests and uses the trained model
  - Note: run `augment_img.py` if augmented data is needed; the Augmentor library is used (second sketch below)
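As a hedged illustration of the from-scratch network, here is a small binary-classification CNN in Keras; the layer sizes and input shape are assumptions, not necessarily the architecture defined in `train_livenessNet.py`.

```python
# Sketch: a small CNN for real-vs-spoofed classification (illustrative).
from tensorflow.keras import layers, models

def build_liveness_net(input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # real vs. spoofed
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```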
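And a sketch of what the augmentation step could look like with the Augmentor library; the specific operations and sample count are assumptions about `augment_img.py`.

```python
# Sketch: augmenting the extracted face crops with Augmentor.
import Augmentor

p = Augmentor.Pipeline("face_dataset")  # folder produced by extract_faces.py
p.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
p.flip_left_right(probability=0.5)
p.zoom_random(probability=0.5, percentage_area=0.9)
p.sample(5000)  # number of augmented images to generate
```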
- Transfer learning with MobileNet and ResNet50:
  Using Keras MobileNet v2 (see the sketch after this list):
  - run `mobile_net.py` to train the model
    - We are not using weights pre-trained on ImageNet; all layers are trainable (`layer.trainable = True`)
    - Using the Adam optimizer
    - The loss function is binary cross-entropy, since we only have two classes, real and fake
  - run `test_model.py` to run the model on new images and test its performance
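A minimal sketch matching the settings above (MobileNetV2 without ImageNet weights, every layer trainable, Adam, binary cross-entropy); the input size and classification head are assumptions.

```python
# Sketch: MobileNetV2 trained from scratch for real-vs-spoofed classification.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

base = MobileNetV2(input_shape=(224, 224, 3), include_top=False, weights=None)
for layer in base.layers:
    layer.trainable = True  # no frozen layers: the whole net is trained

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # real vs. fake
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```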
Liveness CNN Results

Confusion Matrix (rows: actual class, columns: predicted class)

| | Spoofed | Real |
|---|---|---|
| Spoofed | 3960 | 353 |
| Real | 406 | 3509 |

| | Precision | Recall |
|---|---|---|
| Spoofed | 0.91 | 0.92 |
| Real | 0.91 | 0.90 |
MobileNet Results

Confusion Matrix

| | Spoofed | Real |
|---|---|---|
| Spoofed | 3898 | 415 |
| Real | 802 | 3113 |

| | Precision | Recall |
|---|---|---|
| Spoofed | 0.84 | 0.90 |
| Real | 0.88 | 0.84 |
Packages
- tensorflow version 2.2.0
- keras version 2.4.3