Face Recognition using Tensorflow

This is a TensorFlow implementation of the face recognizer described in the paper "FaceNet: A Unified Embedding for Face Recognition and Clustering". The project also uses ideas from the paper "A Discriminative Feature Learning Approach for Deep Face Recognition" as well as the paper "Deep Face Recognition" from the Visual Geometry Group at Oxford.
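At its core, FaceNet maps each face image to a 128-dimensional embedding and trains with a triplet loss: an anchor embedding is pulled towards a positive (same identity) and pushed away from a negative (different identity) by at least a margin. The following is a minimal sketch of such a loss in TensorFlow; the function name and margin value are illustrative and not necessarily identical to this repository's code:

```python
import tensorflow as tf

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Triplet loss over batches of embeddings, shape (batch, embedding_dim).

    Encourages: ||anchor - positive||^2 + alpha < ||anchor - negative||^2
    """
    # Squared Euclidean distances between anchor/positive and anchor/negative.
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=1)

    # Hinge on the margin: only triplets that violate the margin contribute.
    basic_loss = pos_dist - neg_dist + alpha
    return tf.reduce_mean(tf.maximum(basic_loss, 0.0))
```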

Compatibility

The code is tested using Tensorflow r1.2 under Ubuntu 14.04 with Python 2.7 and Python 3.5. The test cases can be found here and the results can be found here.

News

| Date | Update |
|------|--------|
| 2017-05-13 | Removed a bunch of older non-slim models. Moved the last bottleneck layer into the respective models. Corrected normalization of Center Loss. |
| 2017-05-06 | Added code to train a classifier on your own images. Renamed facenet_train.py to train_tripletloss.py and facenet_train_classifier.py to train_softmax.py. |
| 2017-03-02 | Added pretrained models that generate 128-dimensional embeddings. |
| 2017-02-22 | Updated to Tensorflow r1.0. Added Continuous Integration using Travis-CI. |
| 2017-02-03 | Added models where only trainable variables have been stored in the checkpoint. These are therefore significantly smaller. |
| 2017-01-27 | Added a model trained on a subset of the MS-Celeb-1M dataset. The LFW accuracy of this model is around 0.994. |
| 2017-01-02 | Updated code to run with Tensorflow r0.12. Not sure if it runs with older versions of Tensorflow though. |

Pre-trained models

| Model name | LFW accuracy | Training dataset | Architecture |
|------------|--------------|------------------|--------------|
| 20170511-185253 | 0.987 | CASIA-WebFace | Inception ResNet v1 |
| 20170512-110547 | 0.992 | MS-Celeb-1M | Inception ResNet v1 |
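A pre-trained model can be used to compute embeddings for aligned face images and to compare faces by Euclidean distance between their embeddings. Below is a rough usage sketch; the file paths are placeholders, and the tensor names (input:0, embeddings:0, phase_train:0) are assumed from this repository's usual conventions:

```python
import tensorflow as tf
import numpy as np

# Placeholder paths; point these at an unpacked pre-trained model directory.
META_FILE = '20170512-110547/model.meta'
CKPT_FILE = '20170512-110547/model.ckpt'

with tf.Graph().as_default(), tf.Session() as sess:
    # Restore the graph definition and the trained weights.
    saver = tf.train.import_meta_graph(META_FILE)
    saver.restore(sess, CKPT_FILE)

    graph = tf.get_default_graph()
    images = graph.get_tensor_by_name('input:0')
    embeddings = graph.get_tensor_by_name('embeddings:0')
    phase_train = graph.get_tensor_by_name('phase_train:0')

    # Dummy batch of two aligned 160x160 RGB faces (replace with real images).
    batch = np.zeros((2, 160, 160, 3), dtype=np.float32)
    emb = sess.run(embeddings, feed_dict={images: batch, phase_train: False})

    # Smaller distance means the two faces are more likely the same person.
    print('Distance: %.4f' % np.linalg.norm(emb[0] - emb[1]))
```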

Inspiration

The code is heavily inspired by the OpenFace implementation.

Training data

The CASIA-WebFace dataset has been used for training. This training set consists of a total of 453 453 images over 10 575 identities after face detection. Some performance improvement has been seen if the dataset has been filtered before training. Some more information about how this was done will come later. The best performing model has been trained on a subset of the MS-Celeb-1M dataset.