CN111222433A - Automatic face auditing method, system, equipment and readable storage medium - Google Patents

Automatic face auditing method, system, equipment and readable storage medium

Info

Publication number
CN111222433A
Authority
CN
China
Prior art keywords
face
picture
quality
automatic
pictures
Prior art date
Legal status
Granted
Application number
CN201911387766.9A
Other languages
Chinese (zh)
Other versions
CN111222433B (en)
Inventor
刘小扬 (Liu Xiaoyang)
王心莹 (Wang Xinying)
王欢 (Wang Huan)
徐小丹 (Xu Xiaodan)
黄泽斌 (Huang Zebin)
Current Assignee
Newland Digital Technology Co., Ltd.
Original Assignee
Newland Digital Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Newland Digital Technology Co., Ltd.
Priority to CN201911387766.9A
Publication of CN111222433A
Application granted
Publication of CN111222433B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/26 Government or public services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Tourism & Hospitality (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Primary Health Care (AREA)
  • Evolutionary Biology (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Educational Administration (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic face auditing method, which comprises the following steps: face detection, in which the coordinates of a face bounding rectangle and the coordinates of face key points are detected by a cascaded neural network algorithm; face quality evaluation and screening, in which face quality is evaluated over several quality attributes of the face pictures and high-quality pictures are retained; liveness detection, in which a dual-stream convolutional neural network detects whether a picture shows a real person and pictures judged not to show a real person are filtered out; and face comparison and authentication, in which the face feature vector of the picture is extracted, its similarity to the face feature vector of a standard picture is compared, and the comparison result is output. The invention fully automates picture auditing: no manual operation is needed, labor costs are reduced, pictures with low similarity are filtered out, and human error is reduced. In addition, the invention uses a cascaded picture-filtering scheme, which is fast.

Description

Automatic face auditing method, system, equipment and readable storage medium
Technical Field
The invention relates to the technical field of image recognition and processing, and in particular to an automatic face auditing method, system, equipment and readable storage medium.
Background
With the rapid development of mobile devices and the mobile internet, many government and enterprise platforms have kept pace with the times and integrated their offline services with online services so that the public can handle their affairs more conveniently. For example, local governments are accelerating the construction of online government service platforms, and mobile phone operators have opened online business halls. Many online services require photos to be uploaded and archived as specified: mobile phone operators require a personal photo and photos of the front and back of the identity card for online mobile number registration, and services such as Tmall and Taobao require applicants who open their own shops to upload a photo of themselves holding their identity card. In the prior art, the personal photos uploaded by users are mainly reviewed and judged manually. However, relying on manual work to check whether the uploaded personal photos are compliant, whether the person matches the submitted ID, and so on, yields low working efficiency and high labor costs, inevitably introduces human error, and cannot give the user timely feedback.
Disclosure of Invention
The invention aims to provide a fast and efficient automatic face auditing method, system, equipment and readable storage medium.
To solve the above technical problems, the technical scheme of the invention is as follows:
In a first aspect, the invention provides an automatic face auditing method, which comprises the following steps:
face detection, namely detecting the coordinates of a face bounding rectangle and the coordinates of face key points through a cascaded neural network algorithm;
face quality evaluation and screening, namely evaluating face quality over several quality attributes of the face pictures and retaining high-quality pictures;
liveness detection, namely detecting with a dual-stream convolutional neural network whether the picture shows a real person, and filtering out pictures judged not to show a real person;
face comparison and authentication, namely extracting the face feature vector of the picture, comparing its similarity with the face feature vector of a standard picture, and filtering out pictures with low similarity.
Preferably, before face detection, the method further comprises filtering out pictures whose resolution is outside a threshold range.
Preferably, after face detection, the method further comprises:
filtering out pictures in which the proportion of the face rectangle size to the picture size is lower than a threshold;
filtering out pictures in which the distance between the two eyes of the face is lower than a threshold.
Preferably, after face detection, the method further comprises:
face alignment, namely calculating a transformation matrix between the face key point coordinates of a picture and the key point coordinates of a pre-stored standard face, and applying the transformation matrix to the picture to obtain an aligned face image.
Preferably, the face comparison and authentication process is as follows:
extracting features from the high-quality picture using a 50-layer ResNet neural network, which outputs a 512-dimensional floating-point vector recorded as the face feature vector;
comparing the similarity between the face feature vector of the current picture and that of the standard picture by the formula
$$\mathrm{sim}(S_i, S_j) = \frac{S_i \cdot S_j}{\|S_i\| \, \|S_j\|},$$
where $S_i$ is the face feature vector of the current picture and $S_j$ is the face feature vector of the standard picture;
if the similarity is lower than a threshold, the person is judged not to match the ID photo; if the similarity is higher than the threshold, the person is judged to match.
Preferably, the quality attributes used for face quality evaluation comprise face pose, eye state, mouth state, makeup state, overall brightness, left and right face brightness difference, blurriness and occlusion;
for face pose, eye state, mouth state, makeup state, blurriness and occlusion, a multi-task convolutional neural network is built with a MobileFaceNet structure as the backbone, and the several task outputs correspond to the respective quality attributes of the face.
Eye state, mouth state, makeup state and face occlusion are classification tasks, which adopt a softmax loss function as the objective function;
face pose, image brightness and image blurriness are regression tasks, which adopt a Euclidean loss function as the objective function;
the total objective function of the network training comprises a combination of several softmax loss functions and Euclidean loss functions; when the tasks are learned jointly, the total objective function is a linear combination of the individual loss functions.
Preferably, one liveness detection process is:
acquiring a depth image, and normalizing the face region in the picture to obtain a processed face depth image;
inputting a preset number of frames of the RGB face image of a face ID, together with the corresponding face depth images, into a deep learning network for detection to obtain a liveness judgment for each frame;
voting over all liveness judgments of the face ID: when live frames are in the majority, the subject is determined to be live; when attack frames are in the majority, the subject is determined to be non-live.
Preferably, another liveness detection process is:
cropping the face from the original image, converting the RGB channels into the HSV and YCbCr spaces, and superimposing the converted HSV and YCbCr images to obtain a superimposed map; extracting Sobel features from the face region with a Sobel operator to obtain a Sobel feature map;
inputting the Sobel feature maps and the superimposed maps of a preset number of frames of the face ID into the two input channels of the dual-stream neural network respectively, to obtain a liveness judgment for each frame;
voting over all liveness judgments of the face ID: when live frames are in the majority, the subject is determined to be live; when attack frames are in the majority, the subject is determined to be non-live.
In a second aspect, the invention further provides an automatic face auditing system, comprising:
a face detection module, which detects the face bounding rectangle and face key points through a cascaded neural network algorithm;
a face quality evaluation module, which performs face quality evaluation according to the quality attributes of the face pictures and retains high-quality pictures;
a liveness detection module, which detects with a dual-stream convolutional neural network whether the picture shows a real person and filters out pictures judged not to show a real person;
a face comparison module, which extracts the face feature vector of the picture, compares its similarity with the face feature vector of the pre-stored ID photo, and filters out pictures with low similarity.
In a third aspect, the invention further provides an automatic face auditing device, which comprises a memory, a processor, and a computer program stored in the memory and runnable on the processor; the processor, when executing the program, implements the steps of the automatic face auditing method.
In a fourth aspect, the invention further provides a readable storage medium for automatic face auditing, which stores a computer program; when executed by a processor, the computer program implements the steps of the automatic face auditing method.
The technical scheme of the invention is an automatic face auditing method that combines face detection, face quality analysis, face liveness detection and face recognition technologies. The method checks whether the quality of the personal photo uploaded by the user is compliant, whether the photo is genuine, and whether the person matches the ID. In this scheme, quality compliance mainly considers whether the face in the picture is of high quality and easy to recognize; liveness detection mainly considers whether the photo is a copy or a forgery; and person-ID comparison mainly considers whether the ID photo and the personal photo show the same person. The invention fully automates picture auditing: no manual operation is needed, labor costs are reduced, the algorithm is stable, and human error is reduced. In addition, the invention uses a cascaded picture-filtering scheme, which is fast.
Drawings
FIG. 1 is a flowchart illustrating steps of an automatic face audit method according to an embodiment of the present invention.
Detailed Description
The following further describes embodiments of the present invention with reference to the drawings. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Referring to fig. 1, the invention provides an automatic face auditing method, comprising the following steps:
and filtering pictures with resolution outside the threshold range, wherein the pictures with the vertical resolution lower than the threshold or the horizontal resolution lower than the threshold are not satisfied with the image resolution. In the embodiment of the invention, the photos with the vertical direction resolution lower than 640 or the horizontal direction resolution lower than 480 are filtered and deleted.
S10: face detection, namely detecting the coordinates of a face bounding rectangle and the coordinates of face key points through a cascaded neural network algorithm.
A cascaded neural network algorithm predicts the face frame coordinates and the face key point coordinates in the image. The face frame coordinates define a rectangular frame containing the face region; the face key point coordinates are the positions of 106 key points of the face region, covering the eyebrows, eyes, nose, mouth and facial contour.
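As an illustration of this step, the sketch below runs a publicly available cascaded detector, MTCNN from the facenet-pytorch package, as a stand-in only: the patent's own cascade outputs 106 key points, while MTCNN outputs 5, and the file name is hypothetical.

```python
# A minimal sketch of cascaded face detection, assuming the public MTCNN
# implementation from facenet-pytorch as a stand-in for the patent's own
# cascade (MTCNN returns 5 key points, not the 106 described here).
from PIL import Image
from facenet_pytorch import MTCNN

detector = MTCNN(keep_all=True)                 # three-stage cascade (P-Net, R-Net, O-Net)
img = Image.open("upload.jpg").convert("RGB")   # hypothetical uploaded picture

# boxes: (n, 4) face rectangles; landmarks: (n, 5, 2) key-point coordinates
boxes, probs, landmarks = detector.detect(img, landmarks=True)
```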
The face frame size is calculated from the face frame coordinates; when the face ratio is greater than the threshold, the face ratio does not meet the requirement. In this embodiment, when the face ratio is greater than 0.4, the proportion of the face in the whole image is judged to be too large. The face ratio is defined as
face ratio = face frame size / image size.
The distance between the pupils of the two eyes, i.e. the number of pixels between the centers of the two eyes, is calculated from the face key points. When the inter-eye distance is smaller than the threshold, it does not meet the requirement. For example, when the distance between the left and right eyes is less than 40 pixels, the inter-eye distance is too small.
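Together, the resolution, face-ratio and inter-pupillary checks form the first stages of the cascaded filtering. A minimal sketch, using the example thresholds of this embodiment:

```python
import numpy as np

def passes_prefilters(img_w, img_h, box, left_eye, right_eye,
                      min_w=480, min_h=640, max_face_ratio=0.4,
                      min_eye_dist=40.0):
    """Cascaded pre-filters: image resolution, face ratio, inter-pupillary
    distance. Thresholds are the example values from this embodiment."""
    if img_w < min_w or img_h < min_h:                 # resolution check
        return False
    x1, y1, x2, y2 = box                               # face rectangle coordinates
    face_ratio = ((x2 - x1) * (y2 - y1)) / float(img_w * img_h)
    if face_ratio > max_face_ratio:                    # face too large in the frame
        return False
    eye_dist = np.linalg.norm(np.subtract(left_eye, right_eye))
    return eye_dist >= min_eye_dist                    # pixels between eye centers
```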
Face alignment: for each face, a transformation matrix between the extracted face key point coordinates and the standard face key point coordinates is calculated, and the transformation matrix is applied to the initial face image to obtain an aligned face image. The key point coordinates of aligned faces are distributed more consistently, and the face is rectified.
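A sketch of this alignment step follows; the patent does not name the solver for the transformation matrix, so OpenCV's similarity (partial-affine) estimation and a 112x112 output size are assumptions here.

```python
import cv2
import numpy as np

def align_face(img, keypoints, std_keypoints, out_size=(112, 112)):
    """Estimate the transformation between the detected key points and the
    pre-stored standard key points, then warp the image with it."""
    src = np.asarray(keypoints, dtype=np.float32)
    dst = np.asarray(std_keypoints, dtype=np.float32)
    M, _ = cv2.estimateAffinePartial2D(src, dst)   # 2x3 similarity transform
    return cv2.warpAffine(img, M, out_size)
```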
S20: face quality evaluation and screening, namely evaluating face quality over several quality attributes of the face pictures and retaining high-quality pictures.
The face quality evaluation algorithm combines deep learning with traditional image analysis. From the facial features of a face image it produces the following quality attributes: face brightness, left and right face brightness difference, face angle about the y-axis (yaw), face angle about the x-axis (pitch), face angle about the z-axis (roll), expression classification, glasses classification, mask classification, eye state classification, mouth state classification, makeup state classification, face realness (classified as stone statue, CG face, or real face), face blurriness, face occlusion degree, and so on.
Face brightness and the left and right face brightness difference use a traditional algorithm. Specifically, the RGB channels of the face image are converted into a grayscale image in a certain proportion, the face regions are delineated according to the 106 face key points, the face brightness is computed as the mean gray value of the face region, and the left and right brightness difference is computed from the mean gray values of the left and right face halves.
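A sketch of this traditional brightness computation is below. The "certain proportion" for grayscale conversion is not specified, so OpenCV's standard Rec.601 weights (0.299 R + 0.587 G + 0.114 B) are assumed, and the boolean masks are hypothetical placeholders for the key-point-delineated regions.

```python
import cv2
import numpy as np

def face_brightness(img_bgr, face_mask, left_mask, right_mask):
    """Overall face brightness and left/right brightness difference, both
    in [0, 1]. The masks stand in for the regions delineated by the 106
    face key points."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    brightness = float(gray[face_mask].mean())
    lr_diff = abs(float(gray[left_mask].mean()) - float(gray[right_mask].mean()))
    return brightness, lr_diff
```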
The other attributes are obtained by deep learning: a multi-task convolutional neural network is built with the lightweight MobileFaceNet structure as the backbone, and the several task outputs correspond to the respective quality attributes of the face. Quality judgments such as eye state, mouth state, makeup state, face occlusion and mask classification are classification tasks and adopt a softmax loss function as the objective function; face pose angles, image blurriness and the like are regression tasks and adopt a Euclidean loss function as the objective function. The total objective function of the network training is a combination of several Softmax loss functions and Euclidean loss functions; when the tasks are learned jointly, the total objective function is a linear combination of the individual loss functions.
Calculate the Softmax loss $L$:
$$L = -\log(p_i),$$
where $p_i$ is the normalized probability computed for the ground-truth class,
$$p_i = \frac{e^{x_i}}{\sum_{j=1}^{N} e^{x_j}},$$
$x_i$ denotes the output of the $i$-th neuron, and $N$ denotes the total number of classes.
Calculate the Euclidean loss $L$:
$$L = \frac{1}{2}\sum_{i}\left(y_i - \hat{y}_i\right)^2,$$
where $y_i$ is the true label value and $\hat{y}_i$ is the predicted value of the regressor.
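The total objective described above can be expressed as a short training-loss sketch. The PyTorch snippet below is illustrative only: the per-task weights of the linear combination are hyperparameters the patent does not specify.

```python
import torch
import torch.nn.functional as F

def total_loss(cls_logits, cls_labels, reg_preds, reg_targets, weights):
    """Linear combination of softmax (cross-entropy) losses for the
    classification heads and Euclidean (0.5 * sum of squared error) losses
    for the regression heads of the multi-task quality network."""
    loss = torch.zeros(())
    for logits, labels, w in zip(cls_logits, cls_labels, weights["cls"]):
        loss = loss + w * F.cross_entropy(logits, labels)                  # softmax loss
    for pred, target, w in zip(reg_preds, reg_targets, weights["reg"]):
        loss = loss + w * 0.5 * F.mse_loss(pred, target, reduction="sum")  # Euclidean loss
    return loss
```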
After face quality evaluation, face quality screening is performed; the screening factors include the following:
face ratio: and calculating the size of the face frame according to the face frame coordinates, wherein when the face ratio is greater than a threshold value, the face ratio does not meet the requirement. For example: when the face ratio is larger than 0.4, the proportion of the face in the whole image is too large.
Face brightness: the face brightness should be within a reasonable range. For example: the face brightness value is between 0 and 1, and the reasonable face brightness is more than 0.3 and less than 0.8.
Difference in left and right face brightness: the left and right face brightness difference should be less than a threshold. For example: when the left and right face brightness difference is between 0 and 1, the reasonable left and right face brightness difference should be less than 0.4.
Face pose: the face angle (yaw) around the y-axis, the face angle (pitch) around the x-axis, and the face angle (roll) around the z-axis should be within reasonable ranges. For example, within ± 10 °.
Ambiguity: the ambiguity should be less than a threshold. For example: when the ambiguity value is between 0 and 1, the face ambiguity should be less than 0.6.
Shielding: and if the face image is judged to have occlusion of five sense organs and outlines, including wearing sunglasses or a mask, filtering.
Expression: if the face image is judged to be an exaggerated expression, closed eyes and large mouth, filtering is carried out.
The degree of truth: the degree of reality should be greater than the threshold, if the degree of reality is slightly less, it indicates that the face may be a statue face/cartoon face, etc. For example: when the truth value is between 0 and 1, the human face truth value is more than 0.6.
Pictures that do not meet the quality requirements are filtered out accordingly, as the sketch below illustrates.
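Collecting the example thresholds above into a single screening function gives a sketch like the following; the attribute keys are illustrative names, not identifiers from the patent.

```python
def passes_quality_screen(q):
    """q: dict of quality-attribute outputs for one face picture.
    Thresholds are the example values from this embodiment."""
    return (0.3 < q["brightness"] < 0.8                 # overall face brightness
            and q["lr_brightness_diff"] < 0.4           # left/right brightness difference
            and all(abs(q[a]) <= 10.0 for a in ("yaw", "pitch", "roll"))
            and q["blurriness"] < 0.6
            and q["realness"] > 0.6                     # statue/cartoon faces score low
            and not q["occluded"]                       # sunglasses, masks, etc.
            and not q["exaggerated_expression"])        # closed eyes, wide-open mouth
```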
S30: liveness detection, namely detecting with a dual-stream convolutional neural network whether the picture shows a real person, and filtering out pictures judged not to show a real person.
Either of the following two methods can be used for liveness detection:
The first liveness detection process is:
acquiring a depth image, and normalizing the face region in the picture to obtain a processed face depth image;
inputting the RGB face image of the picture and the face depth image into a deep learning network for detection to obtain a liveness judgment for the picture.
Specifically, the deep learning network that judges liveness for a picture uses ResNet as the base network. The network has two input channels, one for the face image and one for the face depth image; after features are extracted on each input branch, the features of the two branches are fused through an SE module, and the fused features pass through several convolution layers to yield the liveness judgment.
Specifically, the objective function of the deep learning network is the focal loss function.
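A structural sketch of such a dual-input network follows. The patent fixes neither the ResNet depth nor the fusion dimensions, so the ResNet-18 trunks, the concatenation plus a single SE module, and the small convolutional head below are all assumptions; training with the focal loss objective mentioned above is omitted.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class SEBlock(nn.Module):
    """Squeeze-and-excitation module used to reweight the fused channels."""
    def __init__(self, ch, r=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
            nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x).view(x.size(0), -1, 1, 1)
        return x * w

class DualStreamLiveness(nn.Module):
    """Dual-input network: one branch for the RGB face image, one for the
    face depth image; branch features are concatenated, passed through an
    SE module, then reduced by a few convolution layers."""
    def __init__(self):
        super().__init__()
        def trunk(in_ch):
            m = models.resnet18(weights=None)
            m.conv1 = nn.Conv2d(in_ch, 64, 7, stride=2, padding=3, bias=False)
            return nn.Sequential(*list(m.children())[:-2])   # keep conv feature maps
        self.rgb_branch = trunk(3)
        self.depth_branch = trunk(1)
        self.se = SEBlock(1024)                              # 512 channels per branch
        self.head = nn.Sequential(
            nn.Conv2d(1024, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, 2))

    def forward(self, rgb, depth):
        fused = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.head(self.se(fused))                     # 2-way live/attack logits
```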
Specifically, the actual depths of the eye and mouth-corner points among the face key points are taken and their mean is computed. The upper normalization limit is this mean plus a fixed value and the lower limit is the mean minus the fixed value; the depth of the face region is then normalized into a grayscale image with pixel values in the range 0-255. Positions whose actual depth is greater than the upper limit or less than the lower limit are assigned the gray value 0.
The normalization formula is
$$V = 255 \cdot \frac{D_{real} - D_{min}}{D_{max} - D_{min}},$$
where $V$ is the gray value after depth normalization (range 0-255), $D_{real}$ is the actual depth of the face region, $D_{max}$ is the upper limit of the actual face depth, and $D_{min}$ is the lower limit of the actual face depth.
The second liveness detection process is:
cropping the face from the original image, converting the RGB channels into the HSV and YCbCr spaces, and superimposing the converted HSV and YCbCr images to obtain a superimposed map; extracting Sobel features from the face region with a Sobel operator to obtain a Sobel feature map;
inputting the Sobel feature map and the superimposed map of the image into the two input channels of the dual-stream neural network respectively, to obtain the liveness judgment for the image.
Specifically, each input image A is convolved with Gx and with Gy to obtain $A_{Gx}$ and $A_{Gy}$; the output image $A_G$ then has pixel values
$$A_G = \sqrt{A_{Gx}^2 + A_{Gy}^2},$$
where Gx denotes the convolution kernel in the x direction and Gy denotes the convolution kernel in the y direction.
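The two network inputs of this method could be prepared with OpenCV as in the sketch below (note that OpenCV exposes the YCbCr family under the name YCrCb, with the chroma channels swapped):

```python
import cv2
import numpy as np

def liveness_inputs(face_bgr):
    """Build the dual-stream inputs: a 6-channel HSV + YCbCr stack and a
    Sobel gradient-magnitude map of the face crop."""
    hsv = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HSV)
    ycc = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb)
    stacked = np.concatenate([hsv, ycc], axis=2)       # superimposed color-space map
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)    # convolution with Gx
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)    # convolution with Gy
    sobel = cv2.magnitude(gx, gy)                      # sqrt(AGx**2 + AGy**2)
    return stacked, sobel
```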
S40: face comparison and authentication, namely extracting the face feature vector of the picture, comparing its similarity with the face feature vector of the standard picture, and outputting the comparison result.
Features are extracted from the high-quality picture using a 50-layer ResNet neural network, which outputs a 512-dimensional floating-point vector recorded as the face feature vector.
The similarity between the face feature vector of the current picture and that of the standard picture is compared by the formula
$$\mathrm{sim}(S_i, S_j) = \frac{S_i \cdot S_j}{\|S_i\| \, \|S_j\|},$$
where $S_i$ is the face feature vector of the current picture and $S_j$ is the face feature vector of the standard picture.
If the similarity is lower than the threshold, the person is judged not to match the ID photo; if the similarity is higher than the threshold, the person is judged to match.
The technical scheme of the invention is thus a face auditing method that combines face detection, face quality analysis, face liveness detection and face recognition technologies, used to check whether the quality of the personal photo uploaded by the user is compliant, whether the photo is genuine, and whether the person matches the ID. Quality compliance mainly considers whether the face in the picture is of high quality and easy to recognize; liveness detection mainly considers whether the photo is a copy or a forgery; person-ID comparison mainly considers whether the ID photo and the personal photo show the same person. The invention fully automates picture auditing: no manual operation is needed, labor costs are reduced, the algorithm is stable, and human error is reduced. In addition, the invention uses a cascaded picture-filtering scheme, which is fast.
In another aspect, the invention further provides an automatic face auditing system, comprising:
a face detection module, which detects the face bounding rectangle and face key points through a cascaded neural network algorithm;
a face quality evaluation module, which performs face quality evaluation according to the quality attributes of the face pictures and retains high-quality pictures;
a liveness detection module, which detects with a dual-stream convolutional neural network whether the picture shows a real person and filters out pictures judged not to show a real person;
a face comparison module, which extracts the face feature vector of the picture, compares its similarity with the face feature vector of the pre-stored ID photo, and filters out pictures with low similarity.
In another aspect, the invention further provides an automatic face auditing device, which comprises a memory, a processor, and a computer program stored in the memory and runnable on the processor; the processor, when executing the computer program, implements the steps of the automatic face auditing method.
In another aspect, the invention further provides a readable storage medium for automatic face auditing, which stores a computer program; when executed by a processor, the computer program implements the steps of the automatic face auditing method.
For websites and applications that require uploaded images to meet certain standards, the invention provides an automatic face auditing method and system based on deep neural networks. The method can be used effectively for information verification, achieving fast face filtering and person-ID comparison.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principles and spirit of the invention, and such variants still fall within the protection scope of the invention.

Claims (11)

1. An automatic face auditing method, characterized by comprising the following steps:
face detection, namely detecting the coordinates of a face bounding rectangle and the coordinates of face key points through a cascaded neural network algorithm;
face quality evaluation and screening, namely evaluating face quality over several quality attributes of the face pictures and retaining high-quality pictures;
liveness detection, namely detecting with a dual-stream convolutional neural network whether the picture shows a real person, and filtering out pictures judged not to show a real person;
face comparison and authentication, namely extracting the face feature vector of the picture, comparing its similarity with the face feature vector of a standard picture, and filtering out pictures with low similarity.
2. The automatic face auditing method of claim 1, characterized in that before face detection, the method further comprises filtering out pictures whose resolution is outside a threshold range.
3. The automatic face auditing method of claim 1, characterized in that after face detection, the method further comprises:
filtering out pictures in which the proportion of the face rectangle size to the picture size is lower than a threshold;
filtering out pictures in which the distance between the two eyes of the face is lower than a threshold.
4. The automatic face auditing method of claim 1, further comprising, after face detection:
face alignment, namely calculating a transformation matrix between the face key point coordinates of a picture and the key point coordinates of a pre-stored standard face, and applying the transformation matrix to the picture to obtain an aligned face image.
5. The automatic face auditing method according to any of claims 1 to 4, wherein the face comparison and authentication process is:
extracting features from the high-quality picture using a 50-layer ResNet neural network, which outputs a 512-dimensional floating-point vector recorded as the face feature vector;
comparing the similarity between the face feature vector of the current picture and that of the standard picture by the formula
$$\mathrm{sim}(S_i, S_j) = \frac{S_i \cdot S_j}{\|S_i\| \, \|S_j\|},$$
where $S_i$ is the face feature vector of the current picture and $S_j$ is the face feature vector of the standard picture;
if the similarity is lower than a threshold, the person is judged not to match the ID photo; if the similarity is higher than the threshold, the person is judged to match.
6. The automatic face auditing method according to any of claims 1 to 4, wherein the quality attributes used for face quality evaluation comprise face pose, eye state, mouth state, makeup state, overall brightness, left and right face brightness difference, blurriness and occlusion;
for face pose, eye state, mouth state, makeup state, blurriness and occlusion, a multi-task convolutional neural network is built with a MobileFaceNet structure as the backbone, and the several task outputs correspond to the respective quality attributes of the face;
eye state, mouth state, makeup state and face occlusion are classification tasks, which adopt a softmax loss function as the objective function;
face pose, image brightness and image blurriness are regression tasks, which adopt a Euclidean loss function as the objective function;
the total objective function of the network training comprises a combination of several softmax loss functions and Euclidean loss functions; when the tasks are learned jointly, the total objective function is a linear combination of the individual loss functions.
7. The automatic face auditing method according to any of claims 1 to 4, wherein the liveness detection process is:
acquiring a depth image, and normalizing the face region in the picture to obtain a processed face depth image;
inputting the RGB face image of the picture and the face depth image into a deep learning network for detection to obtain a liveness judgment for the picture.
8. The automatic face auditing method according to any of claims 1 to 4, wherein the liveness detection process is:
cropping the face from the original image, converting the RGB channels into the HSV and YCbCr spaces, and superimposing the converted HSV and YCbCr images to obtain a superimposed map; extracting Sobel features from the face region with a Sobel operator to obtain a Sobel feature map;
inputting the Sobel feature map and the superimposed map of the image into the two input channels of the dual-stream neural network respectively, to obtain the liveness judgment for the image.
9. An automatic face auditing system, characterized by comprising:
a face detection module, which detects the face bounding rectangle and face key points through a cascaded neural network algorithm;
a face quality evaluation module, which performs face quality evaluation according to the quality attributes of the face pictures and retains high-quality pictures;
a liveness detection module, which detects with a dual-stream convolutional neural network whether the picture shows a real person and filters out pictures judged not to show a real person;
a face comparison module, which extracts the face feature vector of the picture, compares its similarity with the face feature vector of the pre-stored ID photo, and filters out pictures with low similarity.
10. An automatic face auditing device comprising a memory, a processor, and a computer program stored on the memory and operable on the processor, characterized in that the processor, when executing the program, implements the steps of the automatic face auditing method according to any of claims 1-8.
11. A readable storage medium having stored thereon a computer program for automatic face auditing, characterized in that the computer program, when executed by a processor, implements the steps of the automatic face auditing method according to any of claims 1-8.
CN201911387766.9A 2019-12-30 2019-12-30 Automatic face auditing method, system, equipment and readable storage medium Active CN111222433B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911387766.9A CN111222433B (en) 2019-12-30 2019-12-30 Automatic face auditing method, system, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911387766.9A CN111222433B (en) 2019-12-30 2019-12-30 Automatic face auditing method, system, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111222433A 2020-06-02
CN111222433B (en) 2023-06-20

Family

ID=70829143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911387766.9A Active CN111222433B (en) 2019-12-30 2019-12-30 Automatic face auditing method, system, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111222433B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622588A (en) * 2012-03-08 2012-08-01 无锡数字奥森科技有限公司 Dual-certification face anti-counterfeit method and device
CN104143086A (en) * 2014-07-18 2014-11-12 吴建忠 Application technology of portrait comparison to mobile terminal operating system
WO2019100608A1 (en) * 2017-11-21 2019-05-31 平安科技(深圳)有限公司 Video capturing device, face recognition method, system, and computer-readable storage medium
CN108280399A (en) * 2017-12-27 2018-07-13 武汉普利商用机器有限公司 A kind of scene adaptive face identification method
CN109815826A (en) * 2018-12-28 2019-05-28 新大陆数字技术股份有限公司 The generation method and device of face character model
CN110580445A (en) * 2019-07-12 2019-12-17 西北工业大学 Face key point detection method based on GIoU and weighted NMS improvement

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807144A (en) * 2020-06-15 2021-12-17 福建新大陆支付技术有限公司 Testing method of living body detection equipment
CN111753923A (en) * 2020-07-02 2020-10-09 携程计算机技术(上海)有限公司 Intelligent photo album clustering method, system, equipment and storage medium based on human face
CN112200108A (en) * 2020-10-16 2021-01-08 深圳市华付信息技术有限公司 Mask face recognition method
CN112329638A (en) * 2020-11-06 2021-02-05 上海优扬新媒信息技术有限公司 Image scoring method, device and system
CN112528939A (en) * 2020-12-22 2021-03-19 广州海格星航信息科技有限公司 Quality evaluation method and device for face image
CN112528939B (en) * 2020-12-22 2024-09-06 广州海格星航信息科技有限公司 Quality evaluation method and device for face image
CN113282894A (en) * 2021-01-26 2021-08-20 上海欧冶金融信息服务股份有限公司 Identity verification method and system for wind-control full-pitch
CN114093004A (en) * 2021-11-25 2022-02-25 成都智元汇信息技术股份有限公司 Face fusion comparison method and device based on multiple cameras

Also Published As

Publication number Publication date
CN111222433B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
CN111222433B (en) Automatic face auditing method, system, equipment and readable storage medium
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
KR102554724B1 (en) Method for identifying an object in an image and mobile device for practicing the method
CN109948566B (en) Double-flow face anti-fraud detection method based on weight fusion and feature selection
CN108090511B (en) Image classification method and device, electronic equipment and readable storage medium
CN109871845B (en) Certificate image extraction method and terminal equipment
CN109858439A (en) A kind of biopsy method and device based on face
CN109952594A (en) Image processing method, device, terminal and storage medium
CN103902958A (en) Method for face recognition
CN1975759A (en) Human face identifying method based on structural principal element analysis
US20120269443A1 (en) Method, apparatus, and program for detecting facial characteristic points
CN111091075A (en) Face recognition method and device, electronic equipment and storage medium
CN109522883A (en) A kind of method for detecting human face, system, device and storage medium
CN108830175A (en) Iris image local enhancement methods, device, equipment and storage medium
CN112396050B (en) Image processing method, device and storage medium
CN111209820A (en) Face living body detection method, system, equipment and readable storage medium
CN106570447A (en) Face photo sunglass automatic removing method based on gray histogram matching
US12131576B2 (en) Method for verifying the identity of a user by identifying an object within an image that has a biometric characteristic of the user and separating a portion of the image comprising the biometric characteristic from other portions of the image
CN117496019B (en) Image animation processing method and system for driving static image
CN113436735A (en) Body weight index prediction method, device and storage medium based on face structure measurement
Devadethan et al. Face detection and facial feature extraction based on a fusion of knowledge based method and morphological image processing
CN115019364A (en) Identity authentication method and device based on face recognition, electronic equipment and medium
Sablatnig et al. Structural analysis of paintings based on brush strokes
CN113837018B (en) Cosmetic progress detection method, device, equipment and storage medium
CN112800941B (en) Face anti-fraud method and system based on asymmetric auxiliary information embedded network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant