CN106951869B - Living body verification method and device - Google Patents

Living body verification method and device

Info

Publication number
CN106951869B
CN106951869B (application CN201710175495.5A)
Authority
CN
China
Prior art keywords
parameter
image data
threshold
feature
image
Prior art date
Legal status
Active
Application number
CN201710175495.5A
Other languages
Chinese (zh)
Other versions
CN106951869A (en)
Inventor
熊鹏飞
王汉杰
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201710175495.5A
Publication of CN106951869A
Application granted
Publication of CN106951869B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467 Encoded features or binary features, e.g. local binary patterns [LBP]
    • G06V10/56 Extraction of image or video features relating to colour
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/40 Spoof detection, e.g. liveness detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the invention disclose a living body verification method and device. The method includes: obtaining first image data and parsing it to obtain texture features of the first image data, the texture features characterizing at least one of the following attributes: a blur feature of the first image data, a reflection feature of the first image data, and a border feature of the first image data; obtaining a first type parameter corresponding to the texture features based on a classification model; obtaining, through statistical processing, a second type parameter corresponding to the texture features in the first image data, the second type parameter being different from the first type parameter; determining a fusion parameter based on the first type parameter and the second type parameter when the first type parameter is greater than a first type threshold and the second type parameter is greater than a second type threshold; and determining that living body verification has passed when the fusion parameter is greater than a third type threshold.

Description

Living body verification method and device
Technical Field
The present invention relates to face recognition technology, and in particular to a living body verification method and device.
Background
Existing passive living body verification methods fall mainly into three categories: motion-based methods, device-based methods and texture-based methods. Motion-based methods judge whether a three-dimensional depth change exists by analyzing the image background or unconscious behaviors of the user, so as to distinguish a photo from a real person. Device-based methods detect the difference between a real face and a photo/video image from face images acquired under different light sources or light intensities; they rely on the fact that a real face reflects light from a light source differently from a photo or video. Texture-based methods classify directly by analyzing certain types of image features of the image.
All three categories have drawbacks. Motion-based methods still require the user to turn the head or make some sideways facial motion, so they are not fully passive, and they cannot distinguish a replayed video from a real person. Device-based methods can achieve good results but depend heavily on dedicated hardware and do not scale well. Texture-based methods have difficulty describing different attack samples with a single image feature; for example, frequency-domain analysis is ineffective for high-definition images, and reflectance analysis is ineffective for non-reflective images captured in dim light.
Disclosure of Invention
To solve the existing technical problems, the embodiments of the invention provide a living body verification method and device.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
An embodiment of the invention provides a living body verification method, which comprises the following steps:
acquiring first image data, analyzing the first image data, and acquiring texture features of the first image data; the texture features characterize at least one of the following attribute features: the fuzzy characteristic of the first image data, the light reflection characteristic of the first image data and the frame characteristic of the first image data;
obtaining a first class parameter corresponding to the texture feature based on a classification model; and the number of the first and second groups,
obtaining a second type of parameters corresponding to the texture features in the first image data based on a statistical processing mode; the second type of parameter is different from the first type of parameter;
when the first type parameter is larger than a first type threshold value and the second type parameter is larger than a second type threshold value, determining a fusion parameter based on the first type parameter and the second type parameter;
and when the fusion parameter is larger than the third type threshold value, determining that the living body verification is passed.
In the above scheme, the method further comprises: and when the first type parameter is judged not to be larger than the first type threshold, or the second type parameter is not larger than the second type threshold, or the fusion parameter is not larger than the third type threshold, determining that the living body verification is not passed.
In the foregoing solution, the obtaining the texture feature of the first image data includes:
respectively obtaining a first texture feature, a second texture feature and a third texture feature of the first image data; the first texture feature characterizes a blur feature of the first image data; the second texture features characterize reflective features of the first image data; the third texture features characterize border features of the first image data;
the obtaining of the first class parameters corresponding to the texture features based on the pre-configured classification model includes: obtaining a first parameter corresponding to the first texture feature based on a first pre-configured classification model, obtaining a second parameter corresponding to the second texture feature based on a second pre-configured classification model, and obtaining a third parameter corresponding to the third texture feature based on a third pre-configured classification model.
The counting of the second type of parameters corresponding to the texture features in the first image data includes:
and counting a fourth parameter corresponding to the first texture feature, a fifth parameter corresponding to the second texture feature and a sixth parameter corresponding to the third texture feature in the first image data.
In the foregoing solution, when it is determined that the first-class parameter is greater than the first-class threshold and the second-class parameter is greater than the second-class threshold, determining a fusion parameter based on the first-class parameter and the second-class parameter includes:
determining a fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter when it is determined that the first parameter is greater than a first threshold, the second parameter is greater than a second threshold, the third parameter is greater than a third threshold, the fourth parameter is greater than a fourth threshold, the fifth parameter is greater than a fifth threshold and the sixth parameter is greater than a sixth threshold.
In the foregoing solution, when it is determined that the first type parameter is not greater than the first type threshold, or the second type parameter is not greater than the second type threshold, or the fusion parameter is not greater than the third type threshold, determining that the living body verification is not passed includes:
determining that the living body verification is not passed when the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, or the fourth parameter is not greater than the fourth threshold, or the fifth parameter is not greater than the fifth threshold, or the sixth parameter is not greater than the sixth threshold, or the fusion parameter is not greater than the third type threshold.
In the foregoing solution, the obtaining the first texture feature of the first image data includes:
converting the first image data into Hue-Saturation-Value (HSV) model data; performing Local Binary Pattern (LBP) processing on the HSV model data, respectively obtaining first LBP characteristic data corresponding to hue data, second LBP characteristic data corresponding to saturation data and third LBP characteristic data corresponding to brightness data, and taking the first LBP characteristic data, the second LBP characteristic data and the third LBP characteristic data as the first texture characteristic.
In the foregoing solution, obtaining the second texture feature of the first image data includes:
extracting the light reflection feature of the first image data, extracting the color histogram feature of the first image data, and taking the light reflection feature and the color histogram feature as the second texture feature;
wherein the extracting of the reflective feature of the first image data includes: obtaining a reflectivity image of the first image data, and obtaining a reflection image based on the first image data and the reflectivity image; and carrying out blocking processing on the reflectivity image to obtain image blocking gray scale statistical parameters as the light reflection characteristics.
In the foregoing solution, obtaining a third texture feature of the first image data includes:
filtering the first image data to obtain first edge image data of the first image data;
and carrying out LBP processing on the first edge image data to obtain fourth LBP characteristic data representing the third texture characteristic.
In the foregoing solution, the counting a fourth parameter corresponding to the first texture feature in the first image data includes:
performing Gaussian filtering processing on the first image data to obtain Gaussian image data of the first image data;
obtaining difference image data based on the first image data and the Gaussian image data, and obtaining gradient information of the difference image data as the fourth parameter.
In the foregoing solution, the counting a fifth parameter corresponding to the second texture feature in the first image data includes:
obtaining a reflected light image of the first image data; and carrying out binarization processing on the reflective image, partitioning the reflective image based on the image subjected to binarization processing, counting a first proportional relation of an area with brightness meeting a preset threshold value in each partitioned image in the corresponding partitioned image, and calculating the sum of the first proportional relations corresponding to all the partitioned images to serve as the fifth parameter.
In the foregoing solution, the counting a sixth parameter corresponding to the third texture feature in the first image data includes:
identifying a region where a human face in the first image data is located;
performing edge detection processing on the first image data to obtain second edge image data, and identifying a first straight line with a length meeting a first preset condition in the second edge image data;
and extracting a second straight line which is positioned outside the region where the face is positioned in the first straight line and has a slope meeting a second preset condition, and counting the number of the second straight lines to serve as the sixth parameter.
In the foregoing solution, the determining a fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter, and the sixth parameter includes:
respectively obtaining a first weight coefficient corresponding to the first parameter, a second weight coefficient corresponding to the second parameter, a third weight coefficient corresponding to the third parameter, a fourth weight coefficient corresponding to the fourth parameter, a fifth weight coefficient corresponding to the fifth parameter and a sixth weight coefficient corresponding to the sixth parameter by adopting a machine learning algorithm in advance;
obtaining a first product of the first parameter and the first weight coefficient, a second product of the second parameter and the second weight coefficient, a third product of the third parameter and the third weight coefficient, a fourth product of the fourth parameter and the fourth weight coefficient, a fifth product of the fifth parameter and the fifth weight coefficient, and a sixth product of the sixth parameter and the sixth weight coefficient;
and adding the first product, the second product, the third product, the fourth product, the fifth product and the sixth product to obtain the fusion parameter.
The embodiment of the invention also provides a living body verification device, which comprises: an analysis unit, a classification unit, a statistics unit and a fusion unit; wherein,
the analysis unit is used for obtaining first image data and analyzing the first image data;
the classification unit is used for obtaining the texture features of the first image data; the texture features characterize at least one of the following attribute features: the fuzzy characteristic of the first image data, the light reflection characteristic of the first image data and the frame characteristic of the first image data; obtaining a first class parameter corresponding to the texture feature based on a classification model;
the statistical unit is used for obtaining a second type of parameters corresponding to the texture features in the first image data based on a statistical processing mode; the second type of parameter is different from the first type of parameter;
the fusion unit is used for judging whether the first type parameter is larger than a first type threshold value and whether the second type parameter is larger than a second type threshold value; when the first type parameter is larger than the first type threshold value and the second type parameter is larger than the second type threshold value, determining a fusion parameter based on the first type parameter and the second type parameter; and when the fusion parameter is larger than a third type threshold value, determining that the living body verification is passed.
In the foregoing solution, the fusion unit is further configured to determine that the living body verification fails when it is determined that the first type parameter is not greater than the first type threshold, or the second type parameter is not greater than the second type threshold, or the fusion parameter is not greater than the third type threshold.
In the foregoing solution, the classifying unit is configured to obtain a first texture feature, a second texture feature, and a third texture feature of the first image data, respectively; the first texture feature characterizes a degree of blur of the first image data; the second texture features represent the degree of light reflection of the first image data; the third texture feature represents whether the first image data contains a frame or not; the system is further used for obtaining a first parameter corresponding to the first texture feature based on a first pre-configured classification model, obtaining a second parameter corresponding to the second texture feature based on a second pre-configured classification model, and obtaining a third parameter corresponding to the third texture feature based on a third pre-configured classification model;
the statistical unit is configured to perform statistics on a fourth parameter corresponding to the first texture feature, a fifth parameter corresponding to the second texture feature, and a sixth parameter corresponding to the third texture feature in the first image data.
In the foregoing solution, the fusion unit is configured to determine a fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter, and the sixth parameter when it is determined that the first parameter is greater than a first threshold, the second parameter is greater than a second threshold, the third parameter is greater than a third threshold, the fourth parameter is greater than a fourth threshold, the fifth parameter is greater than a fifth threshold, and the sixth parameter is greater than a sixth threshold.
In the foregoing solution, the fusion unit is further configured to determine that the living body verification fails when the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, or the fourth parameter is not greater than the fourth threshold, or the fifth parameter is not greater than the fifth threshold, or the sixth parameter is not greater than the sixth threshold, or the fusion parameter is not greater than the third threshold.
In the above scheme, the classification unit is configured to convert the first image data into HSV model data; carrying out LBP processing on the HSV model data, respectively obtaining first LBP characteristic data corresponding to hue data, second LBP characteristic data corresponding to saturation data and third LBP characteristic data corresponding to brightness data, and taking the first LBP characteristic data, the second LBP characteristic data and the third LBP characteristic data as the first texture characteristic.
In the foregoing solution, the classifying unit is configured to extract a light reflection feature of the first image data, extract a color histogram feature of the first image data, and use the light reflection feature and the color histogram feature as the second texture feature;
the classification unit is used for obtaining a reflectivity image of the first image data and obtaining a reflection image based on the first image data and the reflectivity image; and carrying out blocking processing on the reflectivity image to obtain image blocking gray scale statistical parameters as the light reflection characteristics.
In the foregoing solution, the classifying unit is configured to perform filtering processing on the first image data to obtain first edge image data of the first image data; and carrying out LBP processing on the first edge image data to obtain fourth LBP characteristic data representing the third texture characteristic.
In the foregoing solution, the statistical unit is configured to perform gaussian filtering processing on the first image data to obtain gaussian image data of the first image data; obtaining difference image data based on the first image data and the Gaussian image data, and obtaining gradient information of the difference image data as the fourth parameter.
In the above scheme, the statistical unit is configured to obtain a reflection image of the first image data; and carrying out binarization processing on the reflective image, partitioning the reflective image based on the image subjected to binarization processing, counting a first proportional relation of an area with brightness meeting a preset threshold value in each partitioned image in the corresponding partitioned image, and calculating the sum of the first proportional relations corresponding to all the partitioned images to serve as the fifth parameter.
In the foregoing solution, the statistical unit is configured to identify a region where a face in the first image data is located; performing edge detection processing on the first image data to obtain second edge image data, and identifying a first straight line with a length meeting a first preset condition in the second edge image data; and extracting a second straight line which is positioned outside the region where the face is positioned in the first straight line and has a slope meeting a second preset condition, and counting the number of the second straight lines to serve as the sixth parameter.
In the foregoing solution, the fusion unit is configured to respectively obtain, by using a machine learning algorithm in advance, a first weight coefficient corresponding to the first parameter, a second weight coefficient corresponding to the second parameter, a third weight coefficient corresponding to the third parameter, a fourth weight coefficient corresponding to the fourth parameter, a fifth weight coefficient corresponding to the fifth parameter, and a sixth weight coefficient corresponding to the sixth parameter; obtaining a first product of the first parameter and the first weight coefficient, a second product of the second parameter and the second weight coefficient, a third product of the third parameter and the third weight coefficient, a fourth product of the fourth parameter and the fourth weight coefficient, a fifth product of the fifth parameter and the fifth weight coefficient, and a sixth product of the sixth parameter and the sixth weight coefficient; and adding the first product, the second product, the third product, the fourth product, the fifth product and the sixth product to obtain the fusion parameter.
The embodiments of the invention provide a living body verification method and device. The method includes: obtaining first image data and analyzing it; obtaining texture features of the first image data, the texture features characterizing at least one of the following attributes: the degree of blur of the first image data, the degree of light reflection of the first image data, and whether the first image data contains a frame; obtaining a first type parameter corresponding to the texture features based on a pre-configured classification model; counting a second type parameter corresponding to the texture features in the first image data; determining a fusion parameter based on the first type parameter and the second type parameter when the first type parameter is greater than a first type threshold and the second type parameter is greater than a second type threshold; and determining that living body verification has passed when the fusion parameter is greater than a third type threshold. With the technical solution of the embodiments of the invention, several kinds of texture features are extracted; on the one hand, first type parameters are obtained through classification models and checked against thresholds, and on the other hand, second type parameters corresponding to the texture features in the image data are obtained through statistics of the feature distribution and checked against thresholds; living body verification is finally realized by fusing the first type parameters and the second type parameters.
Drawings
FIG. 1 is a schematic flowchart of a living body verification method according to an embodiment of the present invention;
FIG. 2 is a first schematic flowchart of a living body verification method according to an embodiment of the present invention;
FIGS. 3a to 3d are schematic diagrams of common living body attack sources;
FIGS. 4a to 4c are schematic diagrams of the processing of a first texture feature in a living body verification method according to an embodiment of the present invention;
FIGS. 5a and 5b are schematic diagrams of a first texture feature in a living body verification method according to an embodiment of the present invention;
FIGS. 6a and 6b are schematic diagrams of a second texture feature in a living body verification method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a third texture feature in a living body verification method according to an embodiment of the present invention;
FIG. 8 is a second schematic flowchart of a living body verification method according to an embodiment of the present invention;
FIG. 9 is a diagram of an effect curve of a living body verification method according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of the composition of a living body verification device according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of the hardware composition of a living body verification device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Before describing in detail the living body authentication method of the embodiment of the present invention, a general implementation of the living body authentication scheme of the embodiment of the present invention will be explained first. FIG. 1 is a schematic flowchart of an embodiment of a method for in-vivo authentication; as shown in fig. 1, the living body verification method of the embodiment of the present invention may include the following stages:
stage 1: the input video stream, i.e., the liveness verification device, obtains the image data.
And (2) stage: the living body verification device performs face detection.
And (3) stage: and (4) in-vivo detection, after the detection result shows that the living body exists, entering a stage 4: sending the image data to a background for face verification; and after the detection result shows that the living body is not the living body, re-entering the living body detection stage. The specific implementation process of the living body detection can be shown with reference to the following description of the living body verification method provided by the embodiment of the invention.
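For orientation only, the following Python sketch (not part of the patent text) shows one way the staged flow of FIG. 1 could be wired together. OpenCV's Haar cascade detector, the function names run_pipeline, liveness_check and send_to_backend, and all parameter values are illustrative assumptions; the liveness check itself is only stubbed here and corresponds to the method detailed in the rest of this description.

```python
import cv2


def liveness_check(frame, face_box):
    """Placeholder for stage 3: the texture-feature based check described in
    the rest of this document (classification scores, statistical scores and
    their fusion)."""
    return True


def send_to_backend(frame):
    """Placeholder for stage 4: forward the frame for backend face verification."""
    pass


def run_pipeline(video_source=0):
    """Stages 1 to 4 of FIG. 1: read the video stream, detect a face, run the
    liveness check and only forward frames that pass it."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_source)                  # stage 1: input video stream
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, 1.1, 5)   # stage 2: face detection
        if len(faces) == 0:
            continue
        if liveness_check(frame, faces[0]):               # stage 3: living body detection
            send_to_backend(frame)                        # stage 4: backend face verification
            break
    cap.release()
```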
An embodiment of the invention provides a living body verification method. FIG. 2 is a first schematic flowchart of a living body verification method according to an embodiment of the present invention; as shown in fig. 2, the method includes:
step 101: acquiring first image data, analyzing the first image data, and acquiring texture features of the first image data; the texture features characterize at least one of the following attribute features: the image processing device comprises a fuzzy feature of the first image data, a light reflection feature of the first image data and a frame feature of the first image data.
Step 102: and obtaining a first class parameter corresponding to the texture feature based on a classification model.
Step 103: obtaining a second type of parameters corresponding to the texture features in the first image data based on a statistical processing mode; the second type of parameters is different from the first type of parameters.
Step 104: and when the first type parameter is larger than a first type threshold value and the second type parameter is larger than a second type threshold value, determining a fusion parameter based on the first type parameter and the second type parameter.
Step 105: and when the fusion parameter is larger than the third type threshold value, determining that the living body verification is passed.
As an embodiment, the method further comprises: and when the first type parameter is judged not to be larger than the first type threshold value or the second type parameter is judged not to be larger than the second type threshold value, determining that the living body verification is not passed.
The living body verification method provided by the embodiment of the invention is applied to living body verification equipment. The living body verification device may specifically be an electronic device having an image acquisition unit to obtain image data by the image acquisition unit; the electronic device may specifically be a mobile device such as a mobile phone and a tablet computer, or may also be a personal computer, an access control device configured with an access control system (specifically, a system for controlling an access passage), and the like; the image acquisition unit may be a camera disposed on the electronic device.
In this embodiment, a living body verification device (in the following embodiments of the present invention, the living body verification device is simply referred to as a device) obtains image data by an image acquisition unit, and then analyzes the image data to obtain texture features of the first image data; wherein the obtained image data includes a plurality of frame images.
In general, the sources used to impersonate a living face and pass living body verification (such attempts may be called attacks) mainly include: printed photos, photos displayed on a display/screen, replayed video, and the like. FIGS. 3a to 3d are schematic diagrams of common living body attack sources. Analyzing these types of images, as shown in figs. 3a to 3d, shows that different types of attack images have different characteristics: for example, a printed photograph usually includes a border, and an image shown on a display screen typically has moire patterns, is less sharp than an image of a real person, and is reflective. Of course, these characteristics are not each limited to a single type of attack sample. The present embodiment therefore obtains texture features of the first image data based on the above attack characteristics.
As an embodiment, the obtaining the texture feature of the first image data includes: respectively obtaining a first texture feature, a second texture feature and a third texture feature of the first image data; the first texture feature characterizes a blur feature of the first image data; the second texture features characterize reflective features of the first image data; the third texture features characterize border features of the first image data; correspondingly, the obtaining of the first class parameters corresponding to the texture features based on the preconfigured classification model includes: obtaining a first parameter corresponding to the first texture feature based on a first pre-configured classification model, obtaining a second parameter corresponding to the second texture feature based on a second pre-configured classification model, and obtaining a third parameter corresponding to the third texture feature based on a third pre-configured classification model. Correspondingly, the counting the second type of parameters corresponding to the texture features in the first image data includes: and counting a fourth parameter corresponding to the first texture feature, a fifth parameter corresponding to the second texture feature and a sixth parameter corresponding to the third texture feature in the first image data.
Specifically, in this embodiment, the obtaining the first texture feature of the first image data includes: converting the first image data into HSV model data; carrying out LBP processing on the HSV model data, respectively obtaining first LBP characteristic data corresponding to hue data, second LBP characteristic data corresponding to saturation data and third LBP characteristic data corresponding to brightness data, and taking the first LBP characteristic data, the second LBP characteristic data and the third LBP characteristic data as the first texture characteristic.
In this embodiment, the first texture feature represents a blur feature of the first image data; the blur feature may specifically be a feature representing a blur degree of the first image data, that is, the blur feature may specifically be a feature that is presented when a sharpness degree of a texture and a boundary in the first image data does not meet a preset requirement; in one embodiment, the fuzzy feature may be specifically represented by an LBP feature.
Specifically, the first image data may be red, green and blue (RGB) image data; converting the RGB data into HSV model data yields H model data representing hue, S model data representing saturation, and V model data representing lightness. LBP processing is performed on the H model data, the S model data and the V model data respectively, so as to obtain image gradient information in each of them. Taking the H model data as an example, gray-scale processing is performed on the H model data to obtain a gray-scale image, and the relative gray-scale relationship between each feature point in the gray-scale image and its eight adjacent feature points is then determined. As shown in fig. 4a, this relationship is evaluated over a three-by-three matrix of feature points, the gray scale of each feature point being as shown in fig. 4a; the gray-scale value of each feature point expressed numerically is shown in fig. 4b. The gray scales of the eight adjacent feature points are compared with the gray scale of the central feature point: if the gray scale of an adjacent feature point is greater than that of the central feature point, its value is recorded as 1; conversely, if the gray scale of the adjacent feature point is less than or equal to that of the central feature point, its value is recorded as 0, as shown in fig. 4c. Concatenating the values of the adjacent feature points then gives an 8-bit binary string, which can be understood as a gray value distributed over (0, 255). In a specific implementation, as shown in fig. 4c, if the feature point at the top left corner is used as the starting feature point and the values are arranged clockwise, the resulting 8-bit string is 10001111. A binary string corresponding to each feature point (as the central feature point) in the processed image can thus be obtained. Further, in order to remove redundancy, only the binary strings in which the number of 0/1 changes is smaller than 2 are counted. For example, in the string 10001111, the first and second bits change between 1 and 0 once, and the fourth and fifth bits change between 0 and 1 once, two changes in total, so the condition "fewer than two 0/1 changes" is not satisfied. In the string 00001111, by contrast, there is only one 0/1 change, between the fourth and fifth bits, so the condition is satisfied. The counted binary strings are then mapped into the range (0, 58), and the mapped data can be used as the first LBP feature data corresponding to the hue data; this also greatly reduces the amount of data processing.
The above processing procedure can be specifically realized by the following expression:
LBP = [code0, code1, …, code7] (1)
code(m,n) = Img(y+m, x+n) > Img(y,x) ? 1 : 0 (2)
wherein LBP in expression (1) represents a relative relationship between a display parameter of a certain feature point in the first image data and a display parameter of an adjacent feature point; the feature point is any feature point in the first image data; code0, code1, … … and code7 respectively represent display parameters of feature points adjacent to the feature points; in one embodiment, the display parameter may be a gray scale value, but may also be other display parameters. Expression (2) represents that the grayscale value of the feature point (y + m, x + n) is compared with the grayscale value of the feature point (y, x), and if the grayscale value of the feature point (y + m, x + n) is greater than the grayscale value of the feature point (y, x), the binary character string code (m, n) of the feature point (m, n) is recorded as 1, otherwise, it is recorded as 0.
Similarly, the second LBP feature data and the third LBP feature data may also be obtained by the above data method, and are not described herein again. Further, after the obtained first LBP feature data, second LBP feature data and third LBP feature data are concatenated to form the first texture feature, it may be understood that three 59-dimensional LBP feature data (including the first LBP feature data, the second LBP feature data and the third LBP feature data) are sequentially concatenated. FIGS. 5a and 5b are schematic diagrams of a first texture feature in a liveness verification method according to an embodiment of the invention; FIG. 5a is a first texture feature extracted from image data previously determined to be a live face; fig. 5b shows a first texture feature extracted from image data of a face determined in advance to be a non-living body.
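As an illustrative aid, the sketch below shows one way to compute the three 59-dimensional LBP histograms on the H, S and V channels and concatenate them into the first texture feature. It uses OpenCV and NumPy, follows the conventional uniform-pattern definition (at most two circular 0/1 transitions, giving 58 uniform codes plus one shared bin), and the function names and the histogram normalization are assumptions rather than the disclosed implementation.

```python
import cv2
import numpy as np


def uniform_lbp_histogram(channel):
    """59-bin uniform LBP histogram of a single-channel uint8 image.
    Uniform codes (at most two circular 0/1 transitions) get bins 0..57,
    all other codes share bin 58."""
    img = channel.astype(np.int32)
    center = img[1:-1, 1:-1]
    # eight neighbours, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neigh > center).astype(np.int32) << bit)

    # look-up table mapping each 8-bit code to its bin
    lut = np.full(256, 58, dtype=np.int32)
    uniform = [c for c in range(256)
               if bin(c ^ (((c << 1) & 0xFF) | (c >> 7))).count("1") <= 2]
    for i, c in enumerate(uniform):
        lut[c] = i
    hist = np.bincount(lut[codes].ravel(), minlength=59).astype(np.float32)
    return hist / max(hist.sum(), 1.0)


def first_texture_feature(bgr_image):
    """Concatenate the 59-bin LBP histograms of the H, S and V channels
    into one 177-dimensional blur-texture descriptor."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    return np.concatenate([uniform_lbp_histogram(hsv[:, :, i]) for i in range(3)])
```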
In this embodiment, obtaining the second texture feature of the first image data includes: extracting the light reflection feature of the first image data, extracting the color histogram feature of the first image data, and taking the light reflection feature and the color histogram feature as the second texture feature; wherein the extracting of the reflective feature of the first image data includes: obtaining a reflectivity image of the first image data, and obtaining a reflection image based on the first image data and the reflectivity image; and carrying out blocking processing on the reflectivity image to obtain image blocking gray scale statistical parameters as the light reflection characteristics.
In this embodiment, the second texture feature represents a reflective feature of the first image data; the light reflection feature may specifically be a feature representing a highlight region distribution and an image chromaticity distribution in the first image data. Specifically, the second texture features characterizing the light reflecting features include two types: one is a feature describing the distribution of highlight regions of an image, wherein the highlight regions may be regions where the brightness parameter reaches a preset threshold; the other type corresponds to the color chromaticity distribution caused by the difference of the image reflectivity, namely the color histogram feature. Since an image (for example, a print photograph included in an attack mode of a non-living human face, an image displayed on a display/display screen, and the like can be understood as a secondary shot) in the secondary shot is approximately planar, and the material is different from that of a real human face, the color is easily changed. Specifically, as first image data of an RGB image, a reflectance image of the first image data is obtained in an RGB color space, and a reflection image is obtained based on the first image data and the reflectance image; specifically, the reflection image is a difference between the first image data and the reflectance image thereof. Wherein the reflectance image can be acquired as shown by the following expression:
Spect(y,x)=(1-max(max(r(y,x)*t,g(y,x)*t),b(y,x)*t))*255 (3)
t=1.0/(r(y,x)+g(y,x)+b(y,x)) (4)
wherein Spect (y, x) represents reflectance data of a feature point (y, x) in the first image data; r (y, x) represents data of the feature point (y, x) corresponding to the red channel in the RGB color space; g (y, x) represents data of the feature point (y, x) corresponding to the green channel in the RGB color space; b (y, x) represents data of the feature point (y, x) corresponding to the blue color channel in the RGB color space.
Further, blocking the reflective image, and selecting the mean value (mean) and the variance (delta) of the image blocks as reflective characteristics; since the reflectance image is specifically a grayscale image, mean and delta of the image squares are specifically represented by mean and delta of the grayscale values. FIGS. 6a and 6b are schematic diagrams of a second texture feature in a liveness verification method according to an embodiment of the invention; FIG. 6a is a left image of the collected image data corresponding to the non-living human face, and a right image of the collected image data is a reflected light image obtained after the image data is processed; fig. 6b shows the left image of the acquired image data corresponding to the face of the living body, and the right image of the acquired image data is the reflected light image obtained after the image data is processed.
For the color histogram feature, the H model data representing hue, the S model data representing saturation and the V model data representing lightness can be obtained from the first image data HSV model data, respectively. And respectively projecting the H model data, the S model data and the V model data to a 32-dimensional space to obtain a 32768-dimensional color histogram. And selecting the 100-dimensional feature with the highest color histogram component as the color histogram feature of the first image data.
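A possible NumPy/OpenCV sketch of the second texture feature follows: reflectance per expressions (3) and (4), blockwise mean/variance of the reflection image, and the 32x32x32 HSV colour histogram. The 4x4 block grid, the use of the grayscale image when subtracting the reflectance image, and the per-image selection of the 100 largest histogram bins are assumptions where the text leaves details open.

```python
import cv2
import numpy as np


def reflectance_image(bgr_image):
    """Per-pixel reflectance per expressions (3)-(4):
    Spect = (1 - max(r, g, b) / (r + g + b)) * 255 (OpenCV images are BGR)."""
    img = bgr_image.astype(np.float32)
    b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]
    t = 1.0 / np.maximum(r + g + b, 1e-6)          # guard against division by zero
    return (1.0 - np.maximum(np.maximum(r * t, g * t), b * t)) * 255.0


def specular_block_stats(bgr_image, grid=4):
    """Blockwise mean/variance of the reflection image (image minus its
    reflectance image; the grayscale subtraction and the 4x4 grid are
    assumptions), used as the highlight-distribution part of the feature."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    reflection = gray - reflectance_image(bgr_image)
    h, w = reflection.shape
    feats = []
    for by in range(grid):
        for bx in range(grid):
            block = reflection[by * h // grid:(by + 1) * h // grid,
                               bx * w // grid:(bx + 1) * w // grid]
            feats.extend([block.mean(), block.var()])
    return np.array(feats, dtype=np.float32)


def color_histogram_feature(bgr_image, dims=100):
    """32x32x32 HSV colour histogram; keep the `dims` largest components
    (the description keeps the 100 highest bins)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [32, 32, 32],
                        [0, 180, 0, 256, 0, 256]).ravel()
    hist /= max(hist.sum(), 1.0)
    return np.sort(hist)[::-1][:dims]
```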
In this embodiment, obtaining the third texture feature of the first image data includes: filtering the first image data to obtain first edge image data of the first image data; and carrying out LBP processing on the first edge image data to obtain fourth LBP characteristic data representing the third texture characteristic.
In this embodiment, the third texture feature represents a border feature of the first image data; the border feature may specifically be a feature that characterizes whether the first image data has a border; the frame feature may specifically be a straight line feature presented in a region other than the region where the face is located in the first image data.
Specifically, to obtain the frame feature in the first image data, first, filtering processing is performed on the first image data to obtain a first edge image corresponding to the first image data. As an embodiment, a Sobel operator (specifically, two sets of matrices for lateral edge detection and longitudinal edge detection) may be used to perform a planar convolution with the pixel values in the first image data, so as to obtain a first edge image corresponding to the first image data. Further, performing gray level processing on the first edge image to obtain a gray level image corresponding to the first edge image, determining a relative gray level relationship between each feature point and eight adjacent feature points in the gray level image, for example, a gray level image of a feature point matrix of three by three, performing numerical representation on the gray level of each feature point, comparing the gray levels of the eight adjacent feature points with the gray level of a central feature point, and if the gray levels of the adjacent feature points are greater than the gray level of the central feature point, recording the value of the adjacent feature points as 1; on the contrary, if the gray scale of the adjacent feature points is less than or equal to that of the central feature point, the value of the adjacent feature points is recorded as 0; further, concatenating the values of adjacent feature points results in an 8-bit binary string, which can be understood as a gray value distributed over (0, 255). In the specific implementation process, as shown in fig. 4c, if the first feature point at the top left corner is used as the starting feature point and arranged clockwise, the obtained 8-bit character string is 10001111. A binary string corresponding to each feature point (i.e., the center feature point) in the process image may thus be obtained. Further, in order to remove redundancy, counting binary strings with 0 and 1 changes smaller than 2 in the binary strings corresponding to each feature point; for example, in 10001111 of the character string, the first and second bits 0 and 1 are changed 1 time, the fourth and fifth bits 0 and 1 are changed 1 time, and the change is performed twice in total, and the condition of "0 and 1 are changed less than 2" is not satisfied. For another example, in 00001111, the character string is changed by 0 and 1 times only from the fourth and fifth bits, and the condition of "0 and 1 change less than 2" is satisfied. Then, mapping the statistical binary character string into a range of (0, 58), wherein the mapped data can be used as fourth LBP characteristic data corresponding to the third texture characteristic; this also greatly reduces the amount of data processing. Because other smooth parts are filtered out, the fourth LBP characteristic data corresponding to the first edge image can highlight the edge part in the image and describe the frame characteristic of the image.
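A short sketch of the third texture feature is given below, under the assumption that the horizontal and vertical Sobel responses are combined by magnitude before the LBP step; uniform_lbp_histogram refers to the earlier first-texture-feature sketch.

```python
import cv2
import numpy as np


def third_texture_feature(bgr_image):
    """Border descriptor: Sobel edge image of the grayscale input, followed
    by the same 59-bin LBP histogram used for the first texture feature."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal Sobel response
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical Sobel response
    edge = cv2.convertScaleAbs(cv2.magnitude(gx, gy)) # first edge image
    return uniform_lbp_histogram(edge)
```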
The above technical solution performs texture feature extraction on the first image data based on three characteristics. In this embodiment, a large amount of sample data is collected in advance; the sample data may specifically include the first texture feature and its corresponding type (i.e., the blur type) extracted by the above texture feature extraction method, and/or the second texture feature and its corresponding type (i.e., the reflection type), and/or the third texture feature and its corresponding type (i.e., the frame type), i.e., the sample data may include at least one of the above three texture features together with its type. Machine learning training is performed on each type of texture feature to obtain a classification model corresponding to that type. Specifically, a first classification model is obtained for the blur type. For example, as shown in fig. 5b, the first texture features obtained for image data marked in advance as the blur type all exhibit stripe features, such as the diagonal stripes in the first and third images of fig. 5b and the approximately horizontal stripes in the second image; machine learning training may be performed on the common features (e.g., the stripe features) of the first texture features corresponding to the blur type, to obtain the first classification model corresponding to the first texture feature. Similarly, a second classification model is obtained for the reflection type, and a third classification model is obtained for the frame type.
In this embodiment, the obtained texture features (including at least one of the following texture features: the first texture feature, the second texture feature, and the third texture feature) are input into the classification models of the corresponding types, so as to obtain corresponding first-class parameters. For example, inputting the obtained first texture feature into a first classification model corresponding to a blur type, and obtaining a first parameter corresponding to the first texture feature, wherein the first parameter represents a blur degree of the first image data; inputting the obtained second texture features into a second classification model corresponding to the light reflection type, and obtaining second parameters corresponding to the second texture features, wherein the second parameters represent the light reflection degree of the first image data; inputting the obtained third texture features into a third classification model corresponding to the frame type, and obtaining third parameters corresponding to the third texture features, wherein the third parameters represent whether the first image data contains frames or not. Further, a threshold value is correspondingly configured corresponding to each classification model, and when the obtained parameter is not greater than the corresponding threshold value, the person contained in the first image data is determined to be a non-living body, namely, the living body verification is determined not to pass; correspondingly, when the obtained parameters are larger than the corresponding threshold values, the following statistical classification results of the three characteristics are further combined for subsequent fusion judgment. For example, when the first parameter is not greater than a first threshold, or the second parameter is not greater than a second threshold, or the third parameter is not greater than a third threshold, it is determined that the person included in the first image data is non-living, i.e., it is determined that the living verification is not passed.
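The description only states that each classification model is obtained by machine learning training; the sketch below uses a scikit-learn SVM purely as one plausible choice, with the live-class probability serving as the first/second/third parameter that is later compared against its per-classifier threshold.

```python
import numpy as np
from sklearn.svm import SVC


def train_feature_classifier(features, labels):
    """Train one classifier per texture-feature type.
    features: N x D array of one texture feature; labels: 1 = live, 0 = attack.
    The RBF-kernel SVM is an assumption, not the disclosed model."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(np.asarray(features), np.asarray(labels))
    return clf


def classifier_score(clf, feature):
    """Return the live-class probability for one feature vector; this plays
    the role of the first/second/third parameter in the text."""
    return float(clf.predict_proba(np.asarray(feature).reshape(1, -1))[0, 1])
```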
In this embodiment, the counting a fourth parameter corresponding to the first texture feature in the first image data includes: performing Gaussian filtering processing on the first image data to obtain Gaussian image data of the first image data; obtaining difference image data based on the first image data and the Gaussian image data, and obtaining gradient information of the difference image data as the fourth parameter.
Specifically, gaussian filtering processing is performed on the first image data to obtain gaussian image data; and counting gradient information of a difference image of the first image data and the Gaussian image data as the fourth parameter. The above process can be specifically realized by the following expression:
Gx(y,x)=Img(y,x+1)-Img(y,x-1) (5)
Bx(y,x)=Img(y,x+kernel)-Img(y,x-kernel) (6)
Vx(y,x)=max(0,Gx(y,x)-Bx(y,x)) (7)
Gy(y,x)=Img(y+1,x)-Img(y-1,x) (8)
By(y,x)=Img(y+kernel,x)-Img(y-kernel,x) (9)
Vy(y,x)=max(0,Gy(y,x)-By(y,x)) (10)
Blur=max(Sum(Gx)-Sum(Vx),Sum(Gy)-Sum(Vy)) (11)
wherein Gx(y, x) represents the gradient of the feature point (y, x) along the x-axis; Bx(y, x) represents the difference between the two pixels at a lateral distance of kernel to the left and right of the feature point (y, x), where kernel denotes a distance that can be varied; Vx(y, x) is the maximum of 0 and the difference between Gx(y, x) and Bx(y, x); Gy(y, x) represents the gradient of the feature point (y, x) along the y-axis; By(y, x) represents the difference between the two pixels at a vertical distance of kernel above and below the feature point (y, x); Vy(y, x) is the maximum of 0 and the difference between Gy(y, x) and By(y, x); and Blur is the fourth parameter, characterizing the degree of blur of the first image data. Sum(Gx) is the sum of the gradients of all feature points in the first image data along the x-axis; Sum(Gy) is the sum of the gradients along the y-axis; Sum(Vx) is the sum of Vx over all feature points; and Sum(Vy) is the sum of Vy over all feature points.
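The following is a direct NumPy transcription of expressions (5) to (11) on the grayscale image. The expressions do not fix the kernel distance and are interpreted here with absolute differences; kernel = 3 is an assumption, and the Gaussian-filtered difference image mentioned in the prose is not repeated here.

```python
import cv2
import numpy as np


def blur_statistic(bgr_image, kernel=3):
    """Fourth parameter per expressions (5)-(11); `kernel` is the variable
    pixel distance mentioned in the text (its value here is an assumption)."""
    img = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    k = kernel
    # horizontal direction: Gx (stride 1), Bx (stride kernel), Vx (aligned)
    gx = np.abs(img[:, 2:] - img[:, :-2])
    bx = np.abs(img[:, 2 * k:] - img[:, :-2 * k])
    vx = np.maximum(0.0, gx[:, k - 1:gx.shape[1] - (k - 1)] - bx)
    # vertical direction: Gy, By, Vy
    gy = np.abs(img[2:, :] - img[:-2, :])
    by = np.abs(img[2 * k:, :] - img[:-2 * k, :])
    vy = np.maximum(0.0, gy[k - 1:gy.shape[0] - (k - 1), :] - by)
    # expression (11)
    return max(gx.sum() - vx.sum(), gy.sum() - vy.sum())
```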
In this embodiment, the counting the fifth parameter corresponding to the second texture feature in the first image data includes: obtaining a reflected light image of the first image data; and carrying out binarization processing on the reflective image, partitioning the reflective image based on the image subjected to binarization processing, counting a first proportional relation of an area with brightness meeting a preset threshold value in each partitioned image in the corresponding partitioned image, and calculating the sum of the first proportional relations corresponding to all the partitioned images to serve as the fifth parameter. The above processing procedure can be specifically realized by the following expression:
Spec=sum(count(Rect(y,x)=1)/count(Rect)) (12)
wherein Spec is the fifth parameter, characterizing the degree of light reflection of the first image data; Rect(y, x) represents the pixel value at (y, x) in a block of the binarized reflection image; count(Rect(y, x) = 1) is the number of feature points in that block whose value is 1; and count(Rect) is the total number of feature points in the block.
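A sketch of expression (12): the reflection image (reusing reflectance_image from the earlier sketch) is binarized, split into blocks, and the per-block fractions of bright pixels are summed. The 4x4 grid and the binarization threshold of 200 are assumptions; the text only requires the brightness to meet a preset threshold.

```python
import cv2
import numpy as np


def specular_statistic(bgr_image, grid=4, thresh=200):
    """Fifth parameter per expression (12); grid and thresh are assumptions."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    reflection = np.clip(gray - reflectance_image(bgr_image), 0, 255).astype(np.uint8)
    _, binary = cv2.threshold(reflection, thresh, 1, cv2.THRESH_BINARY)
    h, w = binary.shape
    spec = 0.0
    for by in range(grid):
        for bx in range(grid):
            block = binary[by * h // grid:(by + 1) * h // grid,
                           bx * w // grid:(bx + 1) * w // grid]
            spec += block.mean()        # count(Rect == 1) / count(Rect)
    return spec
```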
In this embodiment, counting a sixth parameter corresponding to the third texture feature in the first image data includes: identifying a region where a human face in the first image data is located; performing edge detection processing on the first image data to obtain second edge image data, and identifying a first straight line with a length meeting a first preset condition in the second edge image data; and extracting a second straight line which is positioned outside the region where the face is positioned in the first straight line and has a slope meeting a second preset condition, and counting the number of the second straight lines to serve as the sixth parameter.
Specifically, edge detection is performed on the first image data; as an embodiment, the edge detection may be performed on the first image data by using a Canny edge detection algorithm, which may specifically include: firstly, converting the first image data (specifically, RGB image data) into a gray image, and performing gaussian filtering processing on the gray image to remove image noise; further calculating image gradient information, and calculating the edge amplitude and direction of the image according to the image gradient information; applying non-maximum value inhibition to the image edge amplitude, only reserving the point with the maximum local amplitude change, and generating a refined edge; and the double-threshold edge detection is adopted and edges are connected, so that the extracted edge points have higher robustness, and the second edge image data is generated. Further, performing hough (hough) transformation on the second edge image data to find a straight line in the second edge image data; further, identifying a first straight line of which the length meets a first preset condition in all straight lines; as an embodiment, the identifying a first straight line of all straight lines, the length of which satisfies a first preset condition, includes: a straight line having a length exceeding half the width of the first image data among all straight lines is identified as a first straight line. On the other hand, in the process of analyzing the first image data, the face in the first image data is detected to obtain the area where the face is located, and the edge of the area where the face is located can be represented by the output face frame. Further identifying the first straight line to obtain a straight line which is outside the area where the face is located and has a slope meeting a second preset condition in the first straight line as a second straight line; wherein, the second straight line whose slope satisfies the second preset condition includes: a straight line which is outside the area where the face is located and has an angle with a straight line where the edge of the area where the face is located, the straight line not exceeding a preset angle, in the first straight line, is used as the second straight line; as an example, the preset angle is, for example, 30 degrees, but is not limited to the above-listed examples, of course. A schematic of the second straight line obtained may be as shown in fig. 7. The above-mentioned second line obtaining process can be realized by the following expression:
Line = sum(count(Canny(y, x)))   (13)
wherein Line represents the number of second straight lines; sum represents a summation operation; Canny(y, x) represents a straight line passing through the edge pixel point (y, x) obtained by the Canny edge detection algorithm; count represents counting the number of straight lines passing through the edge pixel point (y, x).
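As an illustrative sketch of this statistic (Python with OpenCV and NumPy assumed), the following code combines Canny edge detection, a probabilistic Hough transform, and the length, position and angle filtering described above; the threshold values, the face_box format and the helper name count_frame_lines are assumptions rather than details fixed by the embodiment.

```python
import cv2
import numpy as np

def count_frame_lines(image_bgr, face_box, min_len_ratio=0.5, max_angle_deg=30):
    """Sketch of the sixth-parameter statistic: count long straight lines that lie
    outside the detected face region and are roughly parallel to the face-box edges."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)                  # remove noise before Canny
    edges = cv2.Canny(gray, 50, 150)                          # double-threshold edge detection
    h, w = gray.shape
    # Probabilistic Hough transform; keep only lines longer than half the image width
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=int(w * min_len_ratio), maxLineGap=10)
    if lines is None:
        return 0
    fx, fy, fw, fh = face_box                                 # (x, y, width, height) of the face
    count = 0
    for x1, y1, x2, y2 in lines[:, 0]:
        # discard lines whose midpoint falls inside the face region
        mx, my = (x1 + x2) / 2, (y1 + y2) / 2
        if fx <= mx <= fx + fw and fy <= my <= fy + fh:
            continue
        # deviation from the nearest horizontal/vertical face-box edge
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 90
        if min(angle, 90 - angle) <= max_angle_deg:
            count += 1
    return count                                              # the sixth parameter (Line)
```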
In this embodiment, the determining a fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter, and the sixth parameter includes: respectively obtaining a first weight coefficient corresponding to the first parameter, a second weight coefficient corresponding to the second parameter, a third weight coefficient corresponding to the third parameter, a fourth weight coefficient corresponding to the fourth parameter, a fifth weight coefficient corresponding to the fifth parameter and a sixth weight coefficient corresponding to the sixth parameter by adopting a machine learning algorithm in advance; obtaining a first product of the first parameter and the first weight coefficient, a second product of the second parameter and the second weight coefficient, a third product of the third parameter and the third weight coefficient, a fourth product of the fourth parameter and the fourth weight coefficient, a fifth product of the fifth parameter and the fifth weight coefficient, and a sixth product of the sixth parameter and the sixth weight coefficient; and adding the first product, the second product, the third product, the fourth product, the fifth product and the sixth product to obtain the fusion parameter.
Specifically, the first parameter obtained by the above processing is denoted Blur_s, the second parameter Spec_s, the third parameter Line_s, the fourth parameter Blur, the fifth parameter Spec, and the sixth parameter Line. Further, a machine learning algorithm can be used to learn the weight values, fitting is performed on the six-dimensional components respectively, and the obtained fusion parameter satisfies the following expression, where a1 to a6 are the weight coefficients obtained by the fitting:
Live = a1*Blur_s + a2*Spec_s + a3*Line_s + a4*Blur + a5*Spec + a6*Line   (14)
further, the obtained fusion parameter is compared with a preset third type threshold; when the fusion parameter is smaller than the third type threshold, the face is judged to be non-living, that is, the living body verification is determined not to pass; correspondingly, when the fusion parameter is not less than the third type threshold, the face is determined to be a living face, that is, the living body verification is determined to be passed.
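The following sketch illustrates the weight fitting and the fusion decision of expression (14); the choice of logistic regression, the helper names and the example values are assumptions, since the embodiment only states that a machine learning algorithm fits the six weight coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_fusion_weights(params_matrix, labels):
    """Fit the weight coefficients a1..a6 of expression (14) on labelled samples.
    params_matrix: N x 6 array of (Blur_s, Spec_s, Line_s, Blur, Spec, Line);
    labels: 1 for live faces, 0 for attack samples. Logistic regression stands in
    here for the unspecified machine learning algorithm."""
    clf = LogisticRegression().fit(params_matrix, labels)
    return clf.coef_[0]                              # a1..a6

def fuse_and_decide(params, weights, fusion_threshold):
    """Weighted fusion (expression (14)) followed by the final liveness decision."""
    live = float(np.dot(weights, np.asarray(params, dtype=float)))
    return live >= fusion_threshold                  # True: living face

# Usage (synthetic example values only):
# weights = fit_fusion_weights(train_params, train_labels)
# is_live = fuse_and_decide([0.7, 0.8, 0.9, 0.6, 0.75, 0.85], weights, 0.5)
```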
Based on the foregoing description, stage 3, namely the living body verification process shown in fig. 1, can refer to fig. 8 and includes a classification processing flow for the three texture features and a statistical processing flow for the three texture features; of course, in other embodiments, the texture features are not limited to the blur feature, the reflection feature and the frame feature listed in the embodiments of the present invention, and texture features involved in other attack scenarios also fall within the protection scope of the embodiments of the present invention. Specifically, after the face detection in the image data is completed, the image data is processed correspondingly in each branch, which includes: extracting the blur texture feature in the image data, and inputting the blur texture feature into a blur classifier to obtain a first parameter; comparing the first parameter with a first threshold, judging as a non-living face when the first parameter is smaller than the first threshold, and sending the first parameter into the parameter fusion process when the first parameter is not smaller than the first threshold; extracting the reflection texture feature in the image data, and inputting the reflection texture feature into a reflection classifier to obtain a second parameter; comparing the second parameter with a second threshold, judging as a non-living face when the second parameter is smaller than the second threshold, and sending the second parameter into the parameter fusion process when the second parameter is not smaller than the second threshold; extracting the frame texture feature in the image data, and inputting the frame texture feature into a frame classifier to obtain a third parameter; comparing the third parameter with a third threshold, judging as a non-living face when the third parameter is smaller than the third threshold, and sending the third parameter into the parameter fusion process when the third parameter is not smaller than the third threshold; counting the blur parameter (namely the fourth parameter) in the image data, comparing the fourth parameter with a fourth threshold, judging as a non-living face when the fourth parameter is smaller than the fourth threshold, and sending the fourth parameter into the parameter fusion process when the fourth parameter is not smaller than the fourth threshold; counting the reflection parameter (namely the fifth parameter) in the image data, comparing the fifth parameter with a fifth threshold, judging as a non-living face when the fifth parameter is smaller than the fifth threshold, and sending the fifth parameter into the parameter fusion process when the fifth parameter is not smaller than the fifth threshold; counting the frame parameter (namely the sixth parameter) in the image data, comparing the sixth parameter with a sixth threshold, judging as a non-living face when the sixth parameter is smaller than the sixth threshold, and sending the sixth parameter into the parameter fusion process when the sixth parameter is not smaller than the sixth threshold.
Parameter fusion is then performed on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter; the fusion parameter is further compared with the corresponding threshold, a non-living face is judged when the fusion parameter is smaller than the threshold, a living face is judged when the fusion parameter is not smaller than the threshold, and the flow then enters stage 4, in which the image data is sent to the background for face verification.
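To make the flow of fig. 8 concrete, the sketch below strings the six per-feature checks and the fusion decision together; the dictionaries of classifiers and statistics, and all threshold values, are placeholders standing in for the components described above.

```python
def liveness_pipeline(image, classifiers, statistics, thresholds, weights, fusion_threshold):
    """Cascade sketch of fig. 8: each of the six parameters is first checked against its
    own threshold; only if all pass is the weighted fusion score computed and compared
    with the fusion threshold. 'classifiers' and 'statistics' are assumed callables."""
    params = [
        classifiers["blur"](image),        # first parameter  (Blur_s)
        classifiers["reflection"](image),  # second parameter (Spec_s)
        classifiers["frame"](image),       # third parameter  (Line_s)
        statistics["blur"](image),         # fourth parameter (Blur)
        statistics["reflection"](image),   # fifth parameter  (Spec)
        statistics["frame"](image),        # sixth parameter  (Line)
    ]
    # Any single parameter below its threshold -> non-living face, stop immediately
    for value, threshold in zip(params, thresholds):
        if value < threshold:
            return False
    live = sum(w * p for w, p in zip(weights, params))   # expression (14)
    return live >= fusion_threshold                      # True: proceed to stage 4
```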
The living body verification scheme of the embodiment of the present invention is not limited to passive judgment; it may also be combined with active living body judgment in a fused manner. Because the method does not conflict with active living body detection, and as a passive method it introduces no negative interference in terms of user experience, it can cooperate well with active living body judgment. In living body verification that combines active and passive approaches, the method can serve as preprocessing for the active-action judgment, that is, the subsequent action judgment is carried out only on the premise that the passive check has judged the subject to be a real person; alternatively, the two judgments can be carried out simultaneously, in which case a sample may still be judged as an attack even though the user's action is correct. This can more effectively prevent video attacks.
FIG. 9 is a diagram illustrating effect curves of the living body verification method according to the embodiment of the present invention; fig. 9 shows the Receiver Operating Characteristic (ROC) curves obtained with different algorithms. In the ROC curves shown in fig. 9, the horizontal axis represents the false pass rate and the vertical axis represents the accuracy rate. It can be seen that the living body verification method of the embodiment of the present invention corresponds to the fused (combine) ROC curve: when the error rate is very low, the accuracy is greatly improved, to about 0.8, so the technical scheme provided by the embodiment can well prevent attacks from different types of attack samples while still completing the verification of real faces without affecting user experience. The scheme does not depend on any additional equipment or user interaction, has little impact on computational complexity, and is a completely interference-free scheme. Fig. 9 also shows other living body verification methods that use a single classification algorithm or statistical algorithm: the ROC curve obtained with the blur classification algorithm is the curve corresponding to Blur_s; the ROC curve obtained with the reflection classification algorithm is the curve corresponding to Spec_s; the ROC curve obtained with the frame classification algorithm is the curve corresponding to Line_s; the ROC curve obtained with the blur statistical algorithm is the curve corresponding to Blur; the ROC curve obtained with the reflection statistical algorithm is the curve corresponding to Spec; the ROC curve obtained with the frame statistical algorithm is the curve corresponding to Line; the accuracy of these six individual modes is far lower than that of the fusion mode.
The embodiment of the present invention also provides a living body verification device. FIG. 10 is a schematic diagram showing the composition of a living body verification device according to an embodiment of the present invention; as shown in fig. 10, the device includes: an analysis unit 31, a classification unit 32, a statistical unit 33, and a fusion unit 34; wherein,
the analysis unit 31 is configured to obtain first image data and analyze the first image data;
the classification unit 32 is configured to obtain a texture feature of the first image data; the texture features characterize at least one of the following attribute features: the fuzzy characteristic of the first image data, the light reflection characteristic of the first image data and the frame characteristic of the first image data; obtaining a first class parameter corresponding to the texture feature based on a classification model;
the statistical unit 33 is configured to obtain a second type of parameter corresponding to the texture feature in the first image data based on a statistical processing manner; the second type of parameter is different from the first type of parameter;
the fusion unit 34 is configured to determine whether the first type parameter is greater than a first type threshold and whether the second type parameter is greater than a second type threshold; when the first type parameter is greater than the first type threshold and the second type parameter is greater than the second type threshold, determine a fusion parameter based on the first type parameter and the second type parameter; and when the fusion parameter is greater than a third type threshold, determine that the living body verification is passed.
In an embodiment, the fusion unit 34 is further configured to determine that the living body verification fails when it is determined that the first type parameter is not greater than the first type threshold, or the second type parameter is not greater than the second type threshold, or the fusion parameter is not greater than the third type threshold.
Specifically, as an embodiment, the classifying unit 32 is configured to obtain a first texture feature, a second texture feature and a third texture feature of the first image data, respectively; the first texture feature characterizes a blur feature of the first image data; the second texture features characterize reflective features of the first image data; the third texture features characterize border features of the first image data; the system is further used for obtaining a first parameter corresponding to the first texture feature based on a first pre-configured classification model, obtaining a second parameter corresponding to the second texture feature based on a second pre-configured classification model, and obtaining a third parameter corresponding to the third texture feature based on a third pre-configured classification model;
the statistic unit 33 is configured to count a fourth parameter corresponding to the first texture feature, a fifth parameter corresponding to the second texture feature, and a sixth parameter corresponding to the third texture feature in the first image data.
Further, the fusion unit 34 is configured to determine a fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter and the sixth parameter when it is determined that the first parameter is greater than a first threshold, the second parameter is greater than a second threshold, the third parameter is greater than a third threshold, the fourth parameter is greater than a fourth threshold, the fifth parameter is greater than a fifth threshold, and the sixth parameter is greater than a sixth threshold.
As an embodiment, the fusion unit 34 is further configured to determine that the living body verification is not passed when the first parameter is not greater than the first threshold, or the second parameter is not greater than the second threshold, or the third parameter is not greater than the third threshold, or the fourth parameter is not greater than the fourth threshold, or the fifth parameter is not greater than the fifth threshold, or the sixth parameter is not greater than the sixth threshold, or the fusion parameter is not greater than the third type threshold.
Specifically, in this embodiment, the classifying unit 32 is configured to convert the first image data into HSV model data; carrying out LBP processing on the HSV model data, respectively obtaining first LBP characteristic data corresponding to hue data, second LBP characteristic data corresponding to saturation data and third LBP characteristic data corresponding to brightness data, and taking the first LBP characteristic data, the second LBP characteristic data and the third LBP characteristic data as the first texture characteristic.
Specifically, the first image data may specifically be RGB image data; the RGB data is converted into HSV model data, so that H model data representing hue, S model data representing saturation, and V model data representing lightness are obtained respectively. LBP processing is performed on the H model data, the S model data and the V model data respectively, so as to obtain image gradient information in each of them. Taking the H model data as an example, gray-scale processing is performed on the H model data to obtain a gray image of the H model data, and the relative gray relationship between each feature point in the gray image and its eight adjacent feature points is then determined; as shown in fig. 4a, this relative gray relationship is illustrated for a three-by-three matrix of feature points, with the gray level of each feature point shown in fig. 4a; the gray value of each feature point is expressed numerically, as shown in fig. 4b. Further, the gray levels of the eight adjacent feature points are compared with the gray level of the central feature point; if the gray level of an adjacent feature point is greater than that of the central feature point, the value of that adjacent feature point is recorded as 1; conversely, if the gray level of the adjacent feature point is less than or equal to that of the central feature point, its value is recorded as 0, as shown in fig. 4c. Further, concatenating the values of the adjacent feature points yields an 8-bit binary string, which can be understood as a gray value distributed over (0, 255). In a specific implementation, as shown in fig. 4c, if the first feature point at the top left corner is used as the starting feature point and the values are arranged clockwise, the obtained 8-bit string is 10001111. A binary string corresponding to each feature point (as the central feature point) in the processed image can thus be obtained. Further, in order to remove redundancy, the binary strings in which the number of 0/1 transitions is smaller than 2 are counted among the binary strings corresponding to the feature points; for example, in the string 10001111, the first and second bits contain one 0/1 transition and the fourth and fifth bits contain another, two transitions in total, so the condition "fewer than two 0/1 transitions" is not satisfied. For another example, the string 00001111 contains a single 0/1 transition, between the fourth and fifth bits, and therefore satisfies the condition. The counted binary strings are then mapped into the range (0, 58), and the mapped data can be used as the first LBP feature data corresponding to the hue data; this also greatly reduces the amount of data processing.
Similarly, the second LBP feature data and the third LBP feature data may also be obtained by the above method, and are not described here again. Further, the obtained first LBP feature data, second LBP feature data and third LBP feature data are concatenated to form the first texture feature; this may be understood as sequentially concatenating three 59-dimensional LBP feature vectors (the first, second and third LBP feature data). FIG. 5a shows a first texture feature extracted from image data determined in advance to be a living face; fig. 5b shows a first texture feature extracted from image data determined in advance to be a non-living face.
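As a non-authoritative sketch of the first texture feature (Python with OpenCV and scikit-image assumed), the code below computes a 59-bin uniform-LBP histogram for each HSV channel and concatenates them; the specific LBP implementation and the histogram normalization are assumptions.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def hsv_uniform_lbp_feature(image_bgr):
    """Sketch of the first texture feature: a 59-bin uniform-LBP histogram per HSV channel,
    concatenated into a 3 x 59 = 177-dimensional vector."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    feature = []
    for channel in cv2.split(hsv):                      # H, S, V channels in turn
        # 8 neighbours, radius 1; 'nri_uniform' maps the 256 patterns onto 59 uniform codes
        codes = local_binary_pattern(channel, P=8, R=1, method="nri_uniform")
        hist, _ = np.histogram(codes, bins=59, range=(0, 59), density=True)
        feature.append(hist)
    return np.concatenate(feature)                      # first texture feature (blur feature)
```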
In this embodiment, the classifying unit 32 is configured to extract a light reflection feature of the first image data, extract a color histogram feature of the first image data, and use the light reflection feature and the color histogram feature as the second texture feature; the classification unit 32 is configured to obtain a reflectivity image of the first image data, and obtain a reflection image based on the first image data and the reflectivity image; and carrying out blocking processing on the reflectivity image to obtain image blocking gray scale statistical parameters as the light reflection characteristics.
Specifically, the second texture feature characterizing the reflection attribute includes two parts: one describes the highlight regions of the image, i.e., the reflection feature; the other corresponds to the color chromaticity variation caused by differences in image reflectivity, i.e., the color histogram feature. Since the image in a secondary shot (for example, a printed photograph, an image displayed on a display screen and the like, which are included in non-living face attack modes, can be understood as secondary shots) is approximately planar, and its material is different from that of a real face, its color is prone to change. Specifically, for the first image data as an RGB image, a reflectance image of the first image data is obtained in the RGB color space, and a reflection image is obtained based on the first image data and the reflectance image; specifically, the reflection image is the difference between the first image data and its reflectance image. Further, the reflection image is divided into blocks, and the mean and delta of the image blocks are selected as the reflection feature; since the reflection image is specifically a gray-scale image, the mean and delta of the image blocks are specifically represented by the mean and delta of the gray values. In FIG. 6a, the left image is collected image data corresponding to a non-living face, and the right image is the reflection image obtained after the image data is processed; in fig. 6b, the left image is collected image data corresponding to a living face, and the right image is the reflection image obtained after the image data is processed.
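A minimal sketch of the reflection feature follows; the Retinex-style reflectance estimate (division by a smoothed illumination image) and the 4-by-4 block grid are assumptions, since the embodiment only states that a reflectance image is obtained in RGB space and that the reflection image is its difference from the input.

```python
import cv2
import numpy as np

def reflection_feature(image_bgr, grid=(4, 4)):
    """Sketch of the reflection (highlight) part of the second texture feature:
    estimate a reflectance image, take its difference from the input, split the
    resulting reflection image into blocks and keep per-block mean and spread."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) + 1.0
    illumination = cv2.GaussianBlur(gray, (0, 0), sigmaX=15)      # smooth illumination estimate
    reflectance = gray / illumination                             # assumed reflectance image
    reflectance = cv2.normalize(reflectance, None, 0, 255, cv2.NORM_MINMAX)
    reflection = cv2.absdiff(gray, reflectance)                   # reflection (highlight) image
    h, w = reflection.shape
    bh, bw = h // grid[0], w // grid[1]
    stats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = reflection[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            stats.extend([block.mean(), block.std()])             # per-block mean and spread
    return np.array(stats)
```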
For the color histogram feature, the H model data representing hue, the S model data representing saturation and the V model data representing lightness can be obtained from the HSV model data of the first image data, respectively. The H model data, the S model data and the V model data are each projected onto a 32-dimensional space, i.e., quantized into 32 bins per channel, giving a 32768-dimensional (32 x 32 x 32) color histogram. The 100 dimensions with the highest color histogram components are selected as the color histogram feature of the first image data.
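The color histogram part can be sketched as follows; reading the 32-bin quantization as a joint HSV histogram, and keeping the 100 largest histogram components, are interpretations of the text rather than confirmed implementation details.

```python
import cv2
import numpy as np

def color_histogram_feature(image_bgr, bins=32, keep=100):
    """Sketch of the color-histogram part of the second texture feature: a joint HSV
    histogram with 32 bins per channel (32**3 = 32768 dimensions), from which the
    100 largest components are kept."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                        [0, 180, 0, 256, 0, 256])                 # OpenCV HSV value ranges
    hist = cv2.normalize(hist, None).flatten()                    # 32768-dimensional histogram
    return np.sort(hist)[::-1][:keep]                             # 100 largest components
```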
In this embodiment, the classifying unit 32 is configured to perform filtering processing on the first image data to obtain first edge image data of the first image data; and carrying out LBP processing on the first edge image data to obtain fourth LBP characteristic data representing the third texture characteristic.
Specifically, to obtain the frame feature in the first image data, filtering processing is first performed on the first image data to obtain a first edge image corresponding to the first image data. As an embodiment, a Sobel operator (which may specifically include two sets of matrices, for lateral edge detection and longitudinal edge detection) may be convolved in the plane with the pixel values of the first image data, so as to obtain the first edge image corresponding to the first image data. Further, gray-scale processing is performed on the first edge image to obtain a gray image corresponding to the first edge image, and the relative gray relationship between each feature point in the gray image and its eight adjacent feature points is determined, for example on a three-by-three matrix of feature points; the gray level of each feature point is expressed numerically, the gray levels of the eight adjacent feature points are compared with the gray level of the central feature point, and if the gray level of an adjacent feature point is greater than that of the central feature point, the value of that adjacent feature point is recorded as 1; conversely, if the gray level of the adjacent feature point is less than or equal to that of the central feature point, its value is recorded as 0. Further, concatenating the values of the adjacent feature points yields an 8-bit binary string, which can be understood as a gray value distributed over (0, 255). In a specific implementation, as shown in fig. 4c, if the first feature point at the top left corner is used as the starting feature point and the values are arranged clockwise, the obtained 8-bit string is 10001111. A binary string corresponding to each feature point (as the central feature point) in the processed image can thus be obtained. Further, in order to remove redundancy, the binary strings in which the number of 0/1 transitions is smaller than 2 are counted among the binary strings corresponding to the feature points; for example, in the string 10001111, the first and second bits contain one 0/1 transition and the fourth and fifth bits contain another, two transitions in total, so the condition "fewer than two 0/1 transitions" is not satisfied. For another example, the string 00001111 contains a single 0/1 transition, between the fourth and fifth bits, and therefore satisfies the condition. The counted binary strings are then mapped into the range (0, 58), and the mapped data can be used as the fourth LBP feature data corresponding to the third texture feature; this also greatly reduces the amount of data processing. Because the other, smooth parts are filtered out, the fourth LBP feature data corresponding to the first edge image can highlight the edge parts of the image and describe the frame feature of the image.
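As an illustrative sketch of the third texture feature, the code below applies lateral and longitudinal Sobel filtering and then a 59-bin uniform-LBP histogram on the edge image; combining the two Sobel responses into a single edge-magnitude image is an assumption.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def frame_texture_feature(image_bgr):
    """Sketch of the third texture feature: a Sobel edge image followed by a 59-bin
    uniform-LBP histogram, so that only the edge (frame) structure contributes."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Lateral and longitudinal Sobel responses, combined into one edge magnitude image
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edge = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    codes = local_binary_pattern(edge, P=8, R=1, method="nri_uniform")
    hist, _ = np.histogram(codes, bins=59, range=(0, 59), density=True)
    return hist                                        # fourth LBP feature data
```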
The above technical solution performs texture feature extraction on the first image data based on the three characteristics. In this embodiment, the classification unit 32 acquires a large amount of sample data in advance, where the sample data may specifically include the first texture feature and its corresponding type (i.e., the blur type) extracted by the above texture feature extraction method, and/or the second texture feature and its corresponding type (i.e., the reflection type), and/or the third texture feature and its corresponding type (i.e., the frame type); that is, the sample data may include at least one of the three texture features together with its corresponding type. Machine learning training is performed on each type of texture feature to obtain a classification model corresponding to that type of texture feature. Specifically, a corresponding first classification model is obtained for the blur type. For example, as shown in fig. 5b, the first texture features obtained from image data marked in advance as the blur type each have stripe features, such as the diagonal stripes in the first and third images in fig. 5b, the approximately horizontal stripes in the second image, and so on; machine learning training may be performed based on the common features (e.g., the stripe features) in the first texture features corresponding to the blur type, to obtain the first classification model corresponding to the first texture feature. A corresponding second classification model is obtained for the reflection type, and a corresponding third classification model is obtained for the frame type.
In this embodiment, the classification unit 32 inputs the obtained texture features (including at least one of the first texture feature, the second texture feature and the third texture feature) into the classification model of the corresponding type to obtain the corresponding first-class parameters. For example, the obtained first texture feature is input into the first classification model corresponding to the blur type to obtain a first parameter corresponding to the first texture feature, the first parameter representing the degree of blur of the first image data; the obtained second texture feature is input into the second classification model corresponding to the reflection type to obtain a second parameter corresponding to the second texture feature, the second parameter representing the degree of reflection of the first image data; the obtained third texture feature is input into the third classification model corresponding to the frame type to obtain a third parameter corresponding to the third texture feature, the third parameter representing whether the first image data contains a frame. Further, a threshold is configured for each classification model; when an obtained parameter is not greater than the corresponding threshold, the person contained in the first image data is determined to be non-living, that is, the living body verification is determined not to pass; correspondingly, when the obtained parameters are greater than the corresponding thresholds, the statistical results of the three features described below are further combined for the subsequent fusion judgment. For example, when the first parameter is not greater than a first threshold, or the second parameter is not greater than a second threshold, or the third parameter is not greater than a third threshold, it is determined that the person included in the first image data is non-living, i.e., the living body verification does not pass.
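The embodiment does not name the classifier; purely as an illustration, the sketch below trains one support-vector classifier per texture-feature type on pre-extracted sample features and uses its probability output as the corresponding first-class parameter.

```python
import numpy as np
from sklearn.svm import SVC

def train_texture_classifier(features, labels):
    """Train one classifier for a given texture-feature type (blur, reflection or frame).
    features: N x D matrix of texture features; labels: 1 for live samples, 0 for attacks.
    The SVM choice and the probability scoring are assumptions."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(features, labels)
    return clf

def score_texture_feature(clf, feature_vector):
    """Return the classifier's live-face probability, used as the first-class parameter
    that is then compared with its per-classifier threshold."""
    return float(clf.predict_proba(np.asarray(feature_vector).reshape(1, -1))[0, 1])
```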
In this embodiment, the statistical unit 33 is configured to perform Gaussian filtering processing on the first image data to obtain Gaussian image data of the first image data; obtain difference image data based on the first image data and the Gaussian image data, and obtain gradient information of the difference image data as the fourth parameter.
Specifically, Gaussian filtering processing is performed on the first image data to obtain the Gaussian image data; and the gradient information of the difference image between the first image data and the Gaussian image data is counted as the fourth parameter.
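A minimal sketch of this fourth-parameter statistic follows (OpenCV assumed); using the mean Sobel gradient magnitude of the difference image as the gradient information is an interpretation of the text.

```python
import cv2
import numpy as np

def blur_statistic(image_bgr):
    """Sketch of the fourth parameter: Gaussian-blur the image, take the difference from
    the original, and use the gradient energy of that difference as a sharpness measure
    (a blurry re-shot image loses little detail under additional smoothing)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)              # Gaussian image data
    diff = gray - blurred                                     # difference image data
    gx = cv2.Sobel(diff, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(diff, cv2.CV_32F, 0, 1)
    return float(np.mean(cv2.magnitude(gx, gy)))              # gradient information (Blur)
```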
In this embodiment, the statistical unit 33 is configured to obtain a reflection image of the first image data; and carrying out binarization processing on the reflective image, partitioning the reflective image based on the image subjected to binarization processing, counting a first proportional relation of an area with brightness meeting a preset threshold value in each partitioned image in the corresponding partitioned image, and calculating the sum of the first proportional relations corresponding to all the partitioned images to serve as the fifth parameter.
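The fifth-parameter statistic can be sketched as follows; the block grid size and the brightness threshold used for binarization are illustrative assumptions.

```python
import cv2
import numpy as np

def reflection_statistic(reflection_image, grid=(4, 4), bright_threshold=200):
    """Sketch of the fifth parameter: binarize the reflection image, split it into blocks,
    compute per block the proportion of bright (specular) pixels, and sum the proportions."""
    gray = reflection_image.astype(np.uint8)
    _, binary = cv2.threshold(gray, bright_threshold, 255, cv2.THRESH_BINARY)
    h, w = binary.shape
    bh, bw = h // grid[0], w // grid[1]
    total = 0.0
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = binary[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            total += float(np.count_nonzero(block)) / block.size   # first proportional relation
    return total                                                   # the fifth parameter (Spec)
```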
In this embodiment, the statistical unit 33 is configured to identify a region where a human face in the first image data is located; performing edge detection processing on the first image data to obtain second edge image data, and identifying a first straight line with a length meeting a first preset condition in the second edge image data; and extracting a second straight line which is positioned outside the region where the face is positioned in the first straight line and has a slope meeting a second preset condition, and counting the number of the second straight lines to serve as the sixth parameter.
Specifically, the statistical unit 33 performs edge detection on the first image data. As an embodiment, the edge detection may be performed on the first image data by using a Canny edge detection algorithm, which may specifically include: first, converting the first image data (specifically, RGB image data) into a gray image, and performing Gaussian filtering on the gray image to remove image noise; then calculating image gradient information, and calculating the edge amplitude and direction of the image according to the gradient information; applying non-maximum suppression to the edge amplitude so that only the points with the maximum local amplitude change are retained, generating a refined edge; and adopting double-threshold edge detection and connecting the edges, so that the extracted edge points are more robust, thereby generating the second edge image data. Further, a Hough transform is performed on the second edge image data to find the straight lines in the second edge image data. Further, a first straight line whose length satisfies a first preset condition is identified among all the straight lines; as an embodiment, identifying the first straight line whose length satisfies the first preset condition includes: identifying, among all the straight lines, a straight line whose length exceeds half the width of the first image data as a first straight line. On the other hand, in the process of analyzing the first image data, the face in the first image data is detected to obtain the region where the face is located, and the edge of that region can be represented by the output face frame. The first straight lines are further screened to obtain, as second straight lines, those straight lines which are outside the region where the face is located and whose slope satisfies a second preset condition; wherein a second straight line whose slope satisfies the second preset condition includes: a straight line, among the first straight lines, which is outside the region where the face is located and whose angle with the straight line along the edge of that region does not exceed a preset angle; as an example, the preset angle is, for example, 30 degrees, though it is of course not limited to the listed example.
In this embodiment, the fusion unit 34 is configured to respectively obtain, by using a machine learning algorithm in advance, a first weight coefficient corresponding to the first parameter, a second weight coefficient corresponding to the second parameter, a third weight coefficient corresponding to the third parameter, a fourth weight coefficient corresponding to the fourth parameter, a fifth weight coefficient corresponding to the fifth parameter, and a sixth weight coefficient corresponding to the sixth parameter; obtaining a first product of the first parameter and the first weight coefficient, a second product of the second parameter and the second weight coefficient, a third product of the third parameter and the third weight coefficient, a fourth product of the fourth parameter and the fourth weight coefficient, a fifth product of the fifth parameter and the fifth weight coefficient, and a sixth product of the sixth parameter and the sixth weight coefficient; and adding the first product, the second product, the third product, the fourth product, the fifth product and the sixth product to obtain the fusion parameter.
Specifically, the first parameter obtained by the above processing is denoted Blur_s, the second parameter Spec_s, the third parameter Line_s, the fourth parameter Blur, the fifth parameter Spec, and the sixth parameter Line. Further, a machine learning algorithm can be used to learn the weight values, fitting is performed on the six-dimensional components respectively, and the obtained fusion parameter satisfies expression (14). Further, the obtained fusion parameter is compared with the preset third type threshold; when the fusion parameter is smaller than the third type threshold, the face is judged to be non-living, that is, the living body verification is determined not to pass; correspondingly, when the fusion parameter is not less than the third type threshold, the face is determined to be a living face, that is, the living body verification is determined to be passed.
In the embodiment of the present invention, the analysis unit 31, the classification unit 32, the statistical unit 33, and the fusion unit 34 in the living body verification device may, in practical application, be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), a Micro Control Unit (MCU), or a Field-Programmable Gate Array (FPGA) in the terminal.
An embodiment of the present invention further provides a living body verification device; an example of the living body verification device as a hardware entity is shown in fig. 11. The device comprises a processor 61, a storage medium 62, a camera 65 and at least one external communication interface 63; the processor 61, the storage medium 62, the camera 65 and the external communication interface 63 are all connected through a bus 64.
The living body verification method provided by the embodiment of the present invention can be integrated into the living body verification device in the form of an algorithm and an algorithm library in any format; in particular, it may be integrated into a client that can run on the living body verification device. In practical application, the algorithm can be packaged together with the client; when a user activates the client, that is, starts the living body verification function, the client calls the algorithm library and starts the camera, takes the image data acquired by the camera as source data, and performs the living body judgment according to the acquired source data.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (22)

1. A living body verification method, the method comprising:
acquiring first image data, analyzing the first image data, and acquiring texture features of the first image data;
obtaining a first class parameter corresponding to the texture feature based on a classification model;
obtaining a second type of parameters corresponding to the texture features in the first image data based on a statistical processing mode; the second type of parameter is different from the first type of parameter;
when the first type parameter is larger than a first type threshold value and the second type parameter is larger than a second type threshold value, determining a fusion parameter based on the first type parameter and the second type parameter;
when the fusion parameter is larger than a third type threshold, determining that the living body verification is passed;
wherein the obtaining the texture feature of the first image data comprises:
respectively obtaining a first texture feature, a second texture feature and a third texture feature of the first image data; the first texture feature characterizes a blur feature of the first image data; the second texture features characterize reflective features of the first image data; the third texture features characterize border features of the first image data; the frame feature is a straight line feature presented in the first image data in a region outside the region where the face is located;
the obtaining of the first class parameters corresponding to the texture features based on the classification model includes: obtaining a first parameter corresponding to the first texture feature based on a first pre-configured classification model, obtaining a second parameter corresponding to the second texture feature based on a second pre-configured classification model, and obtaining a third parameter corresponding to the third texture feature based on a third pre-configured classification model;
the obtaining of the second type of parameters corresponding to the texture features in the first image data based on the statistical processing mode includes: and counting a fourth parameter corresponding to the first texture feature, a fifth parameter corresponding to the second texture feature and a sixth parameter corresponding to the third texture feature in the first image data.
2. The method of claim 1, further comprising: and when the first type parameter is judged not to be larger than the first type threshold, or the second type parameter is not larger than the second type threshold, or the fusion parameter is not larger than the third type threshold, determining that the living body verification is not passed.
3. The method of claim 1, wherein determining a fusion parameter based on the first class parameter and the second class parameter when the first class parameter is greater than a first class threshold and the second class parameter is greater than a second class threshold comprises:
determining a fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter, and the sixth parameter when the first parameter is greater than a first threshold, the second parameter is greater than a second threshold, the third parameter is greater than a third threshold, the fourth parameter is greater than a fourth threshold, the fifth parameter is greater than a fifth threshold, and the sixth parameter is greater than a sixth threshold.
4. The method of claim 2, wherein determining that the first type parameter is not greater than the first type threshold, or the second type parameter is not greater than the second type threshold, or the fusion parameter is not greater than the third type threshold, comprises:
and determining that the living body verification is not passed when the first parameter is not more than a first threshold, or the second parameter is not more than a second threshold, or the third parameter is not more than a third threshold, or the fourth parameter is not more than a fourth threshold, or the fifth parameter is not more than a fifth threshold, or the sixth parameter is not more than a sixth threshold, or the fusion parameter is not more than the third type threshold.
5. The method of claim 1, wherein obtaining the first texture feature of the first image data comprises:
converting the first image data into hue saturation HSV model data; performing Local Binary Pattern (LBP) processing on the HSV model data, respectively obtaining first LBP characteristic data corresponding to hue data, second LBP characteristic data corresponding to saturation data and third LBP characteristic data corresponding to brightness data, and taking the first LBP characteristic data, the second LBP characteristic data and the third LBP characteristic data as the first texture characteristics.
6. The method of claim 1, wherein obtaining the second texture feature of the first image data comprises:
extracting the light reflection feature of the first image data, extracting the color histogram feature of the first image data, and taking the light reflection feature and the color histogram feature as the second texture feature;
wherein the extracting of the reflective feature of the first image data includes: obtaining a reflectivity image of the first image data, and obtaining a reflection image based on the first image data and the reflectivity image; and carrying out blocking processing on the reflectivity image to obtain image blocking gray scale statistical parameters as the light reflection characteristics.
7. The method of claim 1, wherein obtaining third texture features of the first image data comprises:
filtering the first image data to obtain first edge image data of the first image data;
and carrying out LBP processing on the first edge image data to obtain fourth LBP characteristic data representing the third texture characteristic.
8. The method of claim 1, wherein the counting fourth parameters corresponding to the first texture feature in the first image data comprises:
performing Gaussian filtering processing on the first image data to obtain Gaussian image data of the first image data;
obtaining difference image data based on the first image data and the Gaussian image data, and obtaining gradient information of the difference image data as the fourth parameter.
9. The method of claim 1, wherein counting fifth parameters corresponding to the second texture features in the first image data comprises:
obtaining a reflected light image of the first image data; and carrying out binarization processing on the reflective image, partitioning the reflective image based on the image subjected to binarization processing, counting a first proportional relation of an area with brightness meeting a preset threshold value in each partitioned image in the corresponding partitioned image, and calculating the sum of the first proportional relations corresponding to all the partitioned images to serve as the fifth parameter.
10. The method of claim 1, wherein counting a sixth parameter corresponding to the third texture feature in the first image data comprises:
identifying a region where a human face in the first image data is located;
performing edge detection processing on the first image data to obtain second edge image data, and identifying a first straight line with a length meeting a first preset condition in the second edge image data;
and extracting a second straight line which is positioned outside the region where the face is positioned in the first straight line and has a slope meeting a second preset condition, and counting the number of the second straight lines to serve as the sixth parameter.
11. The method of claim 3, wherein determining a fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter, and the sixth parameter comprises:
respectively obtaining a first weight coefficient corresponding to the first parameter, a second weight coefficient corresponding to the second parameter, a third weight coefficient corresponding to the third parameter, a fourth weight coefficient corresponding to the fourth parameter, a fifth weight coefficient corresponding to the fifth parameter and a sixth weight coefficient corresponding to the sixth parameter by adopting a machine learning algorithm in advance;
obtaining a first product of the first parameter and the first weight coefficient, a second product of the second parameter and the second weight coefficient, a third product of the third parameter and the third weight coefficient, a fourth product of the fourth parameter and the fourth weight coefficient, a fifth product of the fifth parameter and the fifth weight coefficient, and a sixth product of the sixth parameter and the sixth weight coefficient;
and adding the first product, the second product, the third product, the fourth product, the fifth product and the sixth product to obtain the fusion parameter.
12. A living body verification device, the device comprising: an analysis unit, a classification unit, a statistical unit and a fusion unit; wherein,
the analysis unit is used for obtaining first image data and analyzing the first image data;
the classification unit is used for obtaining the texture features of the first image data; obtaining a first class parameter corresponding to the texture feature based on a classification model;
the statistical unit is used for obtaining a second type of parameters corresponding to the texture features in the first image data based on a statistical processing mode; the second type of parameter is different from the first type of parameter;
the fusion unit is used for judging whether the first type parameter is larger than a first type threshold value or not and judging whether the second type parameter is larger than a second type threshold value or not; when the first type parameter is larger than a first type threshold value and the second type parameter is larger than a second type threshold value, determining a fusion parameter based on the first type parameter and the second type parameter; when the fusion parameter is larger than a third type threshold, determining that the living body verification is passed;
wherein the obtaining the texture feature of the first image data comprises:
respectively obtaining a first texture feature, a second texture feature and a third texture feature of the first image data; the first texture feature characterizes a blur feature of the first image data; the second texture features characterize reflective features of the first image data; the third texture features characterize border features of the first image data; the frame feature is a straight line feature presented in the first image data in a region outside the region where the face is located;
the obtaining of the first class parameters corresponding to the texture features based on the classification model includes: obtaining a first parameter corresponding to the first texture feature based on a first pre-configured classification model, obtaining a second parameter corresponding to the second texture feature based on a second pre-configured classification model, and obtaining a third parameter corresponding to the third texture feature based on a third pre-configured classification model;
the obtaining of the second type of parameters corresponding to the texture features in the first image data based on the statistical processing mode includes: and counting a fourth parameter corresponding to the first texture feature, a fifth parameter corresponding to the second texture feature and a sixth parameter corresponding to the third texture feature in the first image data.
13. The device according to claim 12, wherein the fusion unit is further configured to determine that the living body authentication is not passed when it is determined that the first type parameter is not greater than the first type threshold, or the second type parameter is not greater than the second type threshold, or the fusion parameter is not greater than the third type threshold.
14. The apparatus of claim 12, wherein the fusion unit is configured to determine a fusion parameter based on the first parameter, the second parameter, the third parameter, the fourth parameter, the fifth parameter, and the sixth parameter when it is determined that the first parameter is greater than a first threshold, the second parameter is greater than a second threshold, the third parameter is greater than a third threshold, the fourth parameter is greater than a fourth threshold, the fifth parameter is greater than a fifth threshold, and the sixth parameter is greater than a sixth threshold.
15. The device of claim 13, wherein the fusion unit is further configured to determine that the living body verification is not passed when the first parameter is not greater than a first threshold, or the second parameter is not greater than a second threshold, or the third parameter is not greater than a third threshold, or the fourth parameter is not greater than a fourth threshold, or the fifth parameter is not greater than a fifth threshold, or the sixth parameter is not greater than a sixth threshold, or the fusion parameter is not greater than the third type threshold.
16. The apparatus according to claim 12, characterized by said classification unit for converting the first image data into HSV model data; carrying out LBP processing on the HSV model data, respectively obtaining first LBP characteristic data corresponding to hue data, second LBP characteristic data corresponding to saturation data and third LBP characteristic data corresponding to brightness data, and taking the first LBP characteristic data, the second LBP characteristic data and the third LBP characteristic data as the first texture characteristic.
17. The apparatus according to claim 12, wherein the classifying unit is configured to extract a reflection feature of the first image data and extract a color histogram feature of the first image data, and use the reflection feature and the color histogram feature as the second texture feature;
the classification unit is used for obtaining a reflectivity image of the first image data and obtaining a reflection image based on the first image data and the reflectivity image; and carrying out blocking processing on the reflectivity image to obtain image blocking gray scale statistical parameters as the light reflection characteristics.
18. The apparatus according to claim 12, wherein the classifying unit is configured to perform filtering processing on the first image data to obtain first edge image data of the first image data; and carrying out LBP processing on the first edge image data to obtain fourth LBP characteristic data representing the third texture characteristic.
19. The apparatus according to claim 12, wherein the statistical unit is configured to perform gaussian filtering processing on the first image data to obtain gaussian image data of the first image data; obtaining difference image data based on the first image data and the Gaussian image data, and obtaining gradient information of the difference image data as the fourth parameter.
20. The apparatus according to claim 12, wherein the statistical unit is configured to obtain a reflectance image of the first image data; and carrying out binarization processing on the reflective image, partitioning the reflective image based on the image subjected to binarization processing, counting a first proportional relation of an area with brightness meeting a preset threshold value in each partitioned image in the corresponding partitioned image, and calculating the sum of the first proportional relations corresponding to all the partitioned images to serve as the fifth parameter.
21. The apparatus according to claim 12, wherein the statistical unit is configured to identify a region where a human face is located in the first image data; performing edge detection processing on the first image data to obtain second edge image data, and identifying a first straight line with a length meeting a first preset condition in the second edge image data; and extracting a second straight line which is positioned outside the region where the face is positioned in the first straight line and has a slope meeting a second preset condition, and counting the number of the second straight lines to serve as the sixth parameter.
22. The apparatus according to claim 13, wherein the fusion unit is configured to obtain a first weight coefficient corresponding to the first parameter, a second weight coefficient corresponding to the second parameter, a third weight coefficient corresponding to the third parameter, a fourth weight coefficient corresponding to the fourth parameter, a fifth weight coefficient corresponding to the fifth parameter, and a sixth weight coefficient corresponding to the sixth parameter by using a machine learning algorithm in advance; obtaining a first product of the first parameter and the first weight coefficient, a second product of the second parameter and the second weight coefficient, a third product of the third parameter and the third weight coefficient, a fourth product of the fourth parameter and the fourth weight coefficient, a fifth product of the fifth parameter and the fifth weight coefficient, and a sixth product of the sixth parameter and the sixth weight coefficient; and adding the first product, the second product, the third product, the fourth product, the fifth product and the sixth product to obtain the fusion parameter.
CN201710175495.5A 2017-03-22 2017-03-22 A kind of living body verification method and equipment Active CN106951869B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710175495.5A CN106951869B (en) 2017-03-22 2017-03-22 A kind of living body verification method and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710175495.5A CN106951869B (en) 2017-03-22 2017-03-22 A kind of living body verification method and equipment

Publications (2)

Publication Number Publication Date
CN106951869A CN106951869A (en) 2017-07-14
CN106951869B true CN106951869B (en) 2019-03-15

Family

ID=59472685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710175495.5A Active CN106951869B (en) 2017-03-22 2017-03-22 A kind of living body verification method and equipment

Country Status (1)

Country Link
CN (1) CN106951869B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609463B (en) * 2017-07-20 2021-11-23 百度在线网络技术(北京)有限公司 Living body detection method, living body detection device, living body detection equipment and storage medium
CN107609494A (en) * 2017-08-31 2018-01-19 北京飞搜科技有限公司 A kind of human face in-vivo detection method and system based on silent formula
CN108304708A (en) * 2018-01-31 2018-07-20 广东欧珀移动通信有限公司 Mobile terminal, face unlocking method and related product
CN109145716B (en) * 2018-07-03 2019-04-16 南京思想机器信息科技有限公司 Boarding gate verifying bench based on face recognition
CN109558794B (en) * 2018-10-17 2024-06-28 平安科技(深圳)有限公司 Moire-based image recognition method, device, equipment and storage medium
CN111178112B (en) * 2018-11-09 2023-06-16 株式会社理光 Face recognition device
CN109815960A (en) * 2018-12-21 2019-05-28 深圳壹账通智能科技有限公司 Reproduction image-recognizing method, device, equipment and medium based on deep learning
CN109740572B (en) * 2019-01-23 2020-09-29 浙江理工大学 Human face living body detection method based on local color texture features
CN110263708B (en) * 2019-06-19 2020-03-13 郭玮强 Image source identification method, device and computer readable storage medium
CN113221842B (en) * 2021-06-04 2023-12-29 第六镜科技(北京)集团有限责任公司 Model training method, image recognition method, device, equipment and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105389553A (en) * 2015-11-06 2016-03-09 北京汉王智远科技有限公司 Living body detection method and apparatus
CN105243376A (en) * 2015-11-06 2016-01-13 北京汉王智远科技有限公司 Living body detection method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104778457A (en) * 2015-04-18 2015-07-15 吉林大学 Video face identification algorithm on basis of multi-instance learning
CN105389554A (en) * 2015-11-06 2016-03-09 北京汉王智远科技有限公司 Face-identification-based living body determination method and equipment

Also Published As

Publication number Publication date
CN106951869A (en) 2017-07-14

Similar Documents

Publication Publication Date Title
CN106951869B (en) A kind of living body verification method and equipment
JP6778247B2 (en) Image and feature quality for eye blood vessels and face recognition, image enhancement and feature extraction, and fusion of eye blood vessels with facial and / or subface regions for biometric systems
US8682029B2 (en) Rule-based segmentation for objects with frontal view in color images
CN109086718A (en) Biopsy method, device, computer equipment and storage medium
CN110084135A (en) Face identification method, device, computer equipment and storage medium
CN108416291B (en) Face detection and recognition method, device and system
CN107316029B (en) A kind of living body verification method and equipment
CN110390643B (en) License plate enhancement method and device and electronic equipment
CN110059607B (en) Living body multiplex detection method, living body multiplex detection device, computer equipment and storage medium
JP2020518879A (en) Detection system, detection device and method thereof
JPWO2017061106A1 (en) Information processing apparatus, image processing system, image processing method, and program
CN111832464A (en) Living body detection method and device based on near-infrared camera
CN115082992A (en) Face living body detection method and device, electronic equipment and readable storage medium
CN113743378A (en) Fire monitoring method and device based on video
CN114529958A (en) Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
CN113963391A (en) Silent in-vivo detection method and system based on binocular camera
KR102669584B1 Method and device for detecting animal biometric information
Ige et al. Exploring Face Recognition under Complex Lighting Conditions with HDR Imaging.
Popoola et al. Development of a face detection algorithm based on skin segmentation and facial feature extraction
CN116052237A (en) Face recognition method and device based on short video
BR122018007964B1 (en) METHOD IMPLEMENTED BY COMPUTER AND SYSTEM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant