CN108399401A - Method and apparatus for detecting facial image - Google Patents
Method and apparatus for detecting facial image
- Publication number
- CN108399401A CN108399401A CN201810256946.2A CN201810256946A CN108399401A CN 108399401 A CN108399401 A CN 108399401A CN 201810256946 A CN201810256946 A CN 201810256946A CN 108399401 A CN108399401 A CN 108399401A
- Authority
- CN
- China
- Prior art keywords
- facial image
- living body
- living-body face
- living-body detection
- detection model
- Legal status: Granted (status is an assumption, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Abstract
Embodiments of the present application disclose a method and apparatus for detecting a facial image. One specific implementation of the method includes: obtaining a to-be-detected facial image of a target face; and inputting the to-be-detected facial image into a pre-trained first living-body detection model to obtain a detection result, where the first living-body detection model is used to determine whether the face corresponding to an input facial image is a living-body face, and each training sample of the first living-body detection model consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected. This embodiment improves the flexibility of living-body detection.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for detecting a facial image.
Background
With the development of computer technology, face recognition technology is used more and more widely. In the field of face recognition, the biological characteristics of an organism can not only distinguish one organism from another, but also indicate its physical state. For example, an image of an organism can be used to determine whether the organism is a living body, and such an image can also serve as the basis for unlocking a device.

However, by taking photographs or recording videos, a non-living facial image or non-living facial video of a living-body face can be obtained. Such non-living facial images and videos can be used to impersonate an identity and thereby carry out improper activities that damage the interests of others. In many industries (for example, the financial industry), face recognition technology has gradually been applied to remote account opening, withdrawal and payment for identity verification, so the accuracy of living-body detection results often concerns the vital interests of users.
Summary of the invention
Embodiments of the present application propose a method and apparatus for detecting a facial image.

In a first aspect, an embodiment of the present application provides a method for detecting a facial image. The method includes: obtaining a to-be-detected facial image of a target face; and inputting the to-be-detected facial image into a pre-trained first living-body detection model to obtain a detection result, where the first living-body detection model is used to determine whether the face corresponding to an input facial image is a living-body face, and each training sample of the first living-body detection model consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected.

In some embodiments, the first living-body detection model is obtained by training as follows: obtaining a training sample set of the first living-body detection model, where each training sample consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected; and training, using a machine learning algorithm, the first living-body detection model with the facial image included in each training sample of the training sample set as input and the detection result characterizing that a living-body face is detected as output.

In some embodiments, the first living-body detection model includes a pre-trained second living-body detection model, and inputting the to-be-detected facial image into the pre-trained first living-body detection model to obtain a detection result includes: extracting feature data of the to-be-detected facial image; and inputting the feature data into the second living-body detection model to obtain the detection result, where the second living-body detection model is used to determine whether the face corresponding to the input feature data is a living-body face, and each training sample of the second living-body detection model consists of feature data of a facial image of a living-body face and a detection result characterizing that a living-body face is detected.

In some embodiments, the first living-body detection model includes a pre-trained first probabilistic model, and inputting the to-be-detected facial image into the pre-trained first living-body detection model to obtain a detection result includes: inputting the to-be-detected facial image into the first probabilistic model to obtain the probability that the target face is a living-body face, where the first probabilistic model is used to determine the probability that the face corresponding to a to-be-detected facial image is a living-body face, and each training sample of the first probabilistic model consists of a facial image of a living-body face and the probability that the face corresponding to the facial image is a living-body face; and generating the detection result based on the magnitude relationship between the probability and a preset probability threshold.

In some embodiments, the first living-body detection model includes a pre-trained second probabilistic model, and inputting the to-be-detected facial image into the pre-trained first living-body detection model to obtain a detection result includes: extracting feature data of the to-be-detected facial image; inputting the feature data into the second probabilistic model to obtain the probability that the target face is a living-body face, where the second probabilistic model is used to determine the probability that the face corresponding to the feature data is a living-body face, and each training sample of the second probabilistic model consists of feature data of a facial image of a living-body face and the probability that the face corresponding to the facial image is a living-body face; and generating the detection result based on the magnitude relationship between the probability and a preset probability threshold.

In some embodiments, the first living-body detection model is a one-class support vector machine.
In a second aspect, an embodiment of the present application provides an apparatus for detecting a facial image. The apparatus includes: an acquiring unit configured to obtain a to-be-detected facial image of a target face; and an input unit configured to input the to-be-detected facial image into a pre-trained first living-body detection model to obtain a detection result, where the first living-body detection model is used to determine whether the face corresponding to an input facial image is a living-body face, and each training sample of the first living-body detection model consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected.

In some embodiments, the first living-body detection model is obtained by training as follows: obtaining a training sample set of the first living-body detection model, where each training sample consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected; and training, using a machine learning algorithm, the first living-body detection model with the facial image included in each training sample of the training sample set as input and the detection result characterizing that a living-body face is detected as output.

In some embodiments, the first living-body detection model includes a pre-trained second living-body detection model, and the input unit includes: an extraction module configured to extract feature data of the to-be-detected facial image; and an input module configured to input the feature data into the second living-body detection model to obtain the detection result, where the second living-body detection model is used to determine whether the face corresponding to the input feature data is a living-body face, and each training sample of the second living-body detection model consists of feature data of a facial image of a living-body face and a detection result characterizing that a living-body face is detected.

In some embodiments, the first living-body detection model includes a pre-trained first probabilistic model, and the input unit includes: an input module configured to input the to-be-detected facial image into the first probabilistic model to obtain the probability that the target face is a living-body face, where the first probabilistic model is used to determine the probability that the face corresponding to a to-be-detected facial image is a living-body face, and each training sample of the first probabilistic model consists of a facial image of a living-body face and the probability that the face corresponding to the facial image is a living-body face; and a generation module configured to generate the detection result based on the magnitude relationship between the probability and a preset probability threshold.

In some embodiments, the first living-body detection model includes a pre-trained second probabilistic model, and the input unit includes: an extraction module configured to extract feature data of the to-be-detected facial image; an input module configured to input the feature data into the second probabilistic model to obtain the probability that the target face is a living-body face, where the second probabilistic model is used to determine the probability that the face corresponding to the feature data is a living-body face, and each training sample of the second probabilistic model consists of feature data of a facial image of a living-body face and the probability that the face corresponding to the facial image is a living-body face; and a generation module configured to generate the detection result based on the magnitude relationship between the probability and a preset probability threshold.

In some embodiments, the first living-body detection model is a one-class support vector machine.
In a third aspect, an embodiment of the present application provides a server for detecting a facial image, including: one or more processors; and a storage device for storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of the above embodiments of the method for detecting a facial image.

In a fourth aspect, an embodiment of the present application provides a computer-readable medium for detecting a facial image, on which a computer program is stored, where the program, when executed by a processor, implements the method of any of the above embodiments of the method for detecting a facial image.

According to the method and apparatus for detecting a facial image provided by the embodiments of the present application, a to-be-detected facial image of a target face is obtained, and the to-be-detected facial image is then input into a pre-trained first living-body detection model to obtain a detection result, where the first living-body detection model is used to determine whether the face corresponding to an input facial image is a living-body face, and each training sample of the first living-body detection model consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected, thereby improving the flexibility of living-body detection.
Description of the drawings
Other features, objects and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for detecting a facial image according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for detecting a facial image according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for detecting a facial image according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for detecting a facial image according to the present application;
Fig. 6 is a structural schematic diagram of a computer system adapted to implement a server of the embodiments of the present application.
Detailed description of the embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the related invention, and are not intended to limit the invention. It should also be noted that, for convenience of description, only the parts related to the invention are shown in the drawings.

It should be noted that, in the case of no conflict, the embodiments of the present application and the features of the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which an embodiment of the method for detecting a facial image or the apparatus for detecting a facial image of the present application may be applied.

As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102 and 103 and the server 105, and may include various connection types, such as wired or wireless communication links or fiber-optic cables.
A user may use the terminal devices 101, 102 and 103 to interact with the server 105 through the network 104 to receive or send data (for example, a to-be-detected facial image). Cameras may be installed on the terminal devices 101, 102 and 103 to image living-body or non-living-body faces and generate facial images. Various client applications for data transmission, such as instant messaging tools and social platform software, may also be installed on the terminal devices 101, 102 and 103; through these client applications, the terminal devices 101, 102 and 103 may send facial images to other devices (such as the server 105).
The terminal devices 101, 102 and 103 may be hardware or software. When they are hardware, they may be various electronic devices having an imaging function and supporting data transmission, including but not limited to smartphones, tablet computers, laptop computers and desktop computers. When they are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, for providing distributed services) or as a single piece of software or software module, which is not specifically limited herein.
The server 105 may be a server providing various services, for example a living-body detection server that performs processing such as living-body face detection on the facial images sent by the terminal devices 101, 102 and 103. The living-body detection server may perform processing such as living-body detection on the received data (e.g., facial images), and feed back the processing result (e.g., a detection result characterizing that a living-body face or a non-living-body face is detected) to the terminal device.
It should be noted that the method for detecting a facial image provided by the embodiments of the present application is generally executed by the server 105, and accordingly, the apparatus for detecting a facial image is generally disposed in the server 105.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, for providing distributed services), or as a single piece of software or software module, which is not specifically limited herein.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks and servers may be provided according to implementation requirements, and other devices (such as imaging devices and storage devices) may also be added. When the electronic device on which the information processing method runs does not need to transmit data with other devices, the system architecture may not include the network.
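The patent provides no source code; purely as an editorial illustration of the client-server flow of Fig. 1, the following Python sketch shows a server receiving a to-be-detected facial image from a terminal device and returning a detection result. The endpoint path, the `liveness` module and `load_first_detection_model` are hypothetical assumptions, not part of the disclosure.

```python
# Editorial sketch of the Fig. 1 flow: a terminal device posts a facial image,
# the server applies the (hypothetical) first living-body detection model and
# returns the detection result.
import io

from flask import Flask, request, jsonify
from PIL import Image

from liveness import load_first_detection_model  # hypothetical module

app = Flask(__name__)
model = load_first_detection_model()  # assumed pre-trained liveness model


@app.route("/detect", methods=["POST"])
def detect():
    image = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    is_live = model.predict(image)  # True if a living-body face is detected
    return jsonify({"living_body_face": bool(is_live)})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```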
Continuing to refer to Fig. 2, a flow 200 of one embodiment of the method for detecting a facial image according to the present application is shown. The method for detecting a facial image includes the following steps:
Step 201: obtain a to-be-detected facial image of a target face.
In this embodiment, the execution body of the method for detecting a facial image (for example, the server shown in Fig. 1) may obtain the to-be-detected facial image of the target face from another electronic device (for example, a terminal device shown in Fig. 1) through a wired or wireless connection, or locally. The target face may be a living-body face or a non-living-body face. A living-body face is the face of a living person; a non-living-body face is a face not belonging to a living person, and may be presented in the form of an image, a video, a face mask, a mannequin or the like. Specifically, a non-living-body face may be an image containing a face, a video containing a face, and so on. The to-be-detected facial image is a facial image used for detecting whether the target face is a living-body face.

In practice, the to-be-detected facial image may be obtained by imaging the target face with an imaging device while the target face is irradiated by various light sources, and may be presented in the form of an image, a video or the like.
Step 202: input the to-be-detected facial image into a pre-trained first living-body detection model to obtain a detection result.
In this embodiment, based on the to-be-detected facial image obtained in step 201, the execution body may input the to-be-detected facial image into the pre-trained first living-body detection model to obtain a detection result. The first living-body detection model is used to determine whether the face corresponding to an input facial image is a living-body face, and each of its training samples consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected. Illustratively, the first living-body detection model may be a model based on the principal component analysis (PCA) algorithm or a model based on sparse representations. The training of PCA-based and sparse-representation-based models is well known to those skilled in the art and is not described here again.
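The patent does not prescribe an implementation; as an editorial sketch of how a PCA-based model trained only on living-body faces could yield a detection result, the snippet below fits a PCA subspace to flattened live-face images and treats a large reconstruction error as evidence of a non-living-body face. The component count and threshold are assumed values.

```python
# Sketch: PCA fitted on living-body faces only; a large reconstruction error
# suggests the input does not come from the live-face distribution.
import numpy as np
from sklearn.decomposition import PCA


def train_pca_liveness(live_faces: np.ndarray, n_components: int = 50) -> PCA:
    """live_faces: array of shape (n_samples, n_pixels), flattened grayscale faces."""
    pca = PCA(n_components=n_components)
    pca.fit(live_faces)
    return pca


def reconstruction_error(pca: PCA, faces: np.ndarray) -> np.ndarray:
    reconstructed = pca.inverse_transform(pca.transform(faces))
    return np.linalg.norm(faces - reconstructed, axis=1)


def detect(pca: PCA, face: np.ndarray, threshold: float = 8.0) -> bool:
    """Return True when a living-body face is detected (threshold is an assumption)."""
    return reconstruction_error(pca, face.reshape(1, -1))[0] < threshold
```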
It should be noted that, in practice, non-living-body faces typically appear in diverse forms, including but not limited to images, videos, face masks and mannequins, and these forms are usually hard to enumerate exhaustively, whereas the appearance of living-body faces is relatively fixed. Therefore, using training samples composed of facial images of living-body faces and detection results characterizing that a living-body face is detected as the training samples of the first living-body detection model helps to reduce the complexity of constructing the training samples, helps to save training time, and can still ensure the accuracy of the detection results.
In some optional implementations of this embodiment, the first living-body detection model may be a one-class support vector machine (One-Class SVM).

In practice, training a traditional classification model usually requires determining training samples for each class, and the number of training samples per class should be relatively balanced; not every model can therefore be trained on imbalanced data. The use of a one-class support vector machine solves the problem of obtaining a highly accurate classifier from imbalanced training samples (for example, when positive and negative samples are severely imbalanced).
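As an editorial sketch of this optional implementation, the following trains scikit-learn's OneClassSVM on feature vectors of living-body faces only; the kernel and nu settings are illustrative assumptions, not values from the patent.

```python
# Sketch: one-class SVM trained only on living-body face samples (positives).
import numpy as np
from sklearn.svm import OneClassSVM


def train_one_class_svm(live_face_features: np.ndarray) -> OneClassSVM:
    """live_face_features: (n_samples, n_features) built from live faces only."""
    model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
    model.fit(live_face_features)
    return model


def is_living_body(model: OneClassSVM, features: np.ndarray) -> bool:
    # predict() returns +1 for inliers (living-body faces) and -1 for outliers
    # (non-living-body faces).
    return model.predict(features.reshape(1, -1))[0] == 1
```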
In some optional implementations of this embodiment, the first living-body detection model is obtained by training as follows: obtaining a training sample set of the first living-body detection model, where each training sample consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected; and training, using a machine learning algorithm, the first living-body detection model with the facial image included in each training sample as input and the detection result characterizing that a living-body face is detected as output.
The above training step for obtaining the first living-body detection model may be executed as follows:

First, the execution body or another electronic device may obtain the training sample set of the first living-body detection model, where each training sample consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected.

Then, the execution body or the other electronic device may, using a machine learning algorithm, train an initial model with the facial image included in each training sample as input and the detection result characterizing that a living-body face is detected as output, so as to obtain the first living-body detection model. The initial model may be a one-class support vector machine, a model based on the projection-combined principal component analysis (PCPCA) algorithm, or a model based on the singular-value-perturbed principal component analysis (SPCA) algorithm.
In some optional implementations of this embodiment, the first living-body detection model may also be obtained by training as follows:

First, a technician may obtain a large number of training samples, where each training sample consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected.

Then, the technician may sort and count the facial image of each living-body face and the detection result characterizing that a living-body face is detected, store them in association in a two-dimensional table or a database, and use the two-dimensional table or database in which the facial images and the detection results are stored in association as the first living-body detection model.
In some optional implementations of this embodiment, the first living-body detection model includes a pre-trained second living-body detection model, and inputting the to-be-detected facial image into the pre-trained first living-body detection model to obtain a detection result includes: extracting feature data of the to-be-detected facial image; and inputting the feature data into the second living-body detection model to obtain the detection result, where the second living-body detection model is used to determine whether the face corresponding to the input feature data is a living-body face, and each of its training samples consists of feature data of a facial image of a living-body face and a detection result characterizing that a living-body face is detected. Illustratively, the second living-body detection model may be a model based on the principal component analysis algorithm or a model based on sparse representations. The feature data may include, but is not limited to, at least one of the following: texture information, brightness information, edge information, material information and color information.
Optionally, the execution body may extract the feature data of the to-be-detected facial image using a variety of methods, including but not limited to: image feature extraction based on a convolutional neural network (CNN) model, image feature extraction based on a deep neural network (DNN), the Fourier transform, the windowed Fourier transform, the wavelet transform, the least squares method, the edge direction histogram method, and texture feature extraction based on Tamura texture features.
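As a hedged illustration of the feature-extraction step, the snippet below builds a small feature vector from brightness, edge and color statistics with OpenCV; the specific statistics are editorial choices, not the feature set prescribed by the patent.

```python
# Sketch: hand-crafted feature vector (brightness, edge strength, color
# histogram) for a to-be-detected facial image; the statistics are illustrative.
import cv2
import numpy as np


def extract_features(bgr_face: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2GRAY)

    # Brightness information: mean and standard deviation of intensity.
    brightness = [gray.mean(), gray.std()]

    # Edge information: variance of the Laplacian (often low for flat photo replays).
    edge_strength = [cv2.Laplacian(gray, cv2.CV_64F).var()]

    # Color information: normalized hue histogram.
    hsv = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2HSV)
    hue_hist = cv2.calcHist([hsv], [0], None, [16], [0, 180]).flatten()
    hue_hist = hue_hist / (hue_hist.sum() + 1e-8)

    return np.concatenate([brightness, edge_strength, hue_hist]).astype(np.float32)
```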
In some optional implementations of this embodiment, the second living-body detection model may be a one-class support vector machine.
The above training step for obtaining the second living-body detection model may be executed as follows:

First, the execution body or another electronic device may obtain the training sample set of the second living-body detection model, where each training sample consists of feature data of a facial image of a living-body face and a detection result characterizing that a living-body face is detected.

Then, the execution body or the other electronic device may, using a machine learning algorithm, train an initial model with the feature data of the facial image included in each training sample as input and the detection result characterizing that a living-body face is detected as output, so as to obtain the second living-body detection model. The initial model may be a one-class support vector machine, a model based on the projection-combined principal component analysis algorithm, or a model based on the singular-value-perturbed principal component analysis algorithm.
In some optional implementations of this embodiment, the second living-body detection model may also be obtained by training as follows:

First, a technician may obtain a large number of training samples, where each training sample consists of feature data of a facial image of a living-body face and a detection result characterizing that a living-body face is detected.

Then, the technician may sort and count the feature data of the facial image of each living-body face and the detection result characterizing that a living-body face is detected, store them in association in a two-dimensional table or a database, and use the two-dimensional table or database in which the feature data and the detection results are stored in association as the second living-body detection model.
In some optional implementations of this embodiment, the first living-body detection model includes a pre-trained first probabilistic model, and inputting the to-be-detected facial image into the pre-trained first living-body detection model to obtain a detection result includes: inputting the to-be-detected facial image into the first probabilistic model to obtain the probability that the target face is a living-body face, where the first probabilistic model is used to determine the probability that the face corresponding to a to-be-detected facial image is a living-body face, and each of its training samples consists of a facial image of a living-body face and the probability that the face corresponding to the facial image is a living-body face; and generating the detection result based on the magnitude relationship between the probability and a preset probability threshold. The probability may be labeled manually or determined by the execution body or another device. The probability threshold may be determined by a technician, for example 90% or 80%. The first probabilistic model may be a model based on the principal component analysis algorithm or a model based on sparse representations.
In some optional implementations of this embodiment, the first probabilistic model may be a one-class support vector machine.

In practice, the probability can typically be obtained from a parameter of the one-class support vector machine (for example, its decision score), based on which the electronic device can determine the probability.
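A sketch of this probability-and-threshold path, under the assumption that the one-class SVM's decision score is mapped to a pseudo-probability with a logistic function; the mapping and the 0.9 threshold are editorial assumptions, not values from the patent.

```python
# Sketch: map a one-class SVM decision score to a pseudo-probability and
# compare it with a preset threshold to generate the detection result.
import numpy as np
from sklearn.svm import OneClassSVM


def living_body_probability(model: OneClassSVM, features: np.ndarray,
                            scale: float = 1.0) -> float:
    score = model.decision_function(features.reshape(1, -1))[0]
    return float(1.0 / (1.0 + np.exp(-scale * score)))  # logistic squashing (assumed)


def generate_detection_result(probability: float, threshold: float = 0.9) -> str:
    if probability > threshold:
        return "living-body face detected"
    return "non-living-body face detected"
```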
The training step for obtaining the first probabilistic model may be executed as follows:

First, the execution body or another electronic device may obtain the training sample set of the first probabilistic model, where each training sample consists of a facial image of a living-body face and the probability that the face corresponding to the facial image is a living-body face.

Then, the execution body or the other electronic device may, using a machine learning algorithm, train an initial model with the facial image included in each training sample as input and the probability that the face corresponding to the facial image is a living-body face as output, so as to obtain the first probabilistic model. The initial model may be a one-class support vector machine, a model based on the projection-combined principal component analysis algorithm, or a model based on the singular-value-perturbed principal component analysis algorithm.
In some optional implementations of this embodiment, the first probabilistic model may also be obtained by training as follows:

First, a technician may obtain a large number of training samples, where each training sample consists of a facial image of a living-body face and the probability that the face corresponding to the facial image is a living-body face.

Then, the technician may sort and count, for the facial image of each living-body face, the probability (for example, 100%) that the corresponding face is a living-body face, store them in association in a two-dimensional table or a database, and use the two-dimensional table or database in which the facial images and the probabilities are stored in association as the first probabilistic model.
It can be understood that, in some cases, when the probability is greater than a preset probability threshold (for example, 90%), the electronic device may generate a detection result characterizing that a living-body face is detected.
In some optional implementations of this embodiment, the execution body may also store the detection result, or store the to-be-detected facial image and the detection result in association.

In some optional implementations of this embodiment, the execution body may also output the detection result, or send the detection result to a terminal device (for example, a terminal device shown in Fig. 1).
Continuing to refer to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for detecting a facial image according to this embodiment. In the application scenario of Fig. 3, a user first sends a to-be-detected facial image 3011 of a target face to the server 301 through a terminal device; after receiving the to-be-detected facial image 3011, the server 301 inputs it into the first living-body detection model trained in the manner described above and obtains a detection result 3012.
In the method provided by the above embodiment of the present application, the to-be-detected facial image is input into a first living-body detection model trained in advance with positive samples only (i.e., training samples consisting of facial images of living-body faces and detection results characterizing that a living-body face is detected) to obtain a detection result. This reduces the difficulty of obtaining training samples and simplifies the training of the first living-body detection model while maintaining the detection accuracy, thereby improving the flexibility of living-body detection.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for detecting a facial image is shown. The flow 400 of the method for detecting a facial image includes the following steps:
Step 401: obtain a to-be-detected facial image of a target face.
In this embodiment, the execution body of the method for detecting a facial image (for example, the server shown in Fig. 1) may obtain the to-be-detected facial image of the target face from another electronic device (for example, a terminal device shown in Fig. 1) through a wired or wireless connection, or locally. In this embodiment, the first living-body detection model includes a pre-trained second probabilistic model. The target face may be a living-body face or a non-living-body face. A living-body face is the face of a living person; a non-living-body face is a face not belonging to a living person, and may be presented in the form of an image, a video, a face mask, a mannequin or the like, for example an image or a video containing a face. The to-be-detected facial image is a facial image used for detecting whether the target face is a living-body face.

In practice, the to-be-detected facial image may be obtained by imaging the target face with an imaging device while the target face is irradiated by various light sources (such as visible light), and may be presented in the form of an image, a video or the like.
Step 402: extract feature data of the to-be-detected facial image.
In this embodiment, the execution body may extract the feature data of the to-be-detected facial image. The feature data may include, but is not limited to, at least one of the following: texture information, brightness information, edge information, material information and color information.
Optionally, the execution body may extract the feature data of the to-be-detected facial image using a variety of methods, including but not limited to: image feature extraction based on a convolutional neural network (CNN) model, image feature extraction based on a deep neural network (DNN), the Fourier transform, the windowed Fourier transform, the wavelet transform, the least squares method, the edge direction histogram method, and texture feature extraction based on Tamura texture features.
Step 403: input the feature data into the second probabilistic model to obtain the probability that the target face is a living-body face.
In this embodiment, the feature data is input into the second probabilistic model to obtain the probability that the target face is a living-body face. The second probabilistic model is used to determine the probability that the face corresponding to the feature data is a living-body face, and each of its training samples consists of feature data of a facial image of a living-body face and the probability that the face corresponding to the facial image is a living-body face. The probability may be labeled manually or determined by the execution body or another device. The probability threshold may be determined by a technician, for example 90% or 80%. The second probabilistic model may be a model based on the principal component analysis algorithm or a model based on sparse representations.
In some optional implementations of this embodiment, the second probabilistic model may be a one-class support vector machine.

In practice, the probability can typically be obtained from a parameter of the one-class support vector machine (for example, its decision score), based on which the electronic device can determine the probability.
The training step for obtaining the second probabilistic model may be executed as follows:

First, the execution body or another electronic device may obtain the training sample set of the second probabilistic model, where each training sample consists of feature data of a facial image of a living-body face and the probability that the face corresponding to the facial image is a living-body face.

Then, the execution body or the other electronic device may, using a machine learning algorithm, train an initial model with the feature data included in each training sample as input and the probability that the face corresponding to the facial image is a living-body face as output, so as to obtain the second probabilistic model. The initial model may be a one-class support vector machine, a model based on the projection-combined principal component analysis algorithm, or a model based on the singular-value-perturbed principal component analysis algorithm.
In some optional implementations of this embodiment, the second probabilistic model may also be obtained by training as follows:

First, a technician may obtain a large number of training samples, where each training sample consists of feature data of a facial image of a living-body face and the probability that the face corresponding to the facial image is a living-body face.

Then, the technician may sort and count, for the feature data of the facial image of each living-body face, the probability (for example, 100%) that the corresponding face is a living-body face, store them in association in a two-dimensional table or a database, and use the two-dimensional table or database in which the feature data and the probabilities are stored in association as the second probabilistic model.
Step 404: generate a detection result based on the magnitude relationship between the probability and a preset probability threshold.
In this embodiment, the execution body may generate a detection result based on the magnitude relationship between the probability obtained in step 403 and a preset probability threshold.

It can be understood that, in some cases, when the probability is less than a preset probability threshold (for example, 70%), the electronic device may generate a detection result characterizing that a non-living-body face is detected.
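Putting flow 400 together, a minimal editorial sketch could chain the feature-extraction and probability helpers sketched earlier (both hypothetical) with the threshold comparison of step 404:

```python
# Sketch of flow 400: step 402 extracts feature data, step 403 estimates the
# probability that the target face is a living-body face, step 404 thresholds it.
# extract_features / living_body_probability are the hypothetical helpers above.
import numpy as np


def detect_facial_image(bgr_face: np.ndarray, model, threshold: float = 0.9) -> dict:
    features = extract_features(bgr_face)                    # step 402
    probability = living_body_probability(model, features)   # step 403
    living = probability > threshold                         # step 404
    return {"probability": probability, "living_body_face": living}
```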
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for detecting a facial image in this embodiment highlights the steps of extracting the feature data of the to-be-detected facial image and determining, based on the feature data and a probability, whether the face corresponding to the to-be-detected facial image is a living-body face. The solution described in this embodiment can therefore determine in a more flexible manner whether the face corresponding to the to-be-detected facial image is a living-body face, further improving the flexibility of living-body detection.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for detecting a facial image. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for detecting a facial image of this embodiment includes an acquiring unit 501 and an input unit 502. The acquiring unit 501 is configured to obtain a to-be-detected facial image of a target face; the input unit 502 is configured to input the to-be-detected facial image into a pre-trained first living-body detection model to obtain a detection result, where the first living-body detection model is used to determine whether the face corresponding to an input facial image is a living-body face, and each training sample of the first living-body detection model consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected.
In this embodiment, the acquiring unit 501 of the apparatus 500 for detecting a facial image may obtain the to-be-detected facial image of the target face from another electronic device (for example, a terminal device shown in Fig. 1) through a wired or wireless connection, or locally. The target face may be a living-body face or a non-living-body face. A living-body face is the face of a living person; a non-living-body face is a face not belonging to a living person, and may be presented in the form of an image, a video, a face mask, a mannequin or the like, for example an image or a video containing a face. The to-be-detected facial image is a facial image used for detecting whether the target face is a living-body face.

In practice, the to-be-detected facial image may be obtained by imaging the target face with an imaging device while the target face is irradiated by various light sources, and may be presented in the form of an image, a video or the like.
In this embodiment, based on the to-be-detected facial image obtained by the acquiring unit 501, the input unit 502 may input the to-be-detected facial image into the pre-trained first living-body detection model to obtain a detection result. The first living-body detection model is used to determine whether the face corresponding to an input facial image is a living-body face, and each of its training samples consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected. Illustratively, the first living-body detection model may be a model based on the principal component analysis (PCA) algorithm or a model based on sparse representations. The training of PCA-based and sparse-representation-based models is well known to those skilled in the art and is not described here again.
In some optional implementations of this embodiment, the first living-body detection model may be a one-class support vector machine (One-Class SVM).

In some optional implementations of this embodiment, the first living-body detection model is obtained by training as follows: obtaining a training sample set of the first living-body detection model, where each training sample consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected; and training, using a machine learning algorithm, the first living-body detection model with the facial image included in each training sample of the training sample set as input and the detection result characterizing that a living-body face is detected as output.
In some optional implementations of this embodiment, the first living-body detection model includes a pre-trained second living-body detection model, and the input unit includes: an extraction module (not shown) configured to extract feature data of the to-be-detected facial image; and an input module (not shown) configured to input the feature data into the second living-body detection model to obtain the detection result, where the second living-body detection model is used to determine whether the face corresponding to the input feature data is a living-body face, and each training sample of the second living-body detection model consists of feature data of a facial image of a living-body face and a detection result characterizing that a living-body face is detected.
In some optional implementations of this embodiment, the first living-body detection model includes a pre-trained first probabilistic model, and the input unit includes: an input module (not shown) configured to input the to-be-detected facial image into the first probabilistic model to obtain the probability that the target face is a living-body face, where the first probabilistic model is used to determine the probability that the face corresponding to a to-be-detected facial image is a living-body face, and each training sample of the first probabilistic model consists of a facial image of a living-body face and the probability that the face corresponding to the facial image is a living-body face; and a generation module (not shown) configured to generate the detection result based on the magnitude relationship between the probability and a preset probability threshold.
In some optional implementations of this embodiment, the first living-body detection model includes a pre-trained second probabilistic model, and the input unit includes: an extraction module (not shown) configured to extract feature data of the to-be-detected facial image; an input module (not shown) configured to input the feature data into the second probabilistic model to obtain the probability that the target face is a living-body face, where the second probabilistic model is used to determine the probability that the face corresponding to the feature data is a living-body face, and each training sample of the second probabilistic model consists of feature data of a facial image of a living-body face and the probability that the face corresponding to the facial image is a living-body face; and a generation module (not shown) configured to generate the detection result based on the magnitude relationship between the probability and a preset probability threshold.
In the apparatus provided by the above embodiment of the present application, the acquiring unit 501 obtains a to-be-detected facial image of a target face, and the input unit 502 inputs the to-be-detected facial image into a pre-trained first living-body detection model to obtain a detection result, where the first living-body detection model is used to determine whether the face corresponding to an input facial image is a living-body face, and each training sample of the first living-body detection model consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected, thereby improving the flexibility of living-body detection.
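As an editorial sketch of the unit structure of apparatus 500, the following defines an acquiring unit and an input unit wrapping a pre-trained detection model; all class and method names are illustrative assumptions rather than elements defined by the patent.

```python
# Sketch of apparatus 500: acquiring unit 501 + input unit 502 (names assumed).
from typing import Any, Callable

import numpy as np


class AcquiringUnit:
    """Obtains the to-be-detected facial image of the target face."""

    def __init__(self, image_source: Callable[[], np.ndarray]):
        self.image_source = image_source  # e.g., reads from a terminal device

    def acquire(self) -> np.ndarray:
        return self.image_source()


class InputUnit:
    """Feeds the image into the pre-trained first living-body detection model."""

    def __init__(self, model: Any):
        self.model = model

    def detect(self, face_image: np.ndarray) -> bool:
        # +1 from a one-class model is read as "living-body face detected".
        return bool(self.model.predict(face_image.reshape(1, -1))[0] == 1)


class FacialImageDetectionApparatus:
    def __init__(self, acquiring_unit: AcquiringUnit, input_unit: InputUnit):
        self.acquiring_unit = acquiring_unit
        self.input_unit = input_unit

    def run(self) -> bool:
        return self.input_unit.detect(self.acquiring_unit.acquire())
```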
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 adapted to implement a server of the embodiments of the present application is shown. The server shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. Various programs and data required for the operation of the system 600 are also stored in the RAM 603. The CPU 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604, and an input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse and the like; an output portion 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker and the like; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read therefrom can be installed into the storage portion 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which comprises a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the method of the present application are executed.
It should be noted that the computer-readable medium described in the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in connection with an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, wire, optical cable, RF, or any appropriate combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks therein, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including an acquiring unit and an input unit. The names of these units do not, in certain cases, constitute a limitation on the units themselves; for example, the acquiring unit may also be described as "a unit for obtaining the to-be-detected facial image of the target face".
As another aspect, the present application also provides a computer-readable medium, which may be included in the server described in the above embodiments, or may exist alone without being assembled into the server. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the server, the server is caused to: obtain a to-be-detected facial image of a target face; and input the to-be-detected facial image into a pre-trained first living-body detection model to obtain a detection result, where the first living-body detection model is used to determine whether the face corresponding to an input facial image is a living-body face, and each training sample of the first living-body detection model consists of a facial image of a living-body face and a detection result characterizing that a living-body face is detected.
The above description is only a description of the preferred embodiments of the present application and the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present application.
Claims (14)
1. A method for detecting a facial image, comprising:
obtaining a facial image to be detected of a target face;
inputting the facial image to be detected into a first In vivo detection model trained in advance to obtain a detection result, wherein the first In vivo detection model is used to determine whether a face corresponding to an input facial image is a living body face, and training samples of the first In vivo detection model consist of facial images of living body faces and detection results for characterizing that a living body face is detected.
2. The method according to claim 1, wherein the first In vivo detection model is obtained through training as follows:
obtaining a training sample set of the first In vivo detection model, wherein a training sample consists of a facial image of a living body face and a detection result for characterizing that a living body face is detected;
using a machine learning algorithm, taking the facial image included in each training sample in the training sample set as input and the detection result for characterizing that a living body face is detected as output, and training to obtain the first In vivo detection model.
3. The method according to claim 1, wherein the first In vivo detection model comprises a second In vivo detection model trained in advance; and
inputting the facial image to be detected into the first In vivo detection model trained in advance to obtain a detection result comprises:
extracting characteristic data of the facial image to be detected;
inputting the characteristic data into the second In vivo detection model to obtain a detection result, wherein the second In vivo detection model is used to determine whether a face corresponding to input characteristic data is a living body face, and training samples of the second In vivo detection model consist of characteristic data of facial images of living body faces and detection results for characterizing that a living body face is detected.
4. The method according to claim 1, wherein the first In vivo detection model comprises a first probabilistic model trained in advance; and
inputting the facial image to be detected into the first In vivo detection model trained in advance to obtain a detection result comprises:
inputting the facial image to be detected into the first probabilistic model to obtain a probability that the target face is a living body face, wherein the first probabilistic model is used to determine the probability that a face corresponding to a facial image to be detected is a living body face, and training samples of the first probabilistic model consist of facial images of living body faces and probabilities that the faces corresponding to the facial images are living body faces;
generating a detection result based on a magnitude relationship between the probability and a preset probability threshold.
5. The method according to claim 1, wherein the first In vivo detection model comprises a second probabilistic model trained in advance; and
inputting the facial image to be detected into the first In vivo detection model trained in advance to obtain a detection result comprises:
extracting characteristic data of the facial image to be detected;
inputting the characteristic data into the second probabilistic model to obtain a probability that the target face is a living body face, wherein the second probabilistic model is used to determine the probability that a face corresponding to characteristic data is a living body face, and training samples of the second probabilistic model consist of characteristic data of facial images of living body faces and probabilities that the faces corresponding to the facial images are living body faces;
generating a detection result based on a magnitude relationship between the probability and a preset probability threshold.
6. The method according to claim 1, wherein the first In vivo detection model is a one-class support vector machine.
7. An apparatus for detecting a facial image, comprising:
an acquiring unit, configured to obtain a facial image to be detected of a target face;
an input unit, configured to input the facial image to be detected into a first In vivo detection model trained in advance to obtain a detection result, wherein the first In vivo detection model is used to determine whether a face corresponding to an input facial image is a living body face, and training samples of the first In vivo detection model consist of facial images of living body faces and detection results for characterizing that a living body face is detected.
8. The apparatus according to claim 7, wherein the first In vivo detection model is obtained through training as follows:
obtaining a training sample set of the first In vivo detection model, wherein a training sample consists of a facial image of a living body face and a detection result for characterizing that a living body face is detected;
using a machine learning algorithm, taking the facial image included in each training sample in the training sample set as input and the detection result for characterizing that a living body face is detected as output, and training to obtain the first In vivo detection model.
9. The apparatus according to claim 7, wherein the first In vivo detection model comprises a second In vivo detection model trained in advance; and
the input unit comprises:
an extraction module, configured to extract characteristic data of the facial image to be detected;
an input module, configured to input the characteristic data into the second In vivo detection model to obtain a detection result, wherein the second In vivo detection model is used to determine whether a face corresponding to input characteristic data is a living body face, and training samples of the second In vivo detection model consist of characteristic data of facial images of living body faces and detection results for characterizing that a living body face is detected.
10. The apparatus according to claim 7, wherein the first In vivo detection model comprises a first probabilistic model trained in advance; and
the input unit comprises:
an input module, configured to input the facial image to be detected into the first probabilistic model to obtain a probability that the target face is a living body face, wherein the first probabilistic model is used to determine the probability that a face corresponding to a facial image to be detected is a living body face, and training samples of the first probabilistic model consist of facial images of living body faces and probabilities that the faces corresponding to the facial images are living body faces;
a generation module, configured to generate a detection result based on a magnitude relationship between the probability and a preset probability threshold.
11. The apparatus according to claim 7, wherein the first In vivo detection model comprises a second probabilistic model trained in advance; and
the input unit comprises:
an extraction module, configured to extract characteristic data of the facial image to be detected;
an input module, configured to input the characteristic data into the second probabilistic model to obtain a probability that the target face is a living body face, wherein the second probabilistic model is used to determine the probability that a face corresponding to characteristic data is a living body face, and training samples of the second probabilistic model consist of characteristic data of facial images of living body faces and probabilities that the faces corresponding to the facial images are living body faces;
a generation module, configured to generate a detection result based on a magnitude relationship between the probability and a preset probability threshold.
12. The apparatus according to claim 7, wherein the first In vivo detection model is a one-class support vector machine.
13. A server, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1-6.
14. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-6.
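As a minimal sketch of the training of claims 2 and 8 and of the one-class support vector machine of claims 6 and 12, assuming Python with scikit-learn: the histogram features, the hyperparameters and the score threshold are illustrative assumptions, and the signed SVM decision score stands in for the probability of claims 4, 5, 10 and 11.

```python
import numpy as np
from sklearn.svm import OneClassSVM


def extract_characteristic_data(facial_image: np.ndarray) -> np.ndarray:
    # Hypothetical characteristic data: a normalized gray-level histogram.
    hist, _ = np.histogram(facial_image, bins=64, range=(0, 255), density=True)
    return hist


def train_first_in_vivo_detection_model(living_body_face_images):
    # Training samples consist only of facial images of living body faces.
    features = np.stack([extract_characteristic_data(img) for img in living_body_face_images])
    model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)  # one-class SVM
    model.fit(features)
    return model


def detect(model: OneClassSVM, facial_image_to_be_detected: np.ndarray,
           score_threshold: float = 0.0) -> str:
    # Generate a detection result from the magnitude relationship between the
    # model's score and a preset threshold.
    features = extract_characteristic_data(facial_image_to_be_detected).reshape(1, -1)
    score = model.decision_function(features)[0]
    return "living body face detected" if score > score_threshold else "non-living body face"
```

Because such a model is fitted on facial images of living body faces only, no non-living (spoof) samples are required at training time.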
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810256946.2A CN108399401B (en) | 2018-03-27 | 2018-03-27 | Method and device for detecting face image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108399401A true CN108399401A (en) | 2018-08-14 |
CN108399401B CN108399401B (en) | 2022-05-03 |
Family
ID=63093185
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810256946.2A Active CN108399401B (en) | 2018-03-27 | 2018-03-27 | Method and device for detecting face image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108399401B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1910977A1 (en) * | 2005-07-29 | 2008-04-16 | Telecom Italia S.p.A. | Automatic biometric identification based on face recognition and support vector machines |
US20090074259A1 (en) * | 2005-07-29 | 2009-03-19 | Madalina Baltatu | Automatic biometric identification based on face recognition and support vector machines |
CN103886301A (en) * | 2014-03-28 | 2014-06-25 | 中国科学院自动化研究所 | Human face living detection method |
CN104965589A (en) * | 2015-06-13 | 2015-10-07 | 东莞市微模式软件有限公司 | Human living body detection method and device based on human brain intelligence and man-machine interaction |
CN105718906A (en) * | 2016-01-25 | 2016-06-29 | 宁波大学 | Living body face detection method based on SVD-HMM |
CN106874857A (en) * | 2017-01-19 | 2017-06-20 | 腾讯科技(上海)有限公司 | A kind of living body determination method and system based on video analysis |
CN107545241A (en) * | 2017-07-19 | 2018-01-05 | 百度在线网络技术(北京)有限公司 | Neural network model is trained and biopsy method, device and storage medium |
CN107545248A (en) * | 2017-08-24 | 2018-01-05 | 北京小米移动软件有限公司 | Biological characteristic biopsy method, device, equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
YAOHUI DING ET AL: "An Ensemble of One-Class SVMs for Fingerprint Spoof Detection Across Different Fabrication Materials", 《INTERNATIONAL WORKSHOP ON INFORMATION FORENSICS AND SECURITY》 * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108682089A (en) * | 2018-09-05 | 2018-10-19 | 上海聚虹光电科技有限公司 | Self-service no card withdrawal method based on iris and recognition of face |
WO2020048140A1 (en) * | 2018-09-07 | 2020-03-12 | 北京市商汤科技开发有限公司 | Living body detection method and apparatus, electronic device, and computer readable storage medium |
CN110889312A (en) * | 2018-09-07 | 2020-03-17 | 北京市商汤科技开发有限公司 | Living body detection method and apparatus, electronic device, computer-readable storage medium |
KR20200050994A (en) * | 2018-09-07 | 2020-05-12 | 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 | Biometric detection methods and devices, electronic devices, computer readable storage media |
JP2020535529A (en) * | 2018-09-07 | 2020-12-03 | 北京市商▲湯▼科技▲開▼▲発▼有限公司Beijing Sensetime Technology Development Co., Ltd. | Biological detection methods and devices, electronic devices, and computer-readable storage media |
KR102324697B1 (en) * | 2018-09-07 | 2021-11-10 | 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 | Biometric detection method and device, electronic device, computer readable storage medium |
US11222222B2 (en) | 2018-09-07 | 2022-01-11 | Beijing Sensetime Technology Development Co., Ltd. | Methods and apparatuses for liveness detection, electronic devices, and computer readable storage media |
CN110889312B (en) * | 2018-09-07 | 2022-09-02 | 北京市商汤科技开发有限公司 | Living body detection method and apparatus, electronic device, computer-readable storage medium |
CN111488756A (en) * | 2019-01-25 | 2020-08-04 | 杭州海康威视数字技术股份有限公司 | Face recognition-based living body detection method, electronic device, and storage medium |
CN111488756B (en) * | 2019-01-25 | 2023-10-03 | 杭州海康威视数字技术股份有限公司 | Face recognition-based living body detection method, electronic device, and storage medium |
CN113780222A (en) * | 2021-09-17 | 2021-12-10 | 深圳市繁维科技有限公司 | Face living body detection method and device, electronic equipment and readable storage medium |
CN113780222B (en) * | 2021-09-17 | 2024-02-27 | 深圳市繁维科技有限公司 | Face living body detection method and device, electronic equipment and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108399401B (en) | 2022-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107578017A (en) | Method and apparatus for generating image | |
CN108898185A (en) | Method and apparatus for generating image recognition model | |
CN109446990A (en) | Method and apparatus for generating information | |
CN108494778A (en) | Identity identifying method and device | |
CN108154196A (en) | For exporting the method and apparatus of image | |
CN109086719A (en) | Method and apparatus for output data | |
CN109034069A (en) | Method and apparatus for generating information | |
CN108986169A (en) | Method and apparatus for handling image | |
CN109447156A (en) | Method and apparatus for generating model | |
CN108171203A (en) | For identifying the method and apparatus of vehicle | |
CN108399401A (en) | Method and apparatus for detecting facial image | |
CN109344752A (en) | Method and apparatus for handling mouth image | |
CN109255337A (en) | Face critical point detection method and apparatus | |
CN109241934A (en) | Method and apparatus for generating information | |
CN108062544A (en) | For the method and apparatus of face In vivo detection | |
CN109086780A (en) | Method and apparatus for detecting electrode piece burr | |
CN108427941A (en) | Method, method for detecting human face and device for generating Face datection model | |
CN108960110A (en) | Method and apparatus for generating information | |
CN108335390A (en) | Method and apparatus for handling information | |
CN108509921A (en) | Method and apparatus for generating information | |
CN108509888A (en) | Method and apparatus for generating information | |
CN109214501A (en) | The method and apparatus of information for identification | |
CN107729928A (en) | Information acquisition method and device | |
CN108446658A (en) | The method and apparatus of facial image for identification | |
CN110059624A (en) | Method and apparatus for detecting living body |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |