CN108537152B - Method and apparatus for detecting living body - Google Patents
Method and apparatus for detecting living body
- Publication number
- CN108537152B CN201810259543.3A
- Authority
- CN
- China
- Prior art keywords
- image
- face image
- initial
- face
- detected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the application disclose a method and an apparatus for detecting a living body. One embodiment of the method comprises: acquiring a face image to be detected; inputting the face image to be detected into a pre-trained feature extraction model to obtain image features corresponding to the face image to be detected, wherein the feature extraction model is used for extracting features of face images; inputting the obtained image features into a generator in a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected, wherein the generative adversarial network comprises the generator and a discriminator, and the feature extraction model and the generative adversarial network are trained on a set of live face images; and generating a living body detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image. This embodiment improves the convenience of living body detection for the user and increases the speed of living body detection.
Description
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for detecting a living body.
Background
In liveness detection, a video of a user performing a specified action (e.g., nodding, shaking the head, raising or lowering the head, blinking, etc.) may first be recorded, and the recorded video may then be analyzed to give a liveness detection result. However, performing the specified action is inconvenient for the user, and analyzing the video takes a relatively long time.
Disclosure of Invention
The embodiment of the application provides a method and a device for detecting a living body.
In a first aspect, an embodiment of the present application provides a method for detecting a living body, the method including: acquiring a face image to be detected; inputting the face image to be detected into a pre-trained feature extraction model to obtain image features corresponding to the face image to be detected, wherein the feature extraction model is used for extracting features of face images; inputting the obtained image features into a generator in a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected, wherein the generative adversarial network comprises the generator and a discriminator, and the feature extraction model and the generative adversarial network are trained on a set of live face images; and generating a living body detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image.
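For illustration only, the flow of the first aspect can be sketched in Python; the function and parameter names below, including the threshold value, are assumptions introduced for the sketch and are not taken from the application:

```python
# Minimal sketch of the described flow, assuming the pre-trained components are available.
# extract_features, generator and image_similarity are hypothetical placeholders for the
# feature extraction model, the generator of the generative adversarial network, and the
# chosen similarity measure; the threshold value is likewise an assumption.

def detect_liveness(face_image, extract_features, generator, image_similarity,
                    similarity_threshold=0.8):
    """Return True if the face in face_image is judged to be a live face."""
    features = extract_features(face_image)             # image features of the face image
    generated = generator(features)                      # generated image from the GAN generator
    similarity = image_similarity(face_image, generated)
    return similarity > similarity_threshold             # live if the reconstruction is similar enough
```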
In some embodiments, generating a living body detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image includes: determining whether the similarity between the face image to be detected and the obtained generated image is greater than a preset similarity threshold; and in response to determining that the similarity is greater than the preset similarity threshold, determining that the face in the face image to be detected is a live face.
In some embodiments, generating a living body detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image further includes: in response to determining that the similarity is not greater than the preset similarity threshold, determining that the face in the face image to be detected is not a live face.
In some embodiments, the feature extraction model and the generative adversarial network are trained through the following training steps: acquiring a set of live face images; for the live face images in the set of live face images, performing the following parameter adjusting steps: inputting the live face image into an initial feature extraction model to obtain image features corresponding to the live face image; inputting the obtained image features into an initial generator to obtain a generated face image; adjusting parameters of the initial feature extraction model and the initial generator based on the similarity between the obtained generated face image and the live face image; inputting the obtained generated face image and the live face image into an initial discriminator, respectively, to obtain a first discrimination result and a second discrimination result, where the first discrimination result and the second discrimination result respectively characterize whether the generated face image and the live face image are real face images; and adjusting parameters of the initial feature extraction model, the initial generator, and the initial discriminator based on a first difference between the first discrimination result and a result indicating whether the image input to the initial discriminator is a real face image, and a second difference between the second discrimination result and a result indicating that the image input to the initial discriminator is a real face image.
In some embodiments, before the parameter adjusting steps are performed for the live face images in the set of live face images, the training steps further comprise: determining model structure information of the initial feature extraction model, network structure information of the initial generator, and network structure information of the initial discriminator, and initializing the model parameters of the initial feature extraction model, the network parameters of the initial generator, and the network parameters of the initial discriminator.
In some embodiments, the feature extraction model is a convolutional neural network.
In a second aspect, an embodiment of the present application provides an apparatus for detecting a living body, the apparatus including: an acquisition unit configured to acquire a face image to be detected; a feature extraction unit configured to input the face image to be detected into a pre-trained feature extraction model to obtain image features corresponding to the face image to be detected, wherein the feature extraction model is used for extracting features of face images; an image generation unit configured to input the obtained image features into a generator in a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected, wherein the generative adversarial network comprises the generator and a discriminator, and the feature extraction model and the generative adversarial network are trained on a set of live face images; and a living body detection unit configured to generate a living body detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image.
In some embodiments, the living body detection unit includes: a first determining module configured to determine whether the similarity between the face image to be detected and the obtained generated image is greater than a preset similarity threshold; and a second determining module configured to determine, in response to determining that the similarity is greater than the preset similarity threshold, that the face in the face image to be detected is a live face.
In some embodiments, the living body detection unit further includes: a third determining module configured to determine, in response to determining that the similarity is not greater than the preset similarity threshold, that the face in the face image to be detected is not a live face.
In some embodiments, the feature extraction model and the generative adversarial network are trained through the following training steps: acquiring a set of live face images; for the live face images in the set of live face images, performing the following parameter adjusting steps: inputting the live face image into an initial feature extraction model to obtain image features corresponding to the live face image; inputting the obtained image features into an initial generator to obtain a generated face image; adjusting parameters of the initial feature extraction model and the initial generator based on the similarity between the obtained generated face image and the live face image; inputting the obtained generated face image and the live face image into an initial discriminator, respectively, to obtain a first discrimination result and a second discrimination result, where the first discrimination result and the second discrimination result respectively characterize whether the generated face image and the live face image are real face images; and adjusting parameters of the initial feature extraction model, the initial generator, and the initial discriminator based on a first difference between the first discrimination result and a result indicating whether the image input to the initial discriminator is a real face image, and a second difference between the second discrimination result and a result indicating that the image input to the initial discriminator is a real face image.
In some embodiments, before the parameter adjusting steps are performed for the live face images in the set of live face images, the training steps further comprise: determining model structure information of the initial feature extraction model, network structure information of the initial generator, and network structure information of the initial discriminator, and initializing the model parameters of the initial feature extraction model, the network parameters of the initial generator, and the network parameters of the initial discriminator.
In some embodiments, the feature extraction model is a convolutional neural network.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation manner of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
According to the method and apparatus for detecting a living body provided by the embodiments of the application, the image features of the face image to be detected are extracted, the obtained image features are input into the generator in the pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected, and whether the face in the face image to be detected is a live face is determined based on the similarity between the face image to be detected and the obtained generated image. Because the user is not required to perform a specified action, the convenience of living body detection for the user is improved. In addition, compared with video-based liveness detection methods, the liveness detection method provided by the embodiments of the application analyzes only a face image, which increases the speed of living body detection.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method for detecting a living subject according to the present application;
FIG. 3 is a flow diagram of one embodiment of the training steps for training a feature extraction model and a generative adversarial network in accordance with the present application;
FIG. 4 is a flow chart of yet another embodiment of a method for detecting a living subject according to the present application;
FIG. 5 is a schematic diagram of the structure of one embodiment of a device for detecting living organisms according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for detecting a living body or the apparatus for detecting a living body of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as an image acquisition application, an image processing application, a liveness detection application, a search application, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices with display screens, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above. They may be implemented as a plurality of pieces of software or software modules (for example, to provide an image acquisition service or a liveness detection service), or as a single piece of software or software module. This is not specifically limited herein.
The server 105 may be a server that provides various services, such as a detection server that performs live body detection on a face image to be detected uploaded by the terminal apparatuses 101, 102, 103. The detection server may analyze and otherwise process the received data such as the face image to be detected, and feed back a processing result (e.g., a living body detection result) to the terminal device.
It should be noted that the method for detecting a living body provided in the embodiment of the present application is generally performed by the server 105, and accordingly, the apparatus for detecting a living body is generally disposed in the server 105.
It should be noted that the server 105 may also store the face image to be detected locally and directly retrieve the locally stored face image to be detected for liveness detection; in this case, the exemplary system architecture 100 may not include the terminal devices 101, 102, and 103 or the network 104.
It should also be noted that the terminal devices 101, 102, 103 may also be installed with a living body detection application, and the terminal devices 101, 102, 103 may also perform living body detection based on the face image to be detected, in which case, the method for detecting a living body may also be executed by the terminal devices 101, 102, 103, and accordingly, the apparatus for detecting a living body may also be installed in the terminal devices 101, 102, 103. At this point, the exemplary system architecture 100 may also not include the server 105 and the network 104.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules (for example, to provide a liveness detection service), or as a single software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for detecting a living subject according to the present application is shown. The method for detecting a living body includes the steps of:
Step 201, acquiring a face image to be detected.

In the present embodiment, an execution subject (for example, the server shown in fig. 1) of the method for detecting a living body may acquire a face image to be detected.
Here, the facial image to be detected may be uploaded to the execution subject by a terminal device (e.g., terminal devices 101, 102, 103 shown in fig. 1) in communication connection with the execution subject (e.g., server shown in fig. 1) through a wired connection manner or a wireless connection manner. At this time, a camera may be mounted in a terminal device (for example, a mobile phone) which is in communication connection with the execution body. The terminal device can control a camera installed in the terminal device to shoot a face image of a user, and sends the shot face image to the execution main body. In this way, the execution subject may use the image received from the terminal device as the face image to be detected. It should be noted that the wireless connection means may include, but is not limited to, a 3G/4G connection, a WiFi connection, a bluetooth connection, a WiMAX connection, a Zigbee connection, a uwb (ultra wideband) connection, and other wireless connection means now known or developed in the future.
The face image to be detected may also be locally stored by the execution main body. For example, when the execution subject is a terminal device, a camera may be mounted in the terminal device. The terminal equipment can control a camera installed in the terminal equipment to shoot the face image of the user, locally store the shot face image and acquire the locally stored face image as the face image to be detected.
Step 202, inputting the face image to be detected into a pre-trained feature extraction model to obtain image features corresponding to the face image to be detected.

In this embodiment, the executing entity (for example, the server shown in fig. 1) may input the face image to be detected obtained in step 201 into a pre-trained feature extraction model, so as to obtain the image features corresponding to the face image to be detected.
Here, the feature extraction model trained in advance may be various models for extracting image features. The image features may also be various features including, but not limited to, color features, texture features, two-dimensional shape features, two-dimensional spatial relationship features, three-dimensional shape features, three-dimensional spatial relationship features, facial features, shape features of five sense organs, position and scale features of five sense organs, and the like.
In some optional implementations of this embodiment, the feature extraction model may be a convolutional neural network. Here, a Convolutional Neural Network (CNN) may include at least one Convolutional layer, which may be used to extract image features, and at least one pooling layer, which may be used to Down-Sample input information. In practice, the convolutional neural network is a feed-forward neural network, and its artificial neurons can respond to a part of the surrounding cells within the coverage range, and have excellent performance on image processing, so that the convolutional neural network can be used for extracting the image features, which can be various basic elements (such as color, lines, textures and the like) of the image. The image features corresponding to the face image to be detected are used for representing the features in the face image to be detected, and meanwhile, dimension reduction is carried out on the face image to be detected, so that the later-period calculation amount is reduced. In practice, the convolutional neural network may further include an activation function layer, and the activation function layer performs nonlinear computation on the input information using various nonlinear activation functions (e.g., a ReLU (Rectified Linear Units) function, a Sigmoid function, etc.).
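As a hedged sketch, a convolutional feature extraction model of the kind described above (convolutional layers for feature extraction, pooling layers for down-sampling, and activation function layers) might look as follows; PyTorch and the layer sizes are assumptions made only for illustration:

```python
import torch.nn as nn

# Illustrative CNN feature extractor; all layer sizes are assumed.
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),   # convolutional layer: extracts image features
    nn.ReLU(),                                    # activation function layer: nonlinear computation
    nn.MaxPool2d(2),                              # pooling layer: down-samples the input
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                              # output: a reduced-dimension feature map
)
```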
Step 203, inputting the obtained image features into a generator in a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected.
In this embodiment, the executing subject of the method for detecting a living body may input the image features obtained in step 202 into a generator in a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected. Here, the generative adversarial network may include a generator for generating an image from image features and a discriminator for determining whether an image input to the discriminator is a generated image or a real image.
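A minimal sketch of such a generator, assuming transposed convolutions and an input matching the 64-channel feature map of the feature extractor sketched earlier (both assumptions, as the application does not specify the structure):

```python
import torch.nn as nn

# Illustrative generator: up-samples a feature map into a 3-channel face image.
generator = nn.Sequential(
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # up-sampling step
    nn.ReLU(),
    nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),   # back to an RGB image
    nn.Sigmoid(),                                                    # pixel values in [0, 1]
)
```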
Here, the feature extraction model used in step 202 and the generative adversarial network used in step 203 may be trained on a set of live face images. The live face images in the set are all images obtained by photographing live faces. Therefore, the feature extraction model and the generative adversarial network learn the features of live face images rather than the features of non-live face images.
In some optional implementations of the present embodiment, the feature extraction model and the generative adversarial network may be trained through the training steps below. Referring to fig. 3, fig. 3 shows a flow 300 of one embodiment of the training steps for training a feature extraction model and a generative adversarial network according to the present application, which may include the following steps 301 to 303:
Here, the execution subject of the training step may be the same as or different from the execution subject of the method for detecting a living body. If they are the same, the execution subject of the training step may store the model structure information and the parameter values of the model parameters of the trained feature extraction model locally after the feature extraction model is obtained through training. If they are different, the execution subject of the training step may send the model structure information and the parameter values of the model parameters of the trained feature extraction model to the execution subject of the method for detecting a living body after the feature extraction model is trained.
Step 301, acquiring a set of live face images.

Here, the execution subject of the training step may acquire, locally or from another electronic device connected to it over a network, a set of live face images, each of which is an image obtained by photographing a live face.
Step 302, performing the following parameter adjusting steps (sub-steps 3021 to 3025) for the live face images in the set of live face images:

Sub-step 3021, inputting the live face image into an initial feature extraction model to obtain image features corresponding to the live face image.
Here, the executing subject of the training step may input the living body face image into an initial feature extraction model, resulting in an image feature corresponding to the living body face image. Here, the initial feature extraction model may be a model for extracting the features of the face image, which is predetermined for training the feature extraction model, and the initial feature extraction model may be an untrained feature extraction model or an unfinished training feature extraction model.
Optionally, the executing agent of the training step may execute the following first initialization operation before executing step 302:
first, model structure information of an initial feature extraction model may be determined. It is to be understood that, since the initial feature extraction model may include various types of models for extracting image features, model structure information required to be determined is also different for different types of models for extracting image features. Alternatively, the initial feature extraction model may be a convolutional neural network. Since the convolutional neural network is a multi-layer neural network, each layer is composed of a plurality of two-dimensional planes, and each plane is composed of a plurality of independent neurons, it is necessary to determine which layers (e.g., convolutional layers, pooling layers, excitation function layers, etc.), the connection order relationship between layers, and which parameters (e.g., weight, bias, convolution step size) each layer includes, etc. the initial feature extraction model of the convolutional neural network type includes. Among other things, convolutional layers may be used to extract image features. For each convolution layer, it can be determined how many convolution kernels exist, the size of each convolution kernel, the weight of each neuron in each convolution kernel, the bias term corresponding to each convolution kernel, the step length between two adjacent convolutions, whether padding is needed, how many pixel points are padded, and the number value for padding (generally, the padding is 0), etc. While the pooling layer may be used to Down-Sample (Down Sample) the input information to compress the amount of data and parameters to reduce overfitting. For each pooling layer, a pooling method for that pooling layer may be determined (e.g., taking a region average or taking a region maximum). The excitation function layer is used for carrying out nonlinear calculation on input information. A specific excitation function may be determined for each excitation function layer. For example, the activation function may be a ReLU and various variants of ReLU activation functions, a Sigmoid function, a Tanh (hyperbolic tangent) function, a Maxout function, and so on. In practice, the convolutional neural network is a feed-forward neural network, and its artificial neurons can respond to a part of the surrounding cells within the coverage range, and have excellent performance on image processing, so that the convolutional neural network can be used for extracting the image features, which can be various basic elements (such as color, lines, textures and the like) of the image.
Alternatively, the initial Feature extraction Model may also be an Active Shape Model (ASM), a Principal Component Analysis (PCA) Model, an Independent Component Analysis (ICA) Model, a Linear Discriminant Analysis (LDA) Model, a Local Feature Analysis (LFA) Model, or the like, for extracting features of the face image. Correspondingly, the model structure information to be determined is different corresponding to different feature extraction models.
Model parameters of the initial feature extraction model may then be initialized. In practice, the model parameters of the initial feature extraction model may be initialized with a number of different small random numbers. The small random numbers ensure that the model does not enter a saturation state because the weights are too large, which would cause training to fail, and the fact that the numbers are different ensures that the model can learn normally.
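A sketch of such an initialization, assuming PyTorch modules and a small standard deviation chosen only for illustration:

```python
import torch.nn as nn

def init_small_random(module):
    """Initialize weights with small, mutually different random numbers (std is assumed)."""
    if isinstance(module, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
        nn.init.normal_(module.weight, mean=0.0, std=0.01)  # small random weights avoid saturation
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# Example use on the feature extractor sketched above:
# feature_extractor.apply(init_small_random)
```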
In practice, because the specific feature extraction models differ, the obtained image features corresponding to the live face image may take the form of a feature map or a feature vector.
Sub-step 3022, inputting the obtained image features into an initial generator to obtain a generated face image.
In this embodiment, the executing subject of the training step may input the image features obtained in sub-step 3021 into the initial generator to obtain a generated face image, where the initial generator is the generator in the initial generative adversarial network. Here, the initial generative adversarial network may be a generative adversarial network (GAN) comprising a generator and a discriminator; the generator and the discriminator in the initial generative adversarial network are the initial generator and the initial discriminator, respectively, where the initial generator is configured to generate an image from image features and the initial discriminator is configured to determine whether an input image is a generated image or a real image. In practice, the initial generator and the initial discriminator may be various neural network models.
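For illustration, an initial discriminator that outputs the probability that its input is a real face image could be sketched as below; the architecture is an assumption, not the structure disclosed here:

```python
import torch.nn as nn

# Illustrative discriminator: maps an input face image to the probability that it is real.
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),   # global pooling, independent of the input size
    nn.Flatten(),
    nn.Linear(64, 1),
    nn.Sigmoid(),              # probability that the input is a real face image
)
```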
In some optional implementations of this embodiment, the executing subject of the training step may execute the following second initialization operation before executing step 302:
First, network structure information of the initial generative adversarial network may be determined.
Here, since the initial generative adversarial network includes an initial generator and an initial discriminator, the execution subject of the training step may determine the network structure information of the initial generator and the network structure information of the initial discriminator.
It is understood that the initial generator and the initial discriminator may be various neural networks; for this purpose, it may be determined which neural network the initial generator and the initial discriminator each are, including how many layers of neurons they contain, how many neurons are in each layer, the connection order relationship between the layers of neurons, which parameters each layer of neurons includes, the activation function type corresponding to each layer of neurons, and so on. It will be appreciated that the network structure information that needs to be determined differs for different neural network types.
Parameter values of the network parameters of the initial generator and the initial discriminator in the initial generative adversarial network may then be initialized. In practice, the network parameters of the initial generator and the initial discriminator may be initialized with a number of different small random numbers. The small random numbers ensure that the network does not enter a saturation state because the weights are too large, which would cause training to fail, and the fact that the numbers are different ensures that the network can learn normally.
Sub-step 3023, adjusting the parameters of the initial feature extraction model and the initial generator based on the similarity between the obtained generated face image and the live face image.
In this embodiment, the executing subject of the training step may adjust the parameters of the initial feature extraction model and the initial generator based on the similarity between the generated face image obtained in sub-step 3022 and the live face image. In practice, an objective function can be set with the goal of maximizing the similarity between the obtained generated face image and the live face image; a preset optimization algorithm is then adopted to adjust the parameters of the initial feature extraction model and the initial generator so as to optimize the objective function, and the parameter adjusting steps end when a first preset training end condition is met. For example, the first preset training end condition may include, but is not limited to: the training time exceeding a preset duration, the number of times the parameter adjusting steps have been performed exceeding a preset number, or the similarity between the generated face image and the live face image being greater than a preset similarity threshold.
Here, the preset optimization algorithm may include, but is not limited to, the Gradient Descent method, Newton's method, Quasi-Newton methods, the Conjugate Gradient method, heuristic optimization methods, and various other optimization algorithms now known or developed in the future. The similarity between two images can be calculated by various methods, for example, histogram matching, mathematical matrix decomposition (such as singular value decomposition and non-negative matrix factorization), an image similarity calculation method based on feature points, and the like.
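Sub-steps 3021 to 3023 can be sketched as one optimization step, treating "maximize the similarity" as minimizing an L1 reconstruction loss under gradient descent; the loss choice, the learning rate, and the reuse of the feature_extractor and generator sketched above are assumptions made for the sketch:

```python
import torch
import torch.nn.functional as F

# Assumed optimizer over the feature extractor and generator parameters (sub-step 3023).
optimizer_g = torch.optim.SGD(
    list(feature_extractor.parameters()) + list(generator.parameters()), lr=0.01)

def reconstruction_step(live_face):
    """Encode the live face image, generate a face image, and adjust parameters so that
    the generated face image becomes more similar to the live face image."""
    features = feature_extractor(live_face)       # sub-step 3021
    generated = generator(features)               # sub-step 3022
    loss = F.l1_loss(generated, live_face)        # lower loss = higher similarity (assumed measure)
    optimizer_g.zero_grad()
    loss.backward()
    optimizer_g.step()                            # gradient-descent parameter adjustment
    return generated.detach(), loss.item()
```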
Sub-step 3024, inputting the obtained generated face image and the live face image into the initial discriminator, respectively, to obtain a first discrimination result and a second discrimination result.
Here, the executing subject of the training step may input the generated face image obtained in sub-step 3022 and the live face image into the initial discriminator to obtain a first discrimination result and a second discrimination result, respectively. The initial discriminator characterizes the correspondence between a face image and a discrimination result indicating whether the input face image is a real face image. The first discrimination result is the discrimination result output by the initial discriminator for the generated face image obtained in sub-step 3022 and input into the initial discriminator, and characterizes whether that generated face image is a real face image. The second discrimination result is the discrimination result output by the initial discriminator for the live face image input into the initial discriminator, and characterizes whether the live face image is a real face image. Here, the discrimination result output by the initial discriminator may take various forms. For example, the discrimination result may be an identifier characterizing that the face image is a real face image (e.g., the number 1 or the vector (1,0)) or an identifier characterizing that the face image is not a real face image, i.e., is a generated face image (e.g., the number 0 or the vector (0,1)). As another example, the discrimination result may include a probability that the face image is a real face image and/or a probability that the face image is not a real face image (i.e., is a generated face image); for example, the discrimination result may be a vector including a first probability that the face image is a real face image and a second probability that the face image is not a real face image.
Sub-step 3025, adjusting parameters of the initial feature extraction model, the initial generator, and the initial discriminator based on the first difference and the second difference.
Here, the execution subject of the training step may first calculate the first difference and the second difference according to a preset loss function (e.g., an L1 norm or an L2 norm). Here, the first difference is the difference between the first discrimination result obtained in sub-step 3024 and a result indicating whether the image input to the initial discriminator is a real face image, and the second difference is the difference between the second discrimination result and a result indicating that the image input to the initial discriminator is a real face image. It will be appreciated that the specific loss functions may differ when the form of the discrimination results output by the initial discriminator differs.
Then, the executing subject of the training step may adjust the parameters of the initial feature extraction model, the initial generator, and the initial discriminator based on the calculated first difference and second difference, and end the parameter adjusting steps when a second preset training end condition is met. For example, the second preset training end condition may include, but is not limited to: the training time exceeding a preset duration, the number of times the parameter adjusting steps have been performed exceeding a preset number, or the difference between the calculated first probability and second probability being smaller than a first preset difference threshold.
Here, the parameters of the initial feature extraction model, the initial generator, and the initial discriminator may be adjusted based on the calculated first and second differences in various implementations. For example, the model parameters of the initial feature extraction model, the initial generator, and the initial discriminator may be adjusted using a BP (Back Propagation) algorithm or an SGD (Stochastic Gradient Descent) algorithm.
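Sub-steps 3024 and 3025 can be sketched with a binary cross-entropy loss, labelling the live face image as real (1) and the generated face image as not real (0); the label convention and the BCE loss are assumptions, and for brevity only the discriminator update is shown, whereas the application also adjusts the feature extraction model and the generator based on the two differences:

```python
import torch
import torch.nn.functional as F

optimizer_d = torch.optim.SGD(discriminator.parameters(), lr=0.01)  # assumed optimizer

def adversarial_step(generated_face, live_face):
    """Discriminate both images (sub-step 3024) and adjust parameters based on the
    first and second differences, here computed as BCE losses (sub-step 3025)."""
    real_label = torch.ones(live_face.size(0), 1)    # "is a real face image"
    fake_label = torch.zeros(live_face.size(0), 1)   # "is not a real face image"

    first_result = discriminator(generated_face)     # first discrimination result
    second_result = discriminator(live_face)         # second discrimination result

    first_difference = F.binary_cross_entropy(first_result, fake_label)
    second_difference = F.binary_cross_entropy(second_result, real_label)

    loss = first_difference + second_difference      # back-propagated to adjust parameters
    optimizer_d.zero_grad()
    loss.backward()
    optimizer_d.step()
    return loss.item()
```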
Thus, by repeatedly optimizing the initial feature extraction model, the initial generator, and the initial discriminator in step 302, the generated image obtained by inputting a face image into the initial feature extraction model and then inputting the resulting image features into the initial generator becomes similar to the face image input into the initial feature extraction model; that is, the initial feature extraction model and the initial generator learn the features of the live face images.
Step 303, determining the feature extraction model and the generative adversarial network.

In this embodiment, the executing subject of the training step may determine the initial feature extraction model trained after step 302 as the feature extraction model, and determine the initial generator and the initial discriminator as the generator and the discriminator in the generative adversarial network, respectively. Thus, using the training steps described in steps 301 to 303, the pre-trained feature extraction model and the generative adversarial network can be obtained.
Step 204, generating a living body detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image.
Because the feature extraction model and the generative adversarial network are trained on the set of live face images, they learn only the features of live face images, not the features of non-live face images. Therefore, if a face image to be detected that was obtained by photographing a live face is input into the feature extraction model and the generator of the generative adversarial network, the generated image is more similar to the face image to be detected. Conversely, if a face image to be detected that was obtained by photographing a non-live face is input into the feature extraction model and the generator, the generated image is not very similar to the face image to be detected. That is, the higher the similarity between the face image to be detected and the obtained generated image, the higher the possibility that the face in the face image to be detected is a live face; conversely, the lower the similarity, the lower that possibility.
Based on the above understanding, in the present embodiment, the execution subject may generate the living body detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image. The living body detection result characterizes whether the face in the face image is a live face. For example, the living body detection result may be a detection result identifier characterizing that the face in the face image is a live face (e.g., the number 1 or the vector (1,0)) or a detection result identifier characterizing that the face in the face image is not a live face (e.g., the number 0 or the vector (0,1)). As another example, the living body detection result may include a probability that the face in the face image is a live face and/or a probability that the face in the face image is a non-live face; for example, the living body detection result may be a vector including a third probability and a fourth probability, where the third probability characterizes the probability that the face in the face image is a live face, and the fourth probability characterizes the probability that the face in the face image is a non-live face.
In some optional implementations of this embodiment, step 204 may be performed as follows:
first, the similarity between the face image to be detected and the resulting generated image can be calculated.
Here, the similarity between the two images may be calculated using various methods, for example, histogram matching, mathematical matrix decomposition (such as singular value decomposition and non-negative matrix decomposition), an image similarity calculation method based on feature points, and the like may be used.
Then, the probability value that the face image to be detected is the living face can be determined according to the calculated similarity.
For example, when the calculated similarity is a numerical value (which may be in the form of a decimal or percentage) between 0 and 1, the calculated similarity may be directly determined as a probability value that the face image to be detected is a living face.
For another example, when the calculated similarity is not a value between 0 and 1, the calculated similarity may be normalized to a value between 0 and 1 (for example, the calculated similarity may be divided by a preset value, or a Sigmoid function is used, or the like), and the normalized value is determined as a probability value that the face image to be detected is the live face.
And finally, taking the calculated probability value as a living body detection result corresponding to the face image to be detected.
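A minimal sketch of this optional implementation, assuming a histogram-correlation similarity and a Sigmoid normalization; both choices are assumptions made only to illustrate the flow above:

```python
import numpy as np

def histogram_similarity(image_a, image_b, bins=64):
    """Assumed similarity measure: correlation between grayscale intensity histograms."""
    hist_a, _ = np.histogram(image_a, bins=bins, range=(0, 255), density=True)
    hist_b, _ = np.histogram(image_b, bins=bins, range=(0, 255), density=True)
    return float(np.corrcoef(hist_a, hist_b)[0, 1])

def liveness_probability(face_image, generated_image):
    """Normalize the similarity to a probability value between 0 and 1 (Sigmoid assumed)."""
    similarity = histogram_similarity(face_image, generated_image)
    return 1.0 / (1.0 + np.exp(-similarity))  # living body detection result as a probability
```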
In some optional implementations of this embodiment, step 204 may also be performed as follows:
first, the similarity between the face image to be detected and the resulting generated image can be calculated.
Secondly, the probability value that the face image to be detected is the living face can be determined according to the similarity obtained by calculation.
And thirdly, in response to determining that the determined probability value is greater than a preset probability value threshold, determining that the face in the face image to be detected is a live face.

And finally, in response to determining that the determined probability value is not greater than the preset probability value threshold, determining that the face in the face image to be detected is not a live face.
The method for detecting a living body provided by the above embodiment of the application extracts the image features of the face image to be detected, inputs the obtained image features into the generator in the pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected, and determines whether the face in the face image to be detected is a live face based on the similarity between the face image to be detected and the obtained generated image. Because the user is not required to perform a specified action, the convenience of living body detection for the user is improved. In addition, compared with video-based liveness detection methods, the liveness detection method provided by the embodiment of the application analyzes only a face image, which increases the speed of living body detection.
With further reference to FIG. 4, a flow 400 of yet another embodiment of a method for detecting a living subject is shown. The process 400 of the method for detecting a living subject includes the steps of:
Step 401, acquiring a face image to be detected.

Step 402, inputting the face image to be detected into a pre-trained feature extraction model to obtain image features corresponding to the face image to be detected.

Step 403, inputting the obtained image features into a generator in a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected.
In this embodiment, the specific operations of step 401, step 402, and step 403 are substantially the same as the operations of step 201, step 202, and step 203 in the embodiment shown in fig. 2, and are not described again here.
Step 404, determining whether the similarity between the face image to be detected and the obtained generated image is greater than a preset similarity threshold.

In the present embodiment, an execution subject (e.g., the server shown in fig. 1) of the method for detecting a living body may first calculate the similarity between the face image to be detected and the obtained generated image.
Then, the executing entity may determine whether the calculated similarity is greater than a preset similarity threshold, and if so, go to step 405, and if not, go to step 406.
Here, the preset similarity threshold may be set manually by a technician, or may be obtained based on statistical analysis of a large amount of sample data. For example, the process of statistical analysis may proceed as follows:
first, a sample face image set is obtained.
The sample face image set comprises a live face image subset and a non-live face image subset.
secondly, for each sample face image in the sample face image set, a similarity calculation step is performed. Here, the similarity calculation step may include:
Firstly, inputting the sample face image into the pre-trained feature extraction model to obtain image features corresponding to the sample face image.

Secondly, inputting the obtained image features into the generator in the pre-trained generative adversarial network to obtain a generated image corresponding to the sample face image.

Thirdly, calculating the similarity between the generated image and the sample face image.
And thirdly, determining the minimum value of the similarity between the generated image corresponding to each living body face image in the living body face image subset and the living body face image.
Then, the maximum value of the similarity between the generated image corresponding to each non-living body face image in the non-living body face image subset and the non-living body face image is determined.
And finally, taking the average value of the determined minimum value and the determined maximum value as a preset similarity threshold value.
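The statistical procedure above reduces to a simple computation; the sketch below assumes the per-image similarities have already been computed with the similarity calculation step:

```python
def derive_similarity_threshold(live_similarities, non_live_similarities):
    """Average of the minimum live similarity and the maximum non-live similarity."""
    min_live = min(live_similarities)            # worst reconstruction among live face images
    max_non_live = max(non_live_similarities)    # best reconstruction among non-live face images
    return (min_live + max_non_live) / 2.0       # preset similarity threshold
```

For example, if the minimum live similarity were 0.82 and the maximum non-live similarity were 0.54, the preset similarity threshold would be (0.82 + 0.54) / 2 = 0.68.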
Step 405, determining that the face in the face image to be detected is a live face.

In this embodiment, the executing entity may determine that the face in the face image to be detected is a live face in the case that it is determined in step 404 that the similarity between the face image to be detected and the obtained generated image is greater than the preset similarity threshold.
Step 406, determining that the face in the face image to be detected is not a live face.
In this embodiment, the executing entity may determine that the face in the face image to be detected is not a living face in the case that it is determined in step 404 that the similarity between the face image to be detected and the obtained generated image is not greater than the preset similarity threshold.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for detecting a living body in this embodiment highlights the step of directly determining whether the face in the face image to be detected is a living body face according to the similarity between the face image to be detected and the obtained generated image. Therefore, the scheme described in the embodiment can reduce the computational complexity and further accelerate the speed of the living body detection.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for detecting a living body, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for detecting a living body of the present embodiment includes: an acquisition unit 501, a feature extraction unit 502, an image generation unit 503, and a living body detection unit 504. The acquisition unit 501 is configured to acquire a face image to be detected; the feature extraction unit 502 is configured to input the face image to be detected into a pre-trained feature extraction model to obtain image features corresponding to the face image to be detected, where the feature extraction model is used to extract features of face images; the image generation unit 503 is configured to input the obtained image features into a generator in a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected, where the generative adversarial network includes the generator and a discriminator, and the feature extraction model and the generative adversarial network are trained on a set of live face images; and the living body detection unit 504 is configured to generate a living body detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image.
In this embodiment, specific processes of the obtaining unit 501, the feature extracting unit 502, the image generating unit 503, and the living body detecting unit 504 of the apparatus 500 for detecting a living body and technical effects brought by the specific processes can respectively refer to the related descriptions of step 201, step 202, step 203, and step 204 in the corresponding embodiment of fig. 2, and are not repeated herein.
In some optional implementations of the present embodiment, the living body detection unit 504 may include: a first determining module 5041 configured to determine whether the similarity between the face image to be detected and the obtained generated image is greater than a preset similarity threshold; and a second determining module 5042 configured to determine, in response to determining that the similarity is greater than the preset similarity threshold, that the face in the face image to be detected is a live face.
In some optional implementations of the present embodiment, the living body detection unit 504 may further include: a third determining module 5043 configured to determine, in response to determining that the similarity is not greater than the preset similarity threshold, that the face in the face image to be detected is not a live face.
In some optional implementations of the embodiment, the feature extraction model and the generative adversarial network may be trained through the following training steps: acquiring a set of live face images; for the live face images in the set of live face images, performing the following parameter adjusting steps: inputting the live face image into an initial feature extraction model to obtain image features corresponding to the live face image; inputting the obtained image features into an initial generator to obtain a generated face image; adjusting parameters of the initial feature extraction model and the initial generator based on the similarity between the obtained generated face image and the live face image; inputting the obtained generated face image and the live face image into an initial discriminator, respectively, to obtain a first discrimination result and a second discrimination result, where the first discrimination result and the second discrimination result respectively characterize whether the obtained generated face image and the live face image are real face images; and adjusting parameters of the initial feature extraction model, the initial generator, and the initial discriminator based on a first difference between the first discrimination result and a result indicating whether the image input to the initial discriminator is a real face image, and a second difference between the second discrimination result and a result indicating that the image input to the initial discriminator is a real face image.
In some optional implementations of this embodiment, before the parameter adjusting steps are performed for the live face images in the set of live face images, the training steps may further include: determining model structure information of the initial feature extraction model, network structure information of the initial generator, and network structure information of the initial discriminator, and initializing the model parameters of the initial feature extraction model, the network parameters of the initial generator, and the network parameters of the initial discriminator.
In some optional implementations of this embodiment, the feature extraction model may be a convolutional neural network.
It should be noted that, for details of implementation and technical effects of each unit in the apparatus for detecting a living body provided in the embodiments of the present application, reference may be made to descriptions of other embodiments in the present application, and details are not described herein again.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing the electronic device of an embodiment of the present application. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 606 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An Input/Output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: a storage section 606 including a hard disk and the like; and a communication section 607 including a network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 607 performs communication processing via a network such as the Internet. A drive 608 is also connected to the I/O interface 605 as needed. A removable medium 609, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 608 as necessary, so that a computer program read out therefrom is installed into the storage section 606 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 607 and/or installed from the removable medium 609. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a feature extraction unit, an image generation unit, and a living body detection unit. The names of these units do not in some cases constitute a limitation on the unit itself, and for example, the acquisition unit may also be described as a "unit that acquires a face image to be detected".
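As a software-only illustration of a processor comprising the four units, they can be composed as four callables invoked in sequence. Everything below, including the class and attribute names, is hypothetical; the patent allows the units to be implemented in software or hardware.

```python
class LivenessApparatus:
    """Sketch of the apparatus: acquisition unit, feature extraction unit,
    image generation unit, and living body detection unit composed in
    sequence. The callables passed in are placeholders."""

    def __init__(self, acquire, extract_features, generate_image, detect_liveness):
        self.acquire = acquire                    # acquisition unit
        self.extract_features = extract_features  # feature extraction unit
        self.generate_image = generate_image      # image generation unit
        self.detect_liveness = detect_liveness    # living body detection unit

    def __call__(self):
        face = self.acquire()                          # face image to be detected
        features = self.extract_features(face)         # image features
        generated = self.generate_image(features)      # generated image
        return self.detect_liveness(face, generated)   # living body detection result
```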
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquiring a human face image to be detected; inputting a facial image to be detected into a pre-trained feature extraction model to obtain image features corresponding to the facial image to be detected, wherein the feature extraction model is used for extracting the features of the facial image; inputting the obtained image features into a generator in a pre-trained generation countermeasure network to obtain a generated image corresponding to the face image to be detected, wherein the generation countermeasure network comprises the generator and a discriminator, and the feature extraction model and the generation countermeasure network are obtained based on live face image set training; and generating a living body detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
Claims (14)
1. A method for detecting a living body, comprising:
acquiring a human face image to be detected;
inputting the facial image to be detected into a pre-trained feature extraction model to obtain image features corresponding to the facial image to be detected, wherein the feature extraction model is used for extracting features of the facial image;
inputting the obtained image features into a generator in a pre-trained generation countermeasure network to obtain a generated image corresponding to the face image to be detected, wherein the generation countermeasure network comprises the generator and a discriminator, and the feature extraction model and the generation countermeasure network are obtained based on live face image set training;
and generating a living body detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image.
2. The method according to claim 1, wherein the generating a living body detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image comprises:
determining whether the similarity between the face image to be detected and the obtained generated image is greater than a preset similarity threshold value;
and in response to determining that the similarity is greater than the preset similarity threshold, determining that the face in the face image to be detected is a living face.
3. The method according to claim 2, wherein the generating of the in-vivo detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image further comprises:
and in response to determining that the similarity is not greater than the preset similarity threshold, determining that the face in the face image to be detected is not a living face.
4. The method of claim 1, wherein the feature extraction model and the generating a countermeasure network are trained by the training steps of:
acquiring a living body face image set;
for the living face images in the living face image set, executing the following parameter adjusting steps: inputting the living body face image into an initial feature extraction model to obtain image features corresponding to the living body face image; inputting the obtained image characteristics into an initial generator to obtain a generated face image; adjusting parameters of the initial feature extraction model and the initial generator based on the obtained similarity between the generated face image and the living body face image; respectively inputting the obtained generated face image and the living body face image into an initial discriminator to obtain a first discrimination result and a second discrimination result, wherein the first discrimination result and the second discrimination result are respectively used for representing whether the obtained generated face image and the living body face image are real face images or not; adjusting parameters of the initial feature extraction model, the initial generator, and the initial discriminator based on a first difference between the first discrimination result and a discrimination result indicating whether an image input to the initial discriminator is a real face image or not, and a second difference between the second discrimination result and a discrimination result indicating that an image input to the initial discriminator is a real face image;
determining the initial feature extraction model as the feature extraction model, and determining the initial generator and the initial discriminator as the generator and the discriminator in the generation countermeasure network, respectively.
5. The method of claim 4, wherein, before performing the parameter adjusting steps on the living face images in the living face image set, the training steps further comprise:
determining model structure information of the initial feature extraction model, network structure information of the initial generator, and network structure information of the initial discriminator, and initializing model parameters of the initial feature extraction model, network parameters of the initial generator, and network parameters of the initial discriminator.
6. The method of any of claims 1-5, wherein the feature extraction model is a convolutional neural network.
7. An apparatus for detecting a living body, comprising:
the acquisition unit is configured to acquire a face image to be detected;
the feature extraction unit is configured to input the facial image to be detected into a pre-trained feature extraction model to obtain image features corresponding to the facial image to be detected, wherein the feature extraction model is used for extracting features of the facial image;
the image generation unit is configured to input the obtained image features into a generator in a pre-trained generation countermeasure network to obtain a generated image corresponding to the face image to be detected, wherein the generation countermeasure network comprises the generator and a discriminator, and the feature extraction model and the generation countermeasure network are obtained based on live face image set training;
and the living body detection unit is configured to generate a living body detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image.
8. The apparatus according to claim 7, wherein the living body detecting unit includes:
the first determining module is configured to determine whether the similarity between the face image to be detected and the obtained generated image is greater than a preset similarity threshold value;
and the second determining module is configured to determine, in response to determining that the similarity is greater than the preset similarity threshold, that the face in the face image to be detected is a living body face.
9. The apparatus according to claim 8, wherein the living body detecting unit further includes:
and the third determining module is configured to determine, in response to determining that the similarity is not greater than the preset similarity threshold, that the face in the face image to be detected is not a living body face.
10. The apparatus of claim 7, wherein the feature extraction model and the generating a countermeasure network are trained by training steps comprising:
acquiring a living body face image set;
for the living face images in the living face image set, executing the following parameter adjusting steps: inputting the living body face image into an initial feature extraction model to obtain image features corresponding to the living body face image; inputting the obtained image characteristics into an initial generator to obtain a generated face image; adjusting parameters of the initial feature extraction model and the initial generator based on the obtained similarity between the generated face image and the living body face image; respectively inputting the obtained generated face image and the living body face image into an initial discriminator to obtain a first discrimination result and a second discrimination result, wherein the first discrimination result and the second discrimination result are respectively used for representing whether the obtained generated face image and the living body face image are real face images or not; adjusting parameters of the initial feature extraction model, the initial generator, and the initial discriminator based on a first difference between the first discrimination result and a discrimination result indicating whether an image input to the initial discriminator is a real face image or not, and a second difference between the second discrimination result and a discrimination result indicating that an image input to the initial discriminator is a real face image;
determining the initial feature extraction model as the feature extraction model, and determining the initial generator and the initial discriminator as the generator and the discriminator in the generation countermeasure network, respectively.
11. The apparatus of claim 10, wherein, before performing the parameter adjusting steps on the living face images in the living face image set, the training steps further comprise:
determining model structure information of the initial feature extraction model, network structure information of the initial generator, and network structure information of the initial discriminator, and initializing model parameters of the initial feature extraction model, network parameters of the initial generator, and network parameters of the initial discriminator.
12. The apparatus of any of claims 7-11, wherein the feature extraction model is a convolutional neural network.
13. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-6.
14. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810259543.3A CN108537152B (en) | 2018-03-27 | 2018-03-27 | Method and apparatus for detecting living body |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810259543.3A CN108537152B (en) | 2018-03-27 | 2018-03-27 | Method and apparatus for detecting living body |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108537152A CN108537152A (en) | 2018-09-14 |
CN108537152B true CN108537152B (en) | 2022-01-25 |
Family
ID=63483752
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810259543.3A Active CN108537152B (en) | 2018-03-27 | 2018-03-27 | Method and apparatus for detecting living body |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108537152B (en) |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111008294B (en) * | 2018-10-08 | 2023-06-20 | 阿里巴巴集团控股有限公司 | Traffic image processing and image retrieval method and device |
CN109151443A (en) * | 2018-10-15 | 2019-01-04 | Oppo广东移动通信有限公司 | High degree of comfort three-dimensional video-frequency generation method, system and terminal device |
CN111145455A (en) * | 2018-11-06 | 2020-05-12 | 天地融科技股份有限公司 | Method and system for detecting face risk in surveillance video |
CN109635770A (en) * | 2018-12-20 | 2019-04-16 | 上海瑾盛通信科技有限公司 | Biopsy method, device, storage medium and electronic equipment |
CN109800730B (en) * | 2019-01-30 | 2022-03-08 | 北京字节跳动网络技术有限公司 | Method and device for generating head portrait generation model |
CN111753595A (en) * | 2019-03-29 | 2020-10-09 | 北京市商汤科技开发有限公司 | Living body detection method and apparatus, device, and storage medium |
CN110059624B (en) * | 2019-04-18 | 2021-10-08 | 北京字节跳动网络技术有限公司 | Method and apparatus for detecting living body |
CN110473137B (en) * | 2019-04-24 | 2021-09-14 | 华为技术有限公司 | Image processing method and device |
CN110298295A (en) * | 2019-06-26 | 2019-10-01 | 中国海洋大学 | Mobile terminal on-line study measure of supervision based on recognition of face |
CN110490076B (en) * | 2019-07-18 | 2024-03-01 | 平安科技(深圳)有限公司 | Living body detection method, living body detection device, computer equipment and storage medium |
CN112330526B (en) * | 2019-08-05 | 2024-02-09 | 深圳Tcl新技术有限公司 | Training method of face conversion model, storage medium and terminal equipment |
WO2021046773A1 (en) * | 2019-09-11 | 2021-03-18 | 深圳市汇顶科技股份有限公司 | Facial anti-counterfeiting detection method and apparatus, chip, electronic device and computer-readable medium |
CN110599487A (en) * | 2019-09-23 | 2019-12-20 | 北京海益同展信息科技有限公司 | Article detection method, apparatus and storage medium |
CN110941986B (en) * | 2019-10-10 | 2023-08-01 | 平安科技(深圳)有限公司 | Living body detection model training method, living body detection model training device, computer equipment and storage medium |
CN118840324A (en) * | 2019-12-19 | 2024-10-25 | 联想(北京)有限公司 | Detection method and electronic equipment |
CN111260545B (en) * | 2020-01-20 | 2023-06-20 | 北京百度网讯科技有限公司 | Method and device for generating image |
CN111275784B (en) * | 2020-01-20 | 2023-06-13 | 北京百度网讯科技有限公司 | Method and device for generating image |
CN111291730A (en) * | 2020-03-27 | 2020-06-16 | 深圳阜时科技有限公司 | Face anti-counterfeiting detection method, server and storage medium |
CN111553202B (en) * | 2020-04-08 | 2023-05-16 | 浙江大华技术股份有限公司 | Training method, detection method and device for neural network for living body detection |
CN111539903B (en) * | 2020-04-16 | 2023-04-07 | 北京百度网讯科技有限公司 | Method and device for training face image synthesis model |
CN111507262B (en) * | 2020-04-17 | 2023-12-08 | 北京百度网讯科技有限公司 | Method and apparatus for detecting living body |
CN113689527B (en) * | 2020-05-15 | 2024-02-20 | 武汉Tcl集团工业研究院有限公司 | Training method of face conversion model and face image conversion method |
CN111754596B (en) * | 2020-06-19 | 2023-09-19 | 北京灵汐科技有限公司 | Editing model generation method, device, equipment and medium for editing face image |
CN112633113B (en) * | 2020-12-17 | 2024-07-16 | 厦门大学 | Cross-camera human face living body detection method and system |
CN113033305B (en) * | 2021-02-21 | 2023-05-12 | 云南联合视觉科技有限公司 | Living body detection method, living body detection device, terminal equipment and storage medium |
CN113516107B (en) * | 2021-09-09 | 2022-02-15 | 浙江大华技术股份有限公司 | Image detection method |
CN116266419A (en) * | 2021-12-15 | 2023-06-20 | 腾讯科技(上海)有限公司 | Living body detection method and device and computer equipment |
CN114445918B (en) * | 2022-02-21 | 2024-09-20 | 支付宝(杭州)信息技术有限公司 | Living body detection method, device and equipment |
CN116070695B (en) * | 2023-04-03 | 2023-07-18 | 中国科学技术大学 | Training method of image detection model, image detection method and electronic equipment |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8457367B1 (en) * | 2012-06-26 | 2013-06-04 | Google Inc. | Facial recognition |
CN105956572A (en) * | 2016-05-15 | 2016-09-21 | 北京工业大学 | In vivo face detection method based on convolutional neural network |
CN106203305A (en) * | 2016-06-30 | 2016-12-07 | 北京旷视科技有限公司 | Human face in-vivo detection method and device |
CN106997380A (en) * | 2017-03-21 | 2017-08-01 | 北京工业大学 | Imaging spectrum safe retrieving method based on DCGAN depth networks |
CN107066942A (en) * | 2017-03-03 | 2017-08-18 | 上海斐讯数据通信技术有限公司 | A kind of living body faces recognition methods and system |
CN107423701A (en) * | 2017-07-17 | 2017-12-01 | 北京智慧眼科技股份有限公司 | The non-supervisory feature learning method and device of face based on production confrontation network |
CN107563355A (en) * | 2017-09-28 | 2018-01-09 | 哈尔滨工程大学 | Hyperspectral abnormity detection method based on generation confrontation network |
CN107563283A (en) * | 2017-07-26 | 2018-01-09 | 百度在线网络技术(北京)有限公司 | Method, apparatus, equipment and the storage medium of generation attack sample |
CN107766820A (en) * | 2017-10-20 | 2018-03-06 | 北京小米移动软件有限公司 | Image classification method and device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105912986B (en) * | 2016-04-01 | 2019-06-07 | 北京旷视科技有限公司 | A kind of biopsy method and system |
CN107451510B (en) * | 2016-05-30 | 2023-07-21 | 北京旷视科技有限公司 | Living body detection method and living body detection system |
2018-03-27: CN application CN201810259543.3A, patent CN108537152B (en), status Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8457367B1 (en) * | 2012-06-26 | 2013-06-04 | Google Inc. | Facial recognition |
CN105956572A (en) * | 2016-05-15 | 2016-09-21 | 北京工业大学 | In vivo face detection method based on convolutional neural network |
CN106203305A (en) * | 2016-06-30 | 2016-12-07 | 北京旷视科技有限公司 | Human face in-vivo detection method and device |
CN107066942A (en) * | 2017-03-03 | 2017-08-18 | 上海斐讯数据通信技术有限公司 | A kind of living body faces recognition methods and system |
CN106997380A (en) * | 2017-03-21 | 2017-08-01 | 北京工业大学 | Imaging spectrum safe retrieving method based on DCGAN depth networks |
CN107423701A (en) * | 2017-07-17 | 2017-12-01 | 北京智慧眼科技股份有限公司 | The non-supervisory feature learning method and device of face based on production confrontation network |
CN107563283A (en) * | 2017-07-26 | 2018-01-09 | 百度在线网络技术(北京)有限公司 | Method, apparatus, equipment and the storage medium of generation attack sample |
CN107563355A (en) * | 2017-09-28 | 2018-01-09 | 哈尔滨工程大学 | Hyperspectral abnormity detection method based on generation confrontation network |
CN107766820A (en) * | 2017-10-20 | 2018-03-06 | 北京小米移动软件有限公司 | Image classification method and device |
Non-Patent Citations (2)
Title |
---|
Jianwei Yang et al.; "Learn Convolutional Neural Network for Face Anti-Spoofing"; arXiv; 2014-08-26; pp. 1-8 *
Thomas Schlegl et al.; "Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery"; arXiv; 2017-03-17; pp. 1-12 *
Also Published As
Publication number | Publication date |
---|---|
CN108537152A (en) | 2018-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108537152B (en) | Method and apparatus for detecting living body | |
CN108416324B (en) | Method and apparatus for detecting living body | |
CN108416323B (en) | Method and device for recognizing human face | |
CN112446270B (en) | Training method of pedestrian re-recognition network, pedestrian re-recognition method and device | |
JP6798183B2 (en) | Image analyzer, image analysis method and program | |
WO2022134971A1 (en) | Noise reduction model training method and related apparatus | |
CN111598998A (en) | Three-dimensional virtual model reconstruction method and device, computer equipment and storage medium | |
CN112236779A (en) | Image processing method and image processing device based on convolutional neural network | |
US11087140B2 (en) | Information generating method and apparatus applied to terminal device | |
CN110222717A (en) | Image processing method and device | |
CN112492297B (en) | Video processing method and related equipment | |
CN111797882A (en) | Image classification method and device | |
CN108509994B (en) | Method and device for clustering character images | |
CN108241855B (en) | Image generation method and device | |
CN113191489A (en) | Training method of binary neural network model, image processing method and device | |
CN117115595B (en) | Training method and device of attitude estimation model, electronic equipment and storage medium | |
CN113179421B (en) | Video cover selection method and device, computer equipment and storage medium | |
CN114359289A (en) | Image processing method and related device | |
CN112529149B (en) | Data processing method and related device | |
CN111178187A (en) | Face recognition method and device based on convolutional neural network | |
CN108038473B (en) | Method and apparatus for outputting information | |
CN108399401B (en) | Method and device for detecting face image | |
CN113284055A (en) | Image processing method and device | |
CN111523593A (en) | Method and apparatus for analyzing medical images | |
WO2024188171A1 (en) | Image processing method and related device thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||