CN107992797B - Face recognition method and related device - Google Patents

Face recognition method and related device

Info

Publication number
CN107992797B
CN107992797B (application CN201711064063.3A)
Authority
CN
China
Prior art keywords
image
face
camera
detected
target person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711064063.3A
Other languages
Chinese (zh)
Other versions
CN107992797A (en
Inventor
李治农
杨李木
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Entropy Technology Co Ltd
Original Assignee
ZKTeco Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZKTeco Co Ltd filed Critical ZKTeco Co Ltd
Priority to CN201711064063.3A priority Critical patent/CN107992797B/en
Publication of CN107992797A publication Critical patent/CN107992797A/en
Application granted granted Critical
Publication of CN107992797B publication Critical patent/CN107992797B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/10: Image acquisition
    • G06V 10/12: Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14: Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/145: Illumination specially adapted for pattern recognition, e.g. using gratings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00: Individual registration on entry or exit
    • G07C 9/30: Individual registration on entry or exit not involving the use of a pass
    • G07C 9/32: Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C 9/37: Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The embodiment of the invention discloses a face recognition method and a related device, applied to a channel gate comprising a camera and a projection device, wherein the projection device comprises a projection light source and a film on which a designated pattern is printed, wherein: when a target person is in the identification area, a first image of the target person is acquired through the camera; the face position of the target person is determined according to the first image; the designated pattern is projected to the face position through the projection light source, and a second image of the object to be detected corresponding to the face position is acquired through the camera; whether the object to be detected is a two-dimensional face image or a three-dimensional face object is judged according to the second image; when the object to be detected is a three-dimensional face object, the projection light source is turned off and a third image of the object to be detected is acquired through the camera; face recognition is performed on the third image, and the target person is allowed to pass through the channel gate when the face recognition passes. The embodiment of the invention can improve the safety of face recognition.

Description

Face recognition method and related device
Technical Field
The invention relates to the technical field of electronics, in particular to a face recognition method and a related device.
Background
In recent years, with the development and progress of society, many closed-management places need to identify people entering and leaving, and to apply the identification result to entrance guard management. Many identification technologies already exist on the market, mainly including fingerprint identification, iris identification, face identification and the like; among these, face identification is particularly suitable for entrance and exit applications owing to its uniqueness, non-compulsory nature, non-contact nature and other characteristics. However, face recognition based on two-dimensional feature points can still be spoofed: a fake photo can be imaged, its two-dimensional feature points pass the verification, and passage through the gate is then allowed, which results in low security.
Disclosure of Invention
The embodiment of the invention provides a face recognition method and a related device, which can improve the safety of face recognition.
In a first aspect, an embodiment of the present invention provides a face recognition method, which is applied to a channel gate including a camera and a projection device, where the projection device includes a projection light source and a film, and a designated pattern is printed on the film, where:
when a target person is in the identification area, acquiring a first image of the target person through the camera;
determining the face position of the target person according to the first image;
projecting the designated pattern to the face position through the projection light source, and acquiring a second image of the object to be detected corresponding to the face position through the camera;
judging whether the object to be detected is a two-dimensional face image or a three-dimensional face object according to the second image;
when the object to be detected is a three-dimensional face object, the projection light source is turned off, and a third image of the object to be detected is acquired through the camera;
and performing face recognition on the third image, and allowing the target person to pass through the channel gate when the face recognition passes.
In a second aspect, an embodiment of the present invention provides a channel gate, including a camera and a projection device, where the projection device includes a projection light source and a film printed with a designated pattern, and the channel gate further includes:
the acquisition unit is used for acquiring a first image of a target person through the camera when the target person is in the identification area;
a determining unit, configured to determine a face position of the target person according to the first image;
the projection unit is used for projecting the specified pattern to the position of the human face through the projection light source;
the acquisition unit is also used for acquiring a second image of the object to be detected corresponding to the face position through the camera;
the judging unit is used for judging whether the object to be detected is a two-dimensional face image or a three-dimensional face object according to the second image;
the projection unit is further used for turning off the projection light source when the object to be detected is a three-dimensional human face object;
the acquisition unit is also used for acquiring a third image of the object to be detected through the camera;
and the processing unit is used for carrying out face recognition on the third image and allowing the target person to pass through the channel gate when the face recognition passes.
In a third aspect, the present invention provides a computer-readable storage medium, in which a computer program is stored, the computer program including program instructions, which, when executed by a processor, cause the processor to perform the method of the first aspect.
According to the embodiment of the invention, when the target person is in the identification area, the face position of the target person is determined from the first image acquired by the camera. The projection device then projects the designated pattern onto the object to be detected at the face position, and the camera acquires a second image of the object to be detected under projection. If the object to be detected is judged, according to the second image, to be a three-dimensional face object, namely a living face, the projection light source can be turned off, a third image of the object to be detected is collected through the camera, face recognition is carried out on the third image, and when the face recognition passes, the target person is allowed to pass through the channel gate. That is to say, face recognition is carried out only after the object to be detected is determined to be a three-dimensional face object, so that an illegal user is prevented from carrying out face recognition with a two-dimensional face image, and the safety of the verification carried out by the channel gate is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Wherein:
fig. 1 is a schematic view of a scene of a gateway for face recognition according to an embodiment of the present invention;
FIG. 1A is a schematic view of a scene for capturing a second image according to an embodiment of the present invention;
FIG. 1B is a schematic view of another scene for acquiring a second image according to an embodiment of the present invention;
fig. 1C is a schematic structural diagram of a camera and a projection apparatus according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a face recognition method according to an embodiment of the present invention;
fig. 2A is a schematic view of a scene where a camera acquires a second image according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a passage gate according to an embodiment of the present invention;
fig. 3A is a schematic structural diagram of a determining unit according to an embodiment of the present invention;
fig. 3B is a schematic structural diagram of a determining unit according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another tunnel gate according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [described condition or event]" or "in response to detecting [described condition or event]".
The embodiment of the invention provides a face recognition method and a related device, which can improve the safety of face recognition.
Referring to fig. 1, fig. 1 is a scene schematic diagram of a method for face recognition by a channel gate according to an embodiment of the present invention. As shown in fig. 1, the channel gate 100 includes a camera 110 and a projection device 120.
The channel gate 100 is a passage blocking device (passage management apparatus) for managing pedestrian flow and regulating the entrance and exit of pedestrians, mainly applied to subway gate systems and ticket checking gate systems. Its most basic function is to ensure that only one person passes through the gate at a time, thereby improving the safety of the verification performed at the gate.
The channel gate 100 includes a box body, a blocking body, a movement, a control module and an auxiliary module. Wherein: the box body protects internal components such as the movement and the control module and plays a supporting role; the blocking body blocks passage when a pedestrian is not allowed to pass and opens when passage is allowed, and is generally realized in the form of a door wing or a barrier rod; the movement integrates various mechanical parts (including a driving motor, a speed reducer and the like) and controls the opening and closing actions of the blocking body by mechanical principles; the control module uses microprocessor technology to control the various electrical components and the driving motor, where a simple control circuit can generally be realized by a main control board, a motor control board and an auxiliary control board, while a complex control circuit (such as that of a subway ticket checker) needs a dedicated industrial personal computer; the auxiliary module comprises an LED indication module, a counting module, a pedestrian detection module, an alarm module, a permission input module, a voice prompt module and the like, and the camera 110 and the projection device 120 belong to the permission input module of the auxiliary module.
The projection device 120 includes a projection light source and a film, and the specific form of each is not limited. The projection light source may, for example, be the laser emitted by a diode laser pen; the film, produced by printing plate-making, is light, thin and small in size, and is printed with a designated pattern for subsequent projection onto an object, where the designated pattern may be chosen as stripes or a grid.
Structured Light is an active optical measurement technology, and the structured light measurement method is commonly used to measure the three-dimensional profile of an object. A line laser source irradiates the light plane it generates onto the object surface to produce structured light stripes; a photoelectric sensor acquires, from a certain angle, the laser stripe information modulated by the object to be measured; the measurement involves calculating the coordinates and phase of the stripe image. The projection light source image employed is generally calculated and designed by computer, and the light stripe image in structured light measurement needs to be processed by an algorithm. The method simplifies the device and the process, and is convenient to apply in a non-contact biometric identification device.
The face recognition method adopted in the embodiment of the present invention is based on structured light, that is, the line laser source is the projection device 120, the object surface is the face surface (the surface of the object to be detected), the projection light source image is a designated pattern printed on a film, and the photoelectric sensor is the camera 110.
It should be noted that, in the embodiment of the present invention, the projection device 120 is not required to measure the three-dimensional topography of the object to be detected. Instead, the projection light source is directed at the object to be detected at the face position, that is, the designated pattern is projected onto the object to be detected; at this time, the optical axis of the camera 110 and the projection optical axis of the projection device 120 form a certain angle, so that a deformed image produced by the height variation of the surface of the object to be detected can be obtained. The three-dimensional feature of the object to be detected is determined from this deformed image, so as to judge whether the object to be detected is a two-dimensional face image or a three-dimensional face object.
For example, when the object to be detected is a two-dimensional face image, as shown in fig. 1A, the grid lines on the two-dimensional face image remain flat straight lines; when the object to be detected is a three-dimensional face object, as shown in fig. 1B, the grid lines on the three-dimensional face object appear as deformed lines.
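The flat-versus-deformed distinction above can be sketched as a simple straightness test. The following is a hypothetical minimal example, not the patent's implementation: given sampled (x, y) points along one projected grid line, it fits a least-squares line and uses the mean absolute residual to decide whether the stripe stayed straight (two-dimensional photo) or was bent by surface relief (three-dimensional face). The threshold value is an illustrative assumption.

```python
# Hypothetical sketch: classify a projected stripe as "flat" (2D photo) or
# "deformed" (3D face) from sampled (x, y) points along one grid line.
# The deformation threshold is illustrative, not taken from the patent.

def stripe_deformation(points):
    """Mean absolute residual of the points around a least-squares line."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    # Least-squares slope and intercept of y = a*x + b.
    sxx = sum((x - mean_x) ** 2 for x, _ in points)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in points)
    a = sxy / sxx
    b = mean_y - a * mean_x
    return sum(abs(y - (a * x + b)) for x, y in points) / n

def is_three_dimensional(points, threshold=1.0):
    """A stripe bent by surface relief deviates from a straight line."""
    return stripe_deformation(points) > threshold

# Photo: stripe stays straight. Face: stripe bumps upward over the nose.
flat = [(x, 10.0) for x in range(20)]
bent = [(x, 10.0 + (5.0 if 8 <= x <= 12 else 0.0)) for x in range(20)]
print(is_three_dimensional(flat), is_three_dimensional(bent))  # False True
```

In practice every stripe of the grid would be tested, but the per-stripe decision is the same.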
In the embodiment of the invention, the camera 110 and the projection device 120 are fixedly installed on the box body of the channel gate 100. The angles between the camera 110 or the projection device 120 and the installation surface, the spacing distance between them, their viewing and projection angles, the identification distance of the target person and the like are not limited. This makes the devices convenient to install directly on an existing channel gate, and since the camera already used for face identification is reused, cost is saved.
For example, assuming that the height of the channel gate is 1.1 m, as shown in fig. 1C, the projection device 120 includes a projection light source 121 and a film 122; the camera 110 and the projection device 120 are fixedly installed on the box body of the channel gate; the camera 110 forms an angle of 32° with the installation surface and the projection device 120 an angle of 34°; the center of the projection device is 2 cm from the center of the camera 110; the camera 110 adopts a viewing angle of 60°, the projection angle of the projection device 120 is 60°, and the identification distance is 50 cm. That is to say, when executing the face recognition method provided by the embodiment of the present invention, once the target person enters the recognition area (50 cm), the camera can capture a person within a height range of 1.45 m to 1.9 m; the face is located from the image collected by the camera 110, and the projection light source 121 disposed 2 cm below the camera 110 projects the designated pattern on the film 122 onto the face, so as to eliminate two-dimensional anti-counterfeiting attacks and improve the safety of face recognition.
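The vertical band covered at the identification distance follows from simple trigonometry on the mounting height, tilt and field of view. The sketch below is a hypothetical geometry illustration: how the angles are referenced (here, tilt above horizontal with a symmetric vertical field of view) is an assumption, so it will not reproduce the 1.45 m to 1.9 m figure unless the same conventions as the patent's drawings are used.

```python
import math

# Hypothetical geometry sketch: the band of heights a tilted camera sees at a
# given distance. The angle conventions (tilt above horizontal, symmetric
# vertical FOV) are assumptions, not taken from the patent's figures.

def coverage_band(mount_h, tilt_deg, fov_deg, dist):
    """Heights visible at `dist` metres for a camera mounted at `mount_h`
    metres, tilted `tilt_deg` above horizontal, with `fov_deg` vertical FOV."""
    lo = mount_h + dist * math.tan(math.radians(tilt_deg - fov_deg / 2))
    hi = mount_h + dist * math.tan(math.radians(tilt_deg + fov_deg / 2))
    return lo, hi

lo, hi = coverage_band(mount_h=1.1, tilt_deg=32.0, fov_deg=60.0, dist=0.5)
print(f"covered heights: {lo:.2f} m to {hi:.2f} m")
```

The computation shows why a low-mounted, upward-tilted camera can still cover standing adults at a short identification distance.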
Referring to fig. 2, fig. 2 is a schematic flow chart of a face recognition method according to an embodiment of the present invention, as shown in fig. 2, the face recognition method is applied to the tunnel gate shown in fig. 1, wherein:
201: and when the target person is in the identification area, acquiring a first image of the target person through a camera.
202: and determining the face position of the target person according to the first image.
When the target person enters the identification area (for example, the distance between the target person and the gate is 50 cm), the camera acquires the first image of the target person, and the face position of the target person is obtained by face positioning according to the first image. This saves camera power consumption and improves the accuracy with which the camera acquires the face image.
How the face position is determined from the first image is not limited here; the basic features of the face may be utilized. For example: a face image in the first image may be determined according to the degree of matching between a standard face template (or a sub-template of each facial feature) and the first image; or the face may be distinguished from other parts of the first image according to the particularity of face skin color, so as to extract the face image, and the face position corresponding to the face image is then determined according to the viewing angle at which the camera shot the first image. A classifier that correctly distinguishes face and non-face samples may also be established on a training sample set by machine learning.
Optionally, the face features in the first image are extracted to obtain a feature point set, and the face position is determined according to the feature point set. That is to say, the position of the face contour or of the facial features is determined from the feature point set in the first image, and the face position is then determined from these positions.
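The optional step above can be sketched as follows. This is a hypothetical minimal example, not the patent's implementation: feature extraction is assumed to have happened upstream, and the face position is simply taken as the bounding box around the feature point set, padded by a margin so that the box covers the full face contour; the margin value is illustrative.

```python
# Hypothetical sketch: derive a face position from an already-extracted
# feature point set by taking a padded bounding box. The margin is an
# illustrative assumption, not a value from the patent.

def face_position(feature_points, margin=0.25):
    """Bounding box (x, y, w, h) around the feature points, padded by
    `margin` times the box size in every direction."""
    xs = [p[0] for p in feature_points]
    ys = [p[1] for p in feature_points]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (min(xs) - margin * w, min(ys) - margin * h,
            w * (1 + 2 * margin), h * (1 + 2 * margin))

# Toy feature points: two eyes, nose tip, two mouth corners.
pts = [(40, 50), (80, 50), (60, 70), (48, 90), (72, 90)]
print(face_position(pts))  # (30.0, 40.0, 60.0, 60.0)
```

The returned box can then steer both the projection direction and the camera's region of interest in the following step.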
203: the specified pattern is projected to the face position through the projection light source, and a second image of the object to be detected corresponding to the face position is collected through the camera.
When the projection device projects the designated pattern to the face position through the projection light source, the camera collects a second image of the object to be measured corresponding to the projected face position. Referring to fig. 2A, fig. 2A is a schematic view of a scene where a second image is acquired by a projection light source and a camera. As described above with respect to the structured light measurement method and fig. 2A, the specified pattern 122 is projected to the object to be measured at the face position by the projection light source 121 to generate a structured light stripe, and the camera 110 captures a second image of the object to be measured corresponding to the face position of the target person 10, where the second image includes laser stripe information modulated by the face position.
After the camera finishes collecting the first image, the camera can be closed and then reopened when the projection device projects onto the face position, thereby reducing the power consumption of the camera.
Optionally, the viewing angle of the camera and the projection angle of the projection light source are adjusted according to the face position. It can be understood that, with the viewing angle of the camera and the projection light source each adjusted according to the face position, the camera acquires a more accurate second image, which improves the accuracy of the judgment made on the second image.
204: and judging whether the object to be detected is a two-dimensional face image or a three-dimensional face object according to the second image.
The structured light measurement method can obtain a deformation image generated on the surface of the object to be measured by obtaining the height change of the surface of the object to be measured, and obtain the three-dimensional characteristics of the object to be measured by the deformation image of the surface of the object, so as to judge whether the object to be measured is a two-dimensional face image or a three-dimensional face object.
Optionally, performing image processing on the second image according to the designated pattern to obtain a grayscale image; extracting a deformation area according to the gray-scale image and a preset threshold value; generating a binary template of the deformation region; and if the binary template meets a preset condition, determining that the object to be detected is a three-dimensional face object.
In the process of acquiring the second image, influenced by the surface characteristics of the object to be detected and by the background light source, some parts of the surface of the object to be detected become shadow areas; the light intensity at abrupt changes of the surface is larger, while the light intensity of occluded areas is smaller. Local shadow areas, object discontinuities and the background area can be judged by means of the gray-scale image, calculating the gray value for each point. Since the second image is acquired while the designated pattern is being projected, the second image is processed with respect to the designated pattern to obtain the corresponding gray-scale image.
The gray value can be calculated through the Fourier transform, as shown in equation (1), i.e. the sum of the squares of the real part and the imaginary part of the transform coefficient equals the square of the gray value:

G(u, v)^2 = Re(F(u, v))^2 + Im(F(u, v))^2        (1)
The nose bridge and nose tip lie on the symmetric center line of the face, and the second image collected under the projected light shows larger deformation there; for a face, the light intensity at the abrupt nose region is larger. The image of the nose part in the second image can therefore be extracted, and if the image of the nose part is a deformed image whose feature points meet the preset condition for a human nose, the object to be detected is determined to be a three-dimensional face object. Because the gray values in the gray-scale image are small, a reasonable threshold (the preset threshold) can be set to segment the gray-scale image; the threshold can be selected from the histogram, and a binary template is then generated. When the data distribution corresponding to the binary template deviates from a straight line, the preset condition is met, that is, the variation intensity of the deformation region is high.
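The threshold-and-binary-template pipeline above can be sketched as follows. This is a hypothetical minimal example: the threshold value and the deviation condition (stripe pixels spilling over more than one image row) are illustrative assumptions, not the patent's exact criteria.

```python
# Hypothetical sketch: threshold the gray-scale image into a binary template
# of the bright stripe, then test whether the template's points deviate from
# a straight line. Threshold and condition are illustrative assumptions.

def binary_template(gray, threshold):
    """1 where the gray value exceeds the threshold (bright stripe), else 0."""
    return [[1 if v > threshold else 0 for v in row] for row in gray]

def deviates_from_line(template, max_rows=1):
    """A straight horizontal stripe occupies a single image row; spilling
    over more rows indicates deformation of the projected line."""
    rows_hit = {r for r, row in enumerate(template) for v in row if v}
    return len(rows_hit) > max_rows

# A straight stripe sits on one row; a stripe bent by the nose bridge spills
# into a neighbouring row.
straight = [[0] * 8, [9] * 8, [0] * 8]
bent = [[0, 0, 0, 9, 9, 0, 0, 0],
        [9, 9, 9, 0, 0, 9, 9, 9],
        [0] * 8]
print(deviates_from_line(binary_template(straight, 5)),
      deviates_from_line(binary_template(bent, 5)))  # False True
```

A real implementation would pick the threshold from the gray-level histogram as the text describes, rather than hard-coding it.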
The central position of the face can be obtained by Principal Component Analysis (PCA), Probabilistic Principal Component Analysis (PPCA), gray-level vertical integral projection, morphological feature description methods and the like.
Gray-level vertical integral projection convolves the columns of the image matrix with a one-dimensional convolution factor. According to the vertical integral projection formula, the resulting face feature response is the vertical integral projection, and face pre-positioning based on vertical integral projection is completed at a peak of the projection within the face observation range of the captured image. Owing to the pronounced protrusion of the human nose, the structured light stripe on the face jumps noticeably there: the stripe bends obviously downward (or sideways under vertical light) compared with the other stripes. Integral projection in the vertical direction finds the lowest point of the stripe, and, according to the symmetric-center structural characteristic of the face, the central position of the face is determined from the coordinate of the lowest point of the regional integral projection.
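The lowest-point search can be sketched with a row-weighted vertical projection. This is a hypothetical minimal example, not the patent's formula: each stripe pixel is weighted by its row index, so the column whose projection is largest is where the stripe dips lowest, marking the symmetric centre of the face.

```python
# Hypothetical sketch: row-weighted vertical integral projection. The column
# with the maximum projection is where the stripe bends lowest (the nose),
# i.e. the symmetric centre of the face.

def stripe_lowest_column(img):
    """For each column, sum row_index * pixel; return the argmax column."""
    proj = [sum(r * img[r][c] for r in range(len(img)))
            for c in range(len(img[0]))]
    return max(range(len(proj)), key=proj.__getitem__)

# Binary stripe image: one lit pixel per column, flat at row 10 and dipping
# to row 14 over the nose (larger row index = lower in the image).
rows, cols = 16, 9
stripe_row = [10, 10, 11, 13, 14, 13, 11, 10, 10]
img = [[1 if stripe_row[c] == r else 0 for c in range(cols)]
       for r in range(rows)]
print(stripe_lowest_column(img))  # 4
```

With a full gray-scale image the same projection would be taken over the thresholded stripe pixels rather than an idealised one-pixel stripe.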
205: and when the object to be detected is the three-dimensional face object, the projection light source is turned off, and a third image of the object to be detected is acquired through the camera.
206: and performing face recognition on the third image, and allowing the target person to pass through the passage gate when the face recognition passes.
When the object to be detected is judged to be the three-dimensional face object, the projection light source can be turned off, a third image at the face position is collected through the camera, face recognition is carried out on the third image, and the identity information of the target person is determined through the face recognition. And when the face recognition of the third image passes, the target person is allowed to pass through the channel gate, namely, the blocking body of the channel gate is controlled to be in a release state, and when the target person is detected to pass through the channel gate, the blocking body is controlled to be in a blocking state, so that one person can pass through the channel gate at a time, and the safety of the face recognition of the channel gate is improved. And when the object to be detected is judged to be the two-dimensional face image, the target person is rejected to pass through the channel gate, so that an illegal user is prevented from using the two-dimensional face image to perform face identification.
Optionally, when the face recognition of the third image passes, a timer starts to count time; and if the preset time is up and the target person is not detected to pass through the passage gate, controlling the obstruction body to be in the obstruction state. That is, when the face recognition of the target person passes, the access gate allows the target person to pass, but if the target person is not detected to pass within the preset time period, the blocking body is in a blocking state, that is, any person needs to perform the face recognition when passing through the access gate, so that the safety of the access gate is improved.
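The optional timeout above can be sketched as a small controller. This is a hypothetical minimal example, not the patent's control-board logic: after recognition passes, the blocking body is released; it re-blocks either when one person passes or when the preset time elapses with no passage detected. The clock is injected so the behaviour can be exercised without real waiting.

```python
import time

# Hypothetical sketch of the timeout behaviour: release on recognition,
# re-block on passage or after `timeout` seconds with no passage detected.

class BarrierController:
    def __init__(self, timeout=5.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.blocking = True          # barrier starts in the blocking state
        self._released_at = None

    def on_recognition_passed(self):
        self.blocking = False         # release the blocking body
        self._released_at = self.clock()

    def on_person_passed(self):
        self.blocking = True          # one person per recognition
        self._released_at = None

    def tick(self):
        """Called periodically; re-blocks once the preset time elapses."""
        if not self.blocking and self.clock() - self._released_at >= self.timeout:
            self.blocking = True
            self._released_at = None

# Simulate: recognition passes, but nobody walks through within 5 seconds.
fake_now = [0.0]
gate = BarrierController(timeout=5.0, clock=lambda: fake_now[0])
gate.on_recognition_passed()
fake_now[0] = 6.0
gate.tick()
print(gate.blocking)  # True
```

The timeout value and method names are illustrative; the point is that every passage requires a fresh recognition.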
Optionally: extracting the face features in the third image to obtain a feature point set; dividing the feature point set of the third image into K regions, and obtaining the feature point distribution density of each region in the K regions to obtain K feature point distribution densities; obtaining the variance among the K distribution densities; and when the variance is in a preset range, carrying out face recognition on the third image according to the feature point set.
Here K is an integer greater than 1. The K regions may include face-contour regions, facial-feature regions (eyes, nose, mouth, etc.), or regions where feature points are concentrated. That is, when the variance of the K distribution densities falls within the preset range, the feature points in the feature point set are determined to meet the minimum requirement of image quality evaluation, and face recognition of the third image can be performed on the feature point set. If the variance falls outside the preset range, the third image can be re-acquired through the camera.
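The K-region quality evaluation can be sketched as below. The patent only requires K > 1 regions and a preset variance range, so the 2x2 grid division (K = 4) and the numeric range used here are assumptions for illustration:

```python
def quality_check(points, image_shape, var_range=(0.0, 1e-7)):
    """Sketch of the K-region quality evaluation: split the image into a
    2x2 grid (K = 4 regions, an assumed division), compute the feature-point
    distribution density of each region, and accept only if the variance of
    the K densities lies in the preset range."""
    h, w = image_shape
    rows, cols = 2, 2                      # K = rows * cols = 4 regions
    counts = [0.0] * (rows * cols)
    for x, y in points:                    # count feature points per region
        r = min(int(y * rows / h), rows - 1)
        c = min(int(x * cols / w), cols - 1)
        counts[r * cols + c] += 1
    area = (h / rows) * (w / cols)         # area of one region
    densities = [n / area for n in counts]  # the K distribution densities
    mean = sum(densities) / len(densities)
    variance = sum((d - mean) ** 2 for d in densities) / len(densities)
    return var_range[0] <= variance <= var_range[1], variance
```

Evenly spread feature points give near-zero variance (image accepted); points clumped into one region inflate the variance past the preset range, triggering re-acquisition of the third image.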
By performing image quality evaluation on the third image before face recognition, and performing recognition according to the feature point set only when the minimum quality requirement is met, the accuracy of face recognition is improved.
In this method, the feature point set of the third image is divided into K regions; when the obtained variance falls within the preset range, face recognition is performed on the third image according to its feature point set, and when the recognition passes, the target person is allowed through the channel gate. This avoids the excessive time cost of the camera repeatedly re-acquiring the third image, shortens the overall face recognition process, and improves the processing efficiency of the channel gate.
By implementing this embodiment, when the target person is in the recognition area, the face position of the target person is determined from the first image acquired by the camera. The projection device then projects the specified pattern onto the object to be detected at the face position, and the camera acquires a second image of the projected object. If the object to be detected is a three-dimensional face object, i.e. a live face, the projection light source can be turned off, a third image of the object is acquired by the camera, and face recognition is performed on the third image; when the recognition passes, the target person is allowed through the channel gate. In other words, face recognition is performed only after the object to be detected is confirmed to be a three-dimensional face object, which prevents an illegitimate user from passing face recognition with a two-dimensional face image and improves the security of verification at the channel gate.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a channel gate 300 according to an embodiment of the present invention, corresponding to the embodiment described with fig. 2. As shown in fig. 3, the channel gate 300 includes a camera and a projection device; the projection device includes a projection light source and a film on which a specified pattern is printed, wherein:
the acquisition unit 301 is configured to acquire, by the camera, a first image of a target person when the target person is in the identification area.
A determining unit 302, configured to determine a face position of the target person according to the first image.
A projection unit 303, configured to project the specified pattern to the position of the human face through the projection light source.
The acquisition unit 301 is further configured to acquire, through the camera, a second image of the object to be detected corresponding to the face position.
A judging unit 304, configured to judge whether the object to be detected is a two-dimensional face image or a three-dimensional face object according to the second image.
The projection unit 303 is further configured to turn off the projection light source when the object to be detected is a three-dimensional human face object.
The acquisition unit 301 is further configured to acquire a third image of the object to be detected through the camera.
A processing unit 305, configured to perform face recognition on the third image, and to allow the target person to pass through the channel gate when the face recognition passes.
As a possible implementation, the channel gate 300 further includes:
an adjusting unit 306, configured to adjust a viewing angle of the camera and a projection angle of the projection apparatus according to the face position.
As a possible implementation, as shown in fig. 3A, the judging unit 304 includes:
a decoding module 3041, configured to perform image processing on the second image according to the specified pattern, so as to obtain a histogram;
a first extracting module 3042, configured to extract a deformation region according to the histogram and a preset threshold;
a generating module 3043, configured to generate a binary template of the deformed region;
the first determining module 3044 is configured to determine that the object to be detected is a three-dimensional face object if the binary template meets a preset condition.
As a possible implementation, as shown in fig. 3B, the determining unit 302 includes:
a second extraction module 3021, configured to extract a face feature in the first image to obtain a feature point set;
a second determining module 3022, configured to determine the face position according to the feature point set.
As a possible implementation, the processing unit 305 is further configured to refuse the target person passage through the channel gate when the object to be detected is a two-dimensional face image.
By implementing this embodiment of the invention, when the target person is in the recognition area, the face position of the target person is determined from the first image acquired by the camera. The projection device then projects the specified pattern onto the object to be detected at the face position, and the camera acquires a second image of the projected object. If the object to be detected is a three-dimensional face object, i.e. a live face, the projection light source can be turned off, a third image of the object is acquired by the camera, and face recognition is performed on the third image; when the recognition passes, the target person is allowed through the channel gate. In other words, face recognition is performed only after the object to be detected is confirmed to be a three-dimensional face object, which prevents an illegitimate user from passing face recognition with a two-dimensional face image and improves the security of verification at the channel gate.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a channel gate according to an embodiment of the present invention. As shown in fig. 4, the channel gate in this embodiment may include a processor 410, a memory 420, a communication interface 430, and one or more programs 440, wherein the one or more programs 440 are stored in the memory 420 and configured to be executed by the processor 410, the programs 440 including instructions for performing the following steps: when a target person is in the recognition area, acquiring a first image of the target person through a camera; determining the face position of the target person according to the first image; projecting a specified pattern onto the face position through a projection light source, and acquiring, through the camera, a second image of the object to be detected at the face position; judging, according to the second image, whether the object to be detected is a two-dimensional face image or a three-dimensional face object; when the object to be detected is a three-dimensional face object, turning off the projection light source and acquiring a third image of the object to be detected through the camera; and performing face recognition on the third image, and allowing the target person to pass through the channel gate when the face recognition passes.
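The sequence of program instructions above can be sketched end to end as follows. The gate/camera/projector interface used here (`capture`, `locate_face`, `is_three_d`, `recognize`, `release`) is entirely assumed for illustration; the patent specifies only the steps, not an API.

```python
def run_gate(gate):
    """End-to-end sketch of the fig. 4 program flow for one target person.
    The `gate` object and its methods are hypothetical names."""
    first = gate.camera.capture()        # first image of the target person
    pos = gate.locate_face(first)        # determine the face position
    gate.projector.on()                  # project the specified pattern
    second = gate.camera.capture()       # second image of the lit object
    live = gate.is_three_d(second, pos)  # 2D face image or 3D face object?
    gate.projector.off()                 # light source off in either case
    if not live:
        return False                     # refuse a two-dimensional face image
    third = gate.camera.capture()        # third image, pattern removed
    if gate.recognize(third):
        gate.release()                   # set the blocking body to release
        return True
    return False
```

Turning the projector off before the third image is captured matches the patent's requirement that recognition run on an unlit face; the sketch also switches it off on the refusal path, a small assumption for tidiness.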
In a specific implementation, the one or more programs 440 described in this embodiment of the present invention may execute the implementation described in the first embodiment of the face recognition method provided herein.
In another embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the implementation described in the first embodiment.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general functional terms. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed terminal and method can be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions may be used in practice; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices or units, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present invention, and these modifications or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (11)

1. A face recognition method, characterized in that it is applied to a channel gate comprising a camera and a projection device, the projection device comprising a projection light source and a film on which a specified pattern is printed, the method comprising:
when a target person is in the identification area, acquiring a first image of the target person through the camera;
determining the face position of the target person according to the first image;
projecting the designated pattern to the face position through the projection light source, and acquiring a second image of the object to be detected corresponding to the face position through the camera;
judging whether the object to be detected is a two-dimensional face image or a three-dimensional face object according to the second image;
when the object to be detected is a three-dimensional face object, the projection light source is turned off, and a third image of the object to be detected is acquired through the camera;
performing face recognition on the third image, and allowing the target person to pass through the channel gate when the face recognition passes;
wherein the performing face recognition on the third image includes:
extracting the face features in the third image to obtain a feature point set;
dividing the feature point set of the third image into K regions, and obtaining the feature point distribution density of each region in the K regions to obtain K feature point distribution densities;
obtaining the variance among the K distribution densities;
and when the variance is in a preset range, carrying out face recognition on the third image according to the feature point set.
2. The method of claim 1, wherein prior to said projecting the specified pattern to the face location by the projection light source, the method further comprises:
and adjusting the visual angle of the camera and the projection angle of the projection device according to the face position.
3. The method according to claim 1, wherein the determining whether the object to be detected is a two-dimensional face image or a three-dimensional face object according to the second image comprises:
performing image processing on the second image according to the specified pattern to obtain a histogram;
extracting a deformation area according to the histogram and a preset threshold value;
generating a binary template of the deformation region;
and if the binary template meets a preset condition, determining that the object to be detected is a three-dimensional face object.
4. The method of any of claims 1-3, wherein determining the face location of the target person from the first image comprises:
extracting the face features in the first image to obtain a feature point set;
and determining the face position according to the feature point set.
5. The method according to any one of claims 1-3, further comprising:
and when the object to be detected is the two-dimensional face image, rejecting the target person to pass through the channel gate.
6. A channel gate, characterized by comprising a camera and a projection device, wherein the projection device comprises a projection light source and a film on which a specified pattern is printed, wherein:
the acquisition unit is used for acquiring a first image of a target person through the camera when the target person is in the identification area;
a determining unit, configured to determine a face position of the target person according to the first image;
the projection unit is used for projecting the specified pattern to the position of the human face through the projection light source;
the acquisition unit is also used for acquiring a second image of the object to be detected corresponding to the face position through the camera;
the judging unit is used for judging whether the object to be detected is a two-dimensional face image or a three-dimensional face object according to the second image;
the projection unit is further used for turning off the projection light source when the object to be detected is a three-dimensional human face object;
the acquisition unit is also used for acquiring a third image of the object to be detected through the camera;
the processing unit is used for carrying out face recognition on the third image and allowing the target person to pass through the channel gate when the face recognition is passed;
the processing unit is specifically used for extracting the face features in the third image to obtain a feature point set; dividing the feature point set of the third image into K regions, and obtaining the feature point distribution density of each region in the K regions to obtain K feature point distribution densities; obtaining the variance among the K distribution densities; and when the variance is in a preset range, carrying out face recognition on the third image according to the feature point set.
7. The channel gate of claim 6, further comprising:
and the adjusting unit is used for adjusting the visual angle of the camera and the projection angle of the projection device according to the face position.
8. The channel gate according to claim 6, wherein the judging unit comprises:
the decoding module is used for carrying out image processing on the second image according to the specified pattern to obtain a histogram;
the first extraction module is used for extracting a deformation area according to the histogram and a preset threshold value;
the generating module is used for generating a binary template of the deformation region;
and the first determining module is used for determining that the object to be detected is a three-dimensional face object if the binary template meets a preset condition.
9. The channel gate of any of claims 6-8, wherein the determining unit comprises:
the second extraction module is used for extracting the face features in the first image to obtain a feature point set;
and the second determining module is used for determining the face position according to the feature point set.
10. The channel gate according to any of claims 6-8, wherein the processing unit is further configured to refuse the target person passage through the channel gate when the object to be detected is a two-dimensional face image.
11. A computer-readable storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any of claims 1-5.
CN201711064063.3A 2017-11-02 2017-11-02 Face recognition method and related device Active CN107992797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711064063.3A CN107992797B (en) 2017-11-02 2017-11-02 Face recognition method and related device

Publications (2)

Publication Number Publication Date
CN107992797A CN107992797A (en) 2018-05-04
CN107992797B true CN107992797B (en) 2022-02-08

Family

ID=62031233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711064063.3A Active CN107992797B (en) 2017-11-02 2017-11-02 Face recognition method and related device

Country Status (1)

Country Link
CN (1) CN107992797B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020058293A1 (en) * 2018-09-18 2020-03-26 Assa Abloy Ab Determining a matching list for biometric matching
CN109360310A (en) * 2018-10-11 2019-02-19 中控智慧科技股份有限公司 Biometric discrimination method, systems-on-a-chip and channel unit
CN109218587A (en) * 2018-10-29 2019-01-15 咪付(广西)网络技术有限公司 A kind of image-pickup method and system based on binocular camera
CN109711293A (en) * 2018-12-13 2019-05-03 佛山单常科技有限公司 A kind of gateway device based on recognition of face
CN109828231B (en) * 2019-02-26 2022-05-17 天津大学 Indoor flying light source positioning method based on LED
CN110390751A (en) * 2019-07-23 2019-10-29 中知互联(北京)技术有限公司 A kind of multi-functional barrier free accessibility gate
CN110555929B (en) * 2019-08-19 2020-08-14 北京戴纳实验科技有限公司 Laboratory entrance guard verification system and verification method
CN110751757A (en) * 2019-09-11 2020-02-04 河海大学 Unlocking method based on face image processing and intelligent lock
CN110837416B (en) * 2019-09-24 2021-04-30 深圳市火乐科技发展有限公司 Memory management method, intelligent projector and related product
CN111524242A (en) * 2020-04-15 2020-08-11 福建九天达信息科技股份有限公司 Ticket business system for swiping face to enter park in travel industry and use method thereof
CN111930249B (en) * 2020-07-21 2021-08-17 深圳市鹰硕教育服务有限公司 Intelligent pen image processing method and device and electronic equipment
CN111915779B (en) * 2020-07-31 2022-04-15 浙江大华技术股份有限公司 Gate control method, device, equipment and medium
CN112597886A (en) * 2020-12-22 2021-04-02 成都商汤科技有限公司 Ride fare evasion detection method and device, electronic equipment and storage medium
CN115512426B (en) * 2022-11-04 2023-03-24 安徽五域安全技术有限公司 Intelligent face recognition method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000311245A (en) * 1999-04-27 2000-11-07 Matsushita Electric Ind Co Ltd Method and edge discriminating device for circuit pattern
CN101777131A (en) * 2010-02-05 2010-07-14 西安电子科技大学 Method and device for identifying human face through double models
CN102254154A (en) * 2011-07-05 2011-11-23 南京大学 Method for authenticating human-face identity based on three-dimensional model reconstruction
CN104156720A (en) * 2014-07-26 2014-11-19 佳都新太科技股份有限公司 Face image denoising method on basis of noise evaluation model
CN105260731A (en) * 2015-11-25 2016-01-20 商汤集团有限公司 Human face living body detection system and method based on optical pulses
CN106469465A (en) * 2016-08-31 2017-03-01 深圳市唯特视科技有限公司 A kind of three-dimensional facial reconstruction method based on gray scale and depth information
CN106780810A (en) * 2016-12-06 2017-05-31 浙江水马环保科技有限公司 Work attendance method based on water purifier
CN107194325A (en) * 2017-04-28 2017-09-22 广东欧珀移动通信有限公司 Palmmprint acquisition method and Related product

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4734980B2 (en) * 2005-03-15 2011-07-27 オムロン株式会社 Face authentication device and control method therefor, electronic device equipped with face authentication device, face authentication device control program, and recording medium recording the program
CN101339607B (en) * 2008-08-15 2012-08-01 北京中星微电子有限公司 Human face recognition method and system, human face recognition model training method and system
CN104008362B (en) * 2013-02-27 2018-03-23 联想(北京)有限公司 A kind of method of information processing, know method for distinguishing and electronic equipment
US10198645B2 (en) * 2014-11-13 2019-02-05 Intel Corporation Preventing face-based authentication spoofing
CN105205455B (en) * 2015-08-31 2019-02-26 李岩 The in-vivo detection method and system of recognition of face on a kind of mobile platform
CN111611845A (en) * 2015-09-01 2020-09-01 北京上古视觉科技有限公司 Iris recognition system with living body detection function
CN105335722B (en) * 2015-10-30 2021-02-02 商汤集团有限公司 Detection system and method based on depth image information
CN106056724A (en) * 2016-07-12 2016-10-26 上海爽瑞智能化系统有限公司 Fingerprint and human face recognition composite security check gate
CN206162736U (en) * 2016-09-30 2017-05-10 深圳市商汤科技有限公司 Access control system based on face recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Implementation of optical correlator for face recognition applications";Fatma Bouzidi等;《 International Conference on Computer Vision and Image Analysis Applications》;20151210;正文第1-5页 *
"面向人脸识别的人脸活体检测方法研究";杨建伟;《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》;20150415;第I138-1093页 *

Also Published As

Publication number Publication date
CN107992797A (en) 2018-05-04

Similar Documents

Publication Publication Date Title
CN107992797B (en) Face recognition method and related device
RU2431190C2 (en) Facial prominence recognition method and device
Sun et al. Improving iris recognition accuracy via cascaded classifiers
Steiner et al. Reliable face anti-spoofing using multispectral SWIR imaging
JP5955133B2 (en) Face image authentication device
US9489560B2 (en) On-the go touchless fingerprint scanner
CN108563999A (en) A kind of piece identity's recognition methods and device towards low quality video image
JP6222948B2 (en) Feature point extraction device
JP5955031B2 (en) Face image authentication device
WO2020079741A1 (en) Iris authentication device, iris authentication method, and recording medium
CN110458063A (en) The human face in-vivo detection method that anti-video, photo are cheated
KR100756047B1 (en) Apparatus for recognizing a biological face and method therefor
He et al. Robust eyelid, eyelash and shadow localization for iris recognition
CN106056080B (en) A kind of visual biometric information acquisition device and method
CN109446935A (en) A kind of iris locating method for iris recognition in advancing at a distance
US9607232B2 (en) Decision device provided to decide whether an eye is real or false
CN111368803A (en) Face recognition method and system
RU2315352C2 (en) Method and system for automatically finding three-dimensional images
Su et al. Automatic seal imprint verification systems using edge difference
Saparudin et al. Segmentation of fingerprint image based on gradient magnitude and coherence
Ohkawa et al. Fast Combined Separability Filter for Detecting Circular Objects.
WO2015123374A1 (en) On-the-go touchless fingerprint scanner
Wang et al. Liveness detection of dorsal hand vein based on the analysis of Fourier spectral
US10354121B2 (en) Method for verifying the veracity of a finger
KR100551775B1 (en) Discrimination method of imitative iris in iris recognition system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: No.32, Pingshan Industrial Road, Tangxia Town, Dongguan City, Guangdong Province 523000

Patentee after: Entropy Technology Co.,Ltd.

Address before: No.26, 188 Industrial Avenue, Pingshan, Tangxia Town, Dongguan City, Guangdong Province, 523000

Patentee before: ZKTECO Co.,Ltd.