WO2023229498A1 - Biometric identification system and method for biometric identification - Google Patents
- Publication number
- WO2023229498A1 (PCT/RU2023/050112)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- infrared
- sources
- scene
- camera
- biological object
- Prior art date
Classifications (all within G06V—Image or video recognition or understanding)
- G06V10/141—Control of illumination
- G06V10/143—Sensing or illuminating at different wavelengths
- G06V10/26—Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region; detection of occlusion
- G06V10/60—Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
- G06V40/14—Vascular patterns
- G06V40/166—Human faces: detection; localisation; normalisation using acquisition arrangements
- G06V40/172—Human faces: classification, e.g. identification
- G06V40/19—Eye characteristics, e.g. of the iris: sensors therefor
- G06V40/45—Spoof detection: detection of the body part being alive
Definitions
- the proposed group of inventions relates to video technology, in particular to devices for identifying biological objects. It can be utilized as part of access control and management systems or video surveillance systems that involve the identification of biological objects, primarily people, within the camera's field of view.
- the inventions can be used in places requiring verification of the objects’ identity, in particular at airports as part of a passport control system, at train stations, in the subway or other public transport, and in educational institutions, clinics and medical centers.
- the system incorporates two light sources: the first illumination group emitting light of a certain intensity towards a person's face from the upper right or left part of the camera, and the second illumination group that emits light of a certain intensity towards the person's face from the lower part of the camera.
- the camera photographs and transmits the image of the person's face to an image processing unit. Then the feature data extracted from the face image is searched and matched with the feature template stored in the database, which allows determining their similarity.
- another known non-contact biometric identification system includes a hand-held scanner that generates images of the user's palm. The scanner obtains a first set of one or more raw images that use infrared light with a first polarization and a second set of one or more raw images that use infrared light with a second polarization.
- the first group of images depicts external characteristics, such as lines and creases in the user's palm, while the second group depicts internal anatomical structures, such as veins. The raw images are then divided into sub-images, which are processed to determine the feature vectors present in each of them. A current signature is computed from the feature vectors, and the user may be identified by comparing the current signature with a previously stored signature associated with the user’s identity.
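the pipeline described in this known solution (sub-images, per-sub-image feature vectors, a combined signature, and a comparison with a stored signature) can be sketched as follows; the block size, the toy per-tile features and the matching tolerance are illustrative assumptions, not the actual scanner's algorithm.

```python
# Illustrative sketch: split an image into sub-images, derive a feature
# vector per sub-image, concatenate into a signature, and compare it with
# a stored signature. Block size and tolerance are assumptions.

def split_into_subimages(image, block=4):
    """Split a 2D grayscale image (list of rows) into block x block tiles."""
    h, w = len(image), len(image[0])
    tiles = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tiles.append([row[x:x + block] for row in image[y:y + block]])
    return tiles

def feature_vector(tile):
    """A toy per-tile feature pair: (mean intensity, intensity range)."""
    flat = [p for row in tile for p in row]
    return (sum(flat) / len(flat), max(flat) - min(flat))

def signature(image, block=4):
    """Concatenate per-tile features into a single current signature."""
    return [v for tile in split_into_subimages(image, block)
            for v in feature_vector(tile)]

def matches(sig_a, sig_b, tolerance=10.0):
    """Compare two signatures by mean absolute difference."""
    diff = sum(abs(a - b) for a, b in zip(sig_a, sig_b)) / len(sig_a)
    return diff <= tolerance
```

in practice the feature extractor would be a trained model rather than raw intensity statistics; the sketch only shows the data flow the passage describes.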
- a disadvantage of this technical solution is the lack of adaptive brightness control, which limits the scanner’s ability to determine certain vital signs, such as blood flow.
- the lack of brightness adjustment allows reading only palm biometric data. As a result, the palm must be scanned from a close distance, which extends the time required for recognition. Additionally, the direct- or close-contact method of biometric reading increases the likelihood of infectious disease transmission through touch-based devices.
- the existing biometric identification systems do not offer solutions for a number of technical problems.
- the technical solution offered by this invention makes it possible to create a biometric image recognition system with a self-adapting automated control subunit that adjusts the level of illumination to ensure fast, reliable and accurate identification.
- the set task is solved in the manner described below.
- a unique biometric identification system and a biometric identification method are proposed.
- the first invention relates to improvement of the recognition system as such.
- the system includes sources of infrared radiation adapted to illuminate the scene; a camera capable of capturing the images of the scene illuminated by infrared light sources; and special means, which may be used to analyze the obtained images and to extract features of biological objects illuminated by the infrared sources at the point of capture.
- the system contains a subunit used to control the infrared sources, a number of polarizing and scattering means, with which the infrared sources are equipped, and a secondary camera configured to capture the images of the scene illuminated by the infrared sources.
- the infrared illumination system includes the first cluster of the first infrared emission sources and the second cluster of the second infrared emission sources.
- the control subunit allows analyzing the images obtained from the secondary camera. Based on the analysis results, the control subunit sends corresponding commands to the polarizing and scattering means.
- the system shares common features with its analogues: sources of infrared radiation adapted to illuminate the visual field, a camera configured to capture images illuminated by the infrared sources, and devices for analyzing the obtained images and extracting features of biological objects visible to the camera in the illuminated scene.
- the offered technical solution differs from its analogue since it contains special equipment for controlling the infrared sources, namely a control subunit, a group of polarizing and scattering filters, with which the infrared sources are equipped, and a secondary camera capable of capturing images of the field of view illuminated by the infrared sources.
- the infrared illumination system includes the first cluster of the first infrared emission sources and the second cluster of the second infrared emission sources, while the control subunit constitutes a part of the infrared radiation source control equipment and is designed to analyze images obtained by the secondary camera.
- the control subunit sends corresponding commands to polarizing and scattering means.
- the biological object is a person.
- the unit that analyzes the images transmitted from the camera incorporates a storage medium with computer-executable instructions stored on it, and a processor for executing those instructions.
- Both the first and second embodiments include the image analysis means capable of detecting a human face in the images received from the camera.
- the system additionally contains the first cluster of the first infrared emission sources consisting of LEDs with the wavelengths in the 800-890 nm range.
- the system contains the second cluster of the second infrared emission sources consisting of LEDs with wavelengths in the 891-990 nm range.
- the system contains software and hardware configured to detect a biological object in the scene illuminated by the infrared sources.
- control subunit (being part of the infrared source control means) is configured to measure the exposure of the scene images obtained by the secondary camera.
- control subunit (being part of the infrared source control means) is configured to adjust the polarization and scattering of the infrared sources through sending corresponding signals to polarizing and scattering means installed on the infrared sources.
- control subunit (being part of the infrared source control equipment) is further configured to adjust polarization and scattering of the set of polarizing and scattering means based on the results of the analysis of images transmitted by the secondary camera.
- the system is connected with the access control and management equipment configured to operate on the basis of the data received from the analysis of the images taken by the camera.
- the system is equipped with audio and/or visual indicators configured to generate messages and to inform the person whose identity has been established about the identification results.
- the image analysis means are configured to measure the body temperature of a biological object.
- information about biological objects that are undergoing and/or have undergone a biometric identification procedure is sent to a remote information storage and processing unit.
- the system is configured to split the scene illuminated by the infrared sources into segments, so that each of them shall contain part of the whole image.
- the segments are exposed in such a way that it is possible to obtain the image of a biological object and to read its features whenever a biological object is present in the specified area.
- the system is further configured to subdivide the zones by their location, at least into near and far-distance views, with the foreground illuminated by both the first cluster of the first infrared emission sources and the second cluster of the second infrared emission sources, while the background is illuminated by the second cluster of the second infrared emission sources.
- the image processing means are based on cloud computing technology.
- the system contains several main interconnected modules: a camera, an illumination system consisting of the infrared sources, a subunit controlling their operation, and means for analyzing the images captured by the camera.
- a camera capable of shooting in the infrared range makes it possible to consistently obtain images of a biological object (a living being) regardless of lighting conditions.
- the camera, in this case, can be equipped with an infrared filter that blocks electromagnetic radiation in the visible part of the spectrum.
- infrared sources which allow obtaining images of a biological object with a clarity sufficient to extract its features.
- the infrared sources are practically invisible to the human eye, which allows the system to operate without additional discomfort for its users, since no bright light is directed onto the biological object. Besides, this allows covert surveillance, if such a task is set for the system.
- the design of the sets of polarizing and scattering means makes it possible to change the characteristics of the electromagnetic radiation coming from the infrared LEDs that illuminate a certain part of the scene.
- by continuously analyzing the scene images transmitted by the secondary camera, it is possible to provide the necessary illumination of all segments that comprise the whole scene.
- the biological object to be identified is a human being.
- the system may be adapted to recognize animals, in particular, mammals, or other similar creatures. This allows the system to be used, for example, in animal husbandry.
- the recognition and identification of people is a priority task for the system.
- the means for analyzing images received by the camera are in essence a hardware-software complex that includes a processor directly performing the necessary computational operations, a memory module for storing intermediate and final results, and software instructions that specify the calculation algorithms.
- the image analysis means are configured to detect a person's face in the image, which simplifies the subsequent extraction of the biological object's features.
- the feature vector is obtained from the analysis of the features of the biological object's face. For proper face analysis and subsequent feature extraction, it is therefore advisable to determine the face location within the frame.
- special software means move a sliding window across the whole frame, forming a vector that is fed to the classifier input; the classifier then concludes whether a face of a biological object is present within the frame.
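the sliding-window scheme described above can be sketched as follows; the window size, stride and the trivial brightness-based stand-in classifier are illustrative assumptions, not the patented detector.

```python
# Hedged sketch of sliding-window face detection: a window is moved across
# the frame, each crop is flattened into a vector and passed to a classifier
# that answers "face / no face". The toy classifier below merely stands in
# for a trained model.

def sliding_windows(frame, win=4, stride=2):
    """Yield (x, y, crop) for every window position in a 2D frame."""
    h, w = len(frame), len(frame[0])
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            yield x, y, [row[x:x + win] for row in frame[y:y + win]]

def to_vector(crop):
    """Flatten a 2D crop into the vector sent to the classifier input."""
    return [p for row in crop for p in row]

def toy_classifier(vector, threshold=128):
    """Stand-in for a trained classifier: 'face' if mean brightness is high."""
    return sum(vector) / len(vector) >= threshold

def detect_faces(frame):
    """Return (x, y) positions where the classifier reports a face."""
    return [(x, y) for x, y, crop in sliding_windows(frame)
            if toy_classifier(to_vector(crop))]
```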
- because the infrared sources are grouped according to the wavelengths of the electromagnetic waves they emit, system reliability is increased, which reduces the number of false-positive recognitions in cases when the object presented is not a living person (for example, when photographs or videos of a living person, or masks and similar devices, are used to circumvent the identification system).
- This technical solution offers the method of liveness detection (face anti-spoofing) based on the analysis of the real vascular pattern on the body of a live person being identified.
- the infrared sources used in this invention allow selecting the appropriate wavelengths, which may penetrate the subcutaneous tissues and which are absorbed by the blood. This helps to determine whether the identified object is alive or not.
- infrared rays with wavelengths in the 891-990 nm range satisfy the specified criteria.
- the group of LEDs described here is designed to be used as an aid for illuminating the scene foreground. Thus, it becomes possible to reduce false positives and to protect against attempts to deliberately deceive or circumvent the system.
- infrared rays with wavelengths in the 800-890 nm range make it possible to obtain a more detailed image of a biological object's face in order to extract distinctive values from the signal.
- this group of infrared LEDs is designed to provide the primary illumination of the scene foreground. Additionally, it may be used to illuminate the scene background, in particular to detect a biological object (preferably its face) in the far plane of the scene.
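since the description states that one wavelength band is absorbed by blood while the other mainly reflects from the face surface, one simple liveness cue is the ratio of reflected intensity in the two bands; the ratio threshold below is an illustrative assumption, not a value from this patent.

```python
# Illustrative dual-band liveness cue: living tissue partially absorbs the
# blood-sensitive band, so its reflected intensity drops relative to the
# surface-detail band; a flat reproduction (photo, screen, mask) reflects
# both bands similarly. Threshold is an assumption.

def liveness_score(intensity_800_890, intensity_891_990):
    """Ratio of blood-sensitive-band to detail-band reflected intensity."""
    if intensity_800_890 <= 0:
        raise ValueError("detail-band intensity must be positive")
    return intensity_891_990 / intensity_800_890

def looks_alive(detail_band, blood_band, threshold=0.85):
    """Report liveness when the blood-sensitive band is noticeably absorbed."""
    return liveness_score(detail_band, blood_band) < threshold
```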
- Additional means used to provide detection of a biological object in the illuminated scene make it possible to optimize the system operation since these means are activated only if there is a biological object. Additionally, the system power consumption is reduced.
- motion detectors and/or a thermal camera may be used.
- the operation of the polarizing and scattering means is regulated in compliance with the data obtained from the secondary camera.
- the key point is the proper illumination of the identified biological object so that it becomes possible to extract distinctive features characterizing the object from the image received from the primary camera.
- the image exposure is measured to subsequently determine the necessary corrective actions required to change the physical characteristics of the emitted rays, in particular, their polarization and scattering.
- the exposure can be measured for a certain part of the image and can be determined by calculating a mathematical average or a median value, or by measuring matrix or spot exposure.
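the exposure measurements named above (average, median, matrix and spot metering) can be sketched over a grayscale image represented as a list of rows; the grid size and spot radius are illustrative assumptions.

```python
# Minimal sketch of the exposure metrics mentioned in the description.
from statistics import median

def average_exposure(image):
    """Mathematical average of all pixel intensities."""
    flat = [p for row in image for p in row]
    return sum(flat) / len(flat)

def median_exposure(image):
    """Median pixel intensity of the whole image."""
    return median(p for row in image for p in row)

def spot_exposure(image, x, y, radius=1):
    """Mean intensity in a small square around (x, y) (spot metering)."""
    ys = range(max(0, y - radius), min(len(image), y + radius + 1))
    xs = range(max(0, x - radius), min(len(image[0]), x + radius + 1))
    pixels = [image[j][i] for j in ys for i in xs]
    return sum(pixels) / len(pixels)

def matrix_exposure(image, grid=2):
    """Average exposure per cell of a grid x grid matrix over the image."""
    h, w = len(image), len(image[0])
    cells = []
    for gy in range(grid):
        for gx in range(grid):
            rows = image[gy * h // grid:(gy + 1) * h // grid]
            pixels = [p for row in rows
                      for p in row[gx * w // grid:(gx + 1) * w // grid]]
            cells.append(sum(pixels) / len(pixels))
    return cells
```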
- the secondary camera sends low-resolution images to the control subunit at an increased frame rate (at least twice that of the primary camera).
- the control subunit measures the illumination level of each image region and, based on the data obtained, corrects the command pulses to the lighting means and the sets of polarizing and scattering filters, thus ensuring the correct exposure of the images captured by the primary camera.
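the feedback loop described above can be sketched as follows; the target level, deadband, step size and command names are illustrative assumptions, not the patented control protocol.

```python
# Hedged sketch of the control subunit's correction step: measure each
# region's exposure in the secondary-camera image and emit a command for
# that region's polarizing/scattering means when it is off target.

TARGET, DEADBAND, STEP = 128.0, 16.0, 5  # assumed tuning values

def region_commands(region_exposures):
    """Map region index to a correction command based on measured exposure."""
    commands = {}
    for idx, exposure in enumerate(region_exposures):
        error = exposure - TARGET
        if abs(error) <= DEADBAND:
            continue  # region correctly exposed, no command needed
        # over-exposed: widen the beam (more scattering, less intensity);
        # under-exposed: concentrate the beam (less scattering)
        commands[idx] = (("increase_scattering", STEP) if error > 0
                         else ("decrease_scattering", STEP))
    return commands
```

running this on every secondary-camera frame keeps the primary camera's images correctly exposed, as the passage above describes.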
- the secondary and primary cameras are technically identical systems, differing in the image resolution and the frame rate.
- the system can be connected with an Access Control System (ACS) that provides admission to a certain territory on the basis of biometric authentication and biometric identification.
- Access control and management facilities are located at the checkpoints of enterprises or other institutions, or passport control cabins in airports, railway stations or bus terminals.
- the system is equipped with audio and/or visual aids that can be used to communicate the identification results or, additionally, the results of authentication.
- these devices may also instruct to make additional movements for correct identification, for example, “come closer”, “stand in the center”, etc.
- the LED indicator can spotlight the area on the floor, in which a person must stand to pass the identification procedure.
- any warm-blooded biological object is a source of thermal radiation that carries information about the temperature of this radiating or reflecting body.
- the system components capture this information and, on its basis, determine the surface temperature of the biological object's body.
- This function can be used to detect liveness of the object being identified or to allow or deny access to the territory, for example, if the body temperature of a biological object is above the normal values.
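the temperature-gated access decision described above can be sketched as follows; the threshold value and the decision structure are illustrative assumptions.

```python
# Illustrative sketch: deny access when the measured surface temperature
# exceeds a configurable upper bound of normal body temperature.

NORMAL_MAX_C = 37.2  # assumed threshold, not a value from the patent

def access_decision(identified, body_temp_c, normal_max_c=NORMAL_MAX_C):
    """Return (granted, reason) for an identified object's temperature check."""
    if not identified:
        return False, "identification failed"
    if body_temp_c > normal_max_c:
        return False, "body temperature above normal"
    return True, "access granted"
```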
- the deployment of the image analysis means on a remote server allows receiving and processing data obtained from several devices. This optimizes computing power compared with the situation when each system is equipped with such means separately. Additionally, this increases the computing power allocated to processing, since instead of many separate computing devices, it is possible to use one server, the power of which is much greater than that of each individual image analysis tool.
- the captured scene is subdivided vertically and horizontally into quadrants, each of which forms a separate frame thread (a stream of images). The scene is also split in perspective projection, at least into far and near planes, depending on the size of the illuminated area. If the scene dimensions do not allow normal exposure of the captured image, i.e., if splitting the scene into fore- and background planes alone is not enough, a certain number of intermediate planes can be introduced.
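the subdivision just described can be sketched as follows; the plane boundaries and the uniform depth split are illustrative assumptions.

```python
# Minimal sketch of the scene split: quadrants in the image plane, plus a
# configurable number of contiguous depth planes (at least near and far).

def split_quadrants(frame):
    """Return [top-left, top-right, bottom-left, bottom-right] sub-frames."""
    h, w = len(frame) // 2, len(frame[0]) // 2
    return [
        [row[:w] for row in frame[:h]], [row[w:] for row in frame[:h]],
        [row[:w] for row in frame[h:]], [row[w:] for row in frame[h:]],
    ]

def depth_planes(max_distance_m, n_planes=2):
    """Split [0, max_distance] into n_planes contiguous near-to-far bands."""
    if n_planes < 2:
        raise ValueError("at least near and far planes are required")
    step = max_distance_m / n_planes
    return [(i * step, (i + 1) * step) for i in range(n_planes)]
```

adding intermediate planes is then just a matter of raising `n_planes`, mirroring the passage's note about scenes too deep for a two-plane split.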
- the main criterion is the correct exposure of the objects present at the point of capture, namely, biological objects and their physical features.
- the scattering means can, if necessary, change the characteristics of the infrared radiation in such a way that the LED emission angle increases with a corresponding decrease in the distance of propagation of the infrared radiation.
- the near (foreground) plane of the scene image is illuminated by the first infrared emission sources.
- the second invention within the group is a method for biometric identification of biological objects, which includes the following operations: processing, by the image analysis means, the scene images captured by the camera while the scene is illuminated by the infrared emission sources; detecting a biological object in the processed images; extracting features characterizing the biological object; generating feature vectors based on the extracted features; and identifying the biological object in accordance with at least one obtained feature vector (the scene being illuminated by the first cluster of the first infrared emission sources and the second cluster of the second infrared emission sources).
- the scene is split into image regions in such a way that each segment is illuminated by the first cluster of the first infrared emission sources and/or by the second cluster of the second infrared emission sources. This approach makes it possible to properly extract the features of a biological object detected in the scene illuminated by the infrared sources.
- the image analysis means are used to verify that the biological object manifests vital signs, i.e. that blood vessels are visible.
- the image analysis means extract from the scene captured by the camera an image of the face, provided that the biological object is a living person.
- the infrared radiation is controlled by sending corresponding signals to the polarizing and scattering means, with which the infrared sources are equipped.
- the signals are transmitted from the unit controlling the infrared sources in compliance with the results of the analysis of the images obtained from the secondary camera.
- the captured scene illuminated by the infrared sources is split into at least fore- and background; and the images of different planes are divided into separate channels.
- the foreground is further subdivided into at least two channels.
- the first channel is formed by illuminating the foreground area of the scene with the first cluster of the first infrared emission sources
- the second channel is formed by illuminating the foreground area of the scene with the second cluster of the second infrared emission sources.
- the first channel is used to analyze the vital parameters of a biological object, while the second is used to analyze the features of a biological object.
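the two-channel foreground split listed above can be sketched as follows; the frame-tagging scheme (a cluster number attached to each frame) is an illustrative assumption about how the channels would be separated in software.

```python
# Hedged sketch: frames illuminated by the first cluster feed the
# vital-sign analysis channel, frames illuminated by the second cluster
# feed the feature-extraction channel.

def route_frames(frames):
    """Split tagged frames into vital-sign and feature-analysis channels.

    Each frame is a (cluster, image) pair, where cluster is 1 or 2.
    """
    vitals_channel = [img for cluster, img in frames if cluster == 1]
    feature_channel = [img for cluster, img in frames if cluster == 2]
    return vitals_channel, feature_channel
```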
- the biological object is authenticated based on the results of its identification.
- an Access Control System is further controlled and operated based on the results of the biological object authentication.
- the key aspects of this method are: illumination of the scene by the first cluster of the first infrared emission sources and by the second cluster of the second infrared emission sources, splitting the scene into image planes, which are different in depth and location (foreground and background). Together these measures allow high-speed and accurate identification of a biological object, eliminating the possibility of falsifying the object's biometric data.
- Detection of blood vessels in a biological object makes it possible to increase the reliability of the system by protecting against unintentional or intentional actions of third parties to compromise the system through presenting false materials for analysis.
- the means designed for controlling the infrared sources allow sending command signals both directly to the infrared sources and to the polarizing and scattering means, thus providing an opportunity to independently adjust the illumination of the scene individual areas. This makes it possible to quickly adapt the backlight to changing lighting conditions affecting, among other things, the quality of the biological object image captured by the camera. In general, this increases the speed and accuracy of the system operation in terms of its ability to identify and recognize objects.
- the scene is split into the near and far planes (fore- and background). This allows a more accurate adjustment of the illuminated plane to changing conditions.
- An object may only be recognized and identified in the foreground, because the first cluster of the first infrared emission sources, which allows detecting blood vessels of a biological object, may function correctly at a relatively short distance from the camera lens.
- the camera sensor can provide the best conditions for transmission of the biometric features if a biological object is detected in the near (foreground) plane.
- a person's face may be recognized even if the person is in the mid-ground or far background, since the resolution of the camera sensor is sufficient to capture the image; no identification task is set for the image analysis means in those zones.
- the scene segmenting into background and foreground allows running parallel processes and analyzing several frame threads simultaneously: while a biological object is being recognized and identified in the foreground, the background equipment may be focused on isolating a face or faces of one or more biological objects from the frame thread.
- as a result, the system operating speed is significantly increased.
- the identified biological object can be authenticated.
- the authenticator is a compiled feature vector.
- the value of the obtained authenticator is compared with the information about previously saved authenticators contained, for example, in image analysis means. If the values match, a script predefined for this event can be run.
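the authenticator comparison described above can be sketched as follows; cosine similarity and its threshold are illustrative assumptions standing in for whatever matching rule the image analysis means actually apply.

```python
# Illustrative sketch: compare a compiled feature vector (the authenticator)
# against stored authenticators and return the best-matching identifier.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def authenticate(vector, stored, threshold=0.95):
    """Return the matching identifier, or None if no stored vector matches."""
    best_id, best_score = None, threshold
    for identifier, reference in stored.items():
        score = cosine_similarity(vector, reference)
        if score >= best_score:
            best_id, best_score = identifier, score
    return best_id
```

on a match, the caller would run the predefined script for that event, such as sending an unlock command to the access control equipment.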
- a command may be generated and sent to control equipment providing or limiting access to the area under surveillance. This allows using the offered method for ACSs (access control systems), since it ensures high speed operation, identification accuracy and a high degree of protection against hacking.
- FIG. 1 illustrates the general view of the biometric identification system, front view.
- FIG. 1 illustrates modules comprising the biometric identification system.
- a group of scattering means denotes a set of scattering electrochromic filters.
- a PoE power adapter means a Power over Ethernet power adapter.
- biometrics, within the framework of this technical solution, means the totality of the unique physiological features of a biological object that allow its identification.
- a human face acts as a source of biometric data, namely the front part of the head, including the forehead, eyebrows, bridge of the nose, nose, eyes, cheeks, ears (in whole or in part), cheekbones, lips, mouth and chin.
- the location of blood vessels in the subcutaneous layer in a human face can serve as a source for biometric identification.
- identification or “identifying” mean a series of actions performed by an information system or a software-hardware complex to determine the identifier of the person who has passed the identification procedure. Moreover, this identifier should uniquely define the object in the information system. At the same time, based on the identification results, the obtained identifier can be compared with the list of identifiers previously assigned by the information system or the software-hardware complex.
- infrared source means a device or substance capable of emitting energy into the surrounding space in the form of electromagnetic radiation in the infrared part of the electromagnetic spectrum with wavelengths in the 0.75-1000 µm range.
- an infrared LED may be treated as such a device.
- the term “scene” in this claim means an area in the surrounding space that is illuminated by the infrared sources and is captured by the camera's electromagnetic radiation detector. In other words, this is the solid angle within which the camera detector is sensitive to electromagnetic radiation.
- the notion “camera” means an optical device capable of capturing the images of the surrounding space that fall into its field of view, i.e., a scene image.
- image analysis means shall be understood as a complex of software and hardware used to analyze the whole image, or its individual parts, captured by the camera with the purpose, for example, of detecting a biological object at the point of capture and further analyzing the features of that biological object.
- feature extraction primarily means the analysis of an image of the person’s face in order to create an identifier based on the analysis results. It should be noted that the said identifier contains, possibly in an encrypted form, the characteristics of the face of a biological object, in particular the shape of the person’s face, nose, lips, eyes, etc.
- a biological object means a living being, primarily, a human being. In this sense, a biological object is not a picture, a hologram, a face mask or other reproductions of living beings, since the identification task is primarily aimed at determining unique characteristics of biological objects.
- the term “detected at the point of capture” means existing in a certain space, which is directly or through reflection illuminated by the infrared sources and which is captured by the camera.
- detection of a biological object, for example, behind an obstacle more or less opaque to electromagnetic radiation in the visible and/or infrared range does not create sufficient conditions for the functioning of the system and the use of the method.
- the detection of a biological object wholly or partially in the image of the scene captured by the camera shall be treated as “presence” in the true sense of this word within the framework of this invention.
- control subunit means an integral part of the whole biometric identification system, which sends commands in the form of electrical signals to the actuators (including polarizing and scattering means), while receiving and processing information coming from the secondary camera.
- the control subunit may consist of one or more electronic components, in particular, a processor, a programmable microcircuit or other similar units that are used to process data and to generate control commands.
- a group of polarizing and scattering means should be understood as a number of devices with which the clusters of the infrared sources are equipped and which are used to change the polarization and scattering angles of the rays passing through them.
- the first cluster of the first infrared emission sources means at least one LED emitting at least partially in the infrared region of the electromagnetic spectrum.
- the cluster may also consist of several infrared sources emitting in approximately the same infrared region of the electromagnetic spectrum. It is not necessary that all the infrared sources included in the first cluster of the first infrared emission sources emit at a wavelength exactly corresponding to the specified value, but the deviation may not exceed ±50 nm from the central emitted frequency.
- the term “the second cluster of the second infrared emission sources” shall be understood and treated in the same manner. The differences in the characteristics of the first and the second infrared radiation wavelengths shall be not less than 100 nm.
- image analysis means at least partial image processing by appropriate soft- and hardware (including pre-processing of the image coming from the camera) to bring it into a form suitable for subsequent recognition and analysis: in particular, to reduce digital noise, to clean the noise in the image using the 2D Fourier transform, to correct over- and underexposure of the image by brightness normalization, to eliminate the deviation of a position at which a human face is detected using a face detection frame and to remove compression artifacts.
- This term also includes the recognition of the object required for the identification (for example, a face), its direct analysis with the extraction of features characterizing the face of a biological object, and the formation of an identifier associated with the said face.
- commands are given means a signal sent by the control subunit to the polarizing and scattering means to change the state of the said polarizing and scattering means.
- the said state determines the characteristics of the infrared rays emitted by the infrared sources when these rays pass through the polarizing and scattering means.
- the signal may change either one or several characteristics of the infrared rays in terms of their polarization and/or scattering. It means that the signal may change the polarization and/or scattering of one part of the infrared rays emitted by the infrared source and illuminating a certain area (segment) of the scene.
- computer-executable instructions means a set of instructions for image processing and analysis in order to identify a biological object detected at the point of capture.
- detection (in particular, of a human face) shall be understood and treated as the complete or partial presence of a human face in the field of view of the camera on the scene illuminated by the infrared sources.
- the cluster of the infrared sources means a number of independent infrared sources electrically connected and grouped together, for example, in several rows or columns (like a matrix structure), or in other geometric patterns: spirals, diagonals, etc.
- the word “detect” refers to the components of the system that make it possible to establish that a biological object, in particular, a person, is present, in whole or in part, in the area illuminated by the infrared sources in the camera's field of view.
- the term “measuring the image exposure” means a computer application used to determine the illuminance of an image or any part thereof. It should be noted that the exposure is primarily measured in the image region where the face of a biological object was previously detected for the subsequent extraction of its features.
- control (or command) signal shall be understood and interpreted as an electromagnetic pulse or a series of electromagnetic pulses that change the operation mode of a system component (in particular, polarizing or scattering means).
- Access control and management means include soft- and hardware that restrict access of people or other biological objects to the territory under supervision.
- the authorized persons may enter such territories through checkpoints, equipped with opening / closing facilities, allowing in various positions to prevent or open access for identified persons.
- body temperature measuring means determining the temperature of open surface areas of a human body using infrared rays emitted and captured by the corresponding components of the system.
- remote means of storing and processing information means soft- and hardware means, which analyze the images received from the camera and which are physically located outside the housing of a device or a system. Wired or wireless communication with these means can be provided over a local area network or over the Internet.
- splitting the scene into segments means subdividing the image captured by the camera into at least several sections located on planes with different depths in the physical space at which the camera is directed; and splitting each plane into sections along vertical and horizontal directions.
- the term “foreground” shall be understood and interpreted as part of the scene captured by the camera. This notion determines the distance (for example between 30 cm and 1 meter from the lens) at which a biological object should stand, depending on the conditional subdivision of the image space by depth. The said distance should be properly specified because it is required to adjust brightness of the LED backlight. The foreground distance is limited only by the minimum focusing distance of the camera lens.
- the “background” is also determined by its actual location in the scene, but at the same time the background distance from the lens is limited by the maximum brightness of the LEDs.
- the number of intermediate positions (mid-grounds) is selected based on the dynamic range of the camera used.
- it is possible to determine the distance from the device to the object if the plane of its location is known.
- the object’s exposition will be correct only if it is located on a certain plane (or in the transition area). On all other planes, the object will be either underexposed or overexposed.
- the term “cloud computing” shall be understood and interpreted as the spatial separation of image analysis means from the rest of the system components so that communication with the said means is provided through a network protocol.
- a feature vector means a feature description of an object, i.e., a vector composed of the values attributed to a certain set of features characterizing the images captured by the camera.
- vital signs shall be understood as the characteristics of a living person, such as the vein pattern on the face and a body temperature corresponding to the normal body temperature of a human being.
- frame thread is understood as multiple shots of different planes of the same scene recorded by the camera in progressive mode or different planes illuminated by LEDs with different radiation frequencies.
- authentication means the procedure for verifying the person’s identity. In this case, the person’s biometric characteristics serve as an authentication factor.
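The wavelength tolerances stated above (each cluster's sources within ±50 nm of the central wavelength, and at least 100 nm between the first and second infrared wavelengths) can be expressed as a small validity check. This is an illustrative sketch; the function names are not from the patent.

```python
def cluster_ok(wavelengths_nm, center_nm, tolerance_nm=50):
    """All sources in a cluster must stay within ±tolerance of the central wavelength."""
    return all(abs(w - center_nm) <= tolerance_nm for w in wavelengths_nm)

def clusters_separated(center1_nm, center2_nm, min_gap_nm=100):
    """The first and second infrared wavelengths must differ by at least 100 nm."""
    return abs(center1_nm - center2_nm) >= min_gap_nm
```

For example, a first cluster centered at 850 nm and a second at 950 nm satisfies both constraints.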
- the system 1 consists of a housing 2, prismatic in shape, made mostly of plastic.
- LED matrices 3 installed in one row.
- the first cluster of the first infrared emission sources (LEDs 3) emits infrared rays with a wavelength of 830 nm
- the second cluster of the second infrared emission (LEDs 4) emits infrared radiation with a wavelength of 950 nm.
- the matrix module consists of electrically connected LEDs arranged in several rows and columns and mounted on a board. Each of these matrices in clusters 3, 4 is equipped with a number of polarizing 5 and scattering electrochromic 6 filters. These LED matrices and filters are part of the system, which is designed to illuminate the scene.
- the secondary 7 and the primary 8 cameras are installed, one under the other.
- the cameras are located in such a manner that their angles of view, which determine the scene being captured, are almost identical.
- the invention suggests using Omnivision® OV5647, OV2311, or Sony® IMX219 cameras.
- the Omnivision® OV2311 camera is used, as the most light-sensitive and capable of producing video at a high frame rate (120 or 240 frames per second) with a correct exposure.
- the Sony® IMX219 can capture ultra-high frame rate video (greater than 1000 fps).
- the camera can more accurately recognize emotions and the direction of a person's gaze.
- the cameras are equipped with infrared filters that cut off the visible light range.
- a light indicator 9, which allows feedback to the user, i.e. to the biological object being identified. On the back side of the housing 2, there is a sound indicator 10, which works together with the light indicator, an antenna 11 of the Wi-Fi®/Bluetooth® wireless interface, and an interface 12 for wired connection to the Internet.
- a backlight control subunit 14 connected to the clusters of infrared LEDs 3, 4, the primary 8 and secondary 7 cameras, polarizing 5 and scattering 6 filters.
- the device works as follows.
- the peripheral controller 15 contains: a microcontroller; a set of linear and switching regulators; power switching units of different devices; a dry contact input for peripheral equipment; an open-collector transistor circuit and a digital interface; a backlight controller with a distance sensor; a lighting module; and a primary camera.
- the distance sensor measures the distance to the first object in the sensor's field of view.
- the field of view of the sensor coincides with the viewing angle of the primary camera 8.
- the peripheral controller 15 generates a control signal for the backlight control subunit 14, and turns on the scene illumination with two clusters of LEDs 3 and 4.
- the primary camera 8 captures the image of the scene illuminated by LEDs 3 and 4.
- the captured image is transmitted to the computing module 13, which performs its preliminary processing (radial distortion correction, exposure compensation, correction of the white and black levels). Noise is removed from the image by frequency processing.
- the resultant video file formed of the images obtained at the previous stages is encoded with H.264 compression.
- the computing module 13 detects a person's face by moving a 64 × 64 pixel sliding window around the frame.
- the gradient of every pixel in the image is calculated by combining the horizontal and vertical gradient approximations, using the classical pair of convolution kernels (-1, 0, 1) and its transpose (-1, 0, 1)^T. Movement from left to right produces the horizontal gradient, and movement from top to bottom produces the vertical gradient. Once the gradients are obtained, the gradient magnitude and gradient angle are calculated for each of the 64 pixels. A histogram of magnitudes and angles, essentially a vector of 9 bins with 20-degree increments, is constructed. The output is transmitted to a fully connected neural network for classification. The classifier output is a real number in the interval [0, 1], where 0 means “no face found” and 1 means “face detected”. At the next stage, the detected faces are extracted for processing: the image is cropped frame by frame to the size of the face and normalized by the angle of rotation. Then the analysis means extract the features of the biological object through a network with the ResNet-34 CNN architecture.
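The gradient and histogram computation described above can be sketched in NumPy as follows. This is a minimal illustration of the (-1, 0, 1) kernels and the 9-bin, 20-degree histogram; the neural-network classifier itself is omitted, and the function names are assumptions.

```python
import numpy as np

def gradients(img):
    """Horizontal/vertical gradients via the (-1, 0, 1) kernel and its transpose."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # left-to-right movement
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # top-to-bottom movement
    return gx, gy

def hog_histogram(img, bins=9):
    """9-bin histogram of gradient magnitudes over 20-degree angle increments."""
    gx, gy = gradients(img)
    magnitude = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned angles, 0..180
    hist = np.zeros(bins)
    for m, a in zip(magnitude.ravel(), angle.ravel()):
        hist[min(int(a // 20), bins - 1)] += m       # weight each bin by magnitude
    return hist
```

A purely vertical edge, for example, contributes only to the first (0-20 degree) bin.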
- the infrared rays provide a uniform illumination of a biological object to increase the accuracy of its recognition and identification.
- the system works as follows.
- the controller 15, sends appropriate command signals to matrix filters 5,6.
- This allows splitting the illuminated space into, at least, four areas both along the horizontal and vertical directions.
- the image depth is subdivided into fore- and background by controlling the brightness of the backlight LEDs 3, 4.
- intermediate shots can be added (mid-grounds).
- the maximum number of shots is determined by the number and maximum brightness of the backlight LEDs 3, 4.
- the maximum overall dimensions of the illuminated scene are limited by the scattering angle of the LEDs 3, 4, the viewing angles of camera lenses 7, 8 and their maximum resolution.
- the minimum and maximum distance to the object at the point of capture is limited by the focal length of a camera.
- the direction and degree of light scattering from the backlight LEDs 3, 4 is controlled by the matrix filters 5,6.
- the foreground is illuminated with 940 nm LEDs 4 to reveal the pattern of blood vessels in a human body. Additionally, the foreground is illuminated by the LEDs 3 with a wavelength of 850 nm. This allows obtaining a detailed image of a person's face. All other planes, including the background, are illuminated by the LEDs 3 with a wavelength of 850 nm. The mid- and background planes serve to trace a person's face in the frame thread.
- the primary camera 8 used for face identification is close in parameters or identical to the secondary camera 7 used to determine the backlight brightness in different zones. The image transmitted for processing from the secondary camera 7 comes almost at double the frame rate and in a lower resolution than the images sent from the primary camera 8.
- the secondary camera 7 measures the level of illumination of each scene segment. Based on the received data, the controller 15 corrects the command pulse sent to the backlight LEDs 3, 4 and the matrix filters 5, 6, achieving the correct exposure of the frame threads transmitted by the primary camera 8.
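The per-segment exposure measurement performed on the secondary camera's frames can be sketched as below. The 4 × 4 grid follows the four-area split mentioned earlier, but the brightness thresholds and function names are assumptions for illustration only.

```python
import numpy as np

def segment_exposure(frame, rows=4, cols=4):
    """Mean brightness of each scene segment (frame is a 2-D grayscale array)."""
    h, w = frame.shape
    return [[float(frame[r*h//rows:(r+1)*h//rows,
                         c*w//cols:(c+1)*w//cols].mean())
             for c in range(cols)] for r in range(rows)]

def backlight_corrections(exposures, low=60, high=190):
    """Per-segment commands: raise under-lit segments, lower over-bright ones."""
    return [["raise" if e < low else "lower" if e > high else "hold"
             for e in row] for row in exposures]
```

The resulting grid of "raise"/"lower"/"hold" decisions stands in for the command pulses sent to the LEDs and filters.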
- the video from the primary camera 8 serves to obtain an image of a person's face.
- the video is divided into several frame threads, the number of which is determined by the number of planes and by the operating modes of the backlight LEDs. Each frame thread corresponds to a specific illuminated plane. In different frame threads, the foreground is illuminated by LEDs with wavelengths of 850 nm and 940 nm.
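Demultiplexing the primary camera's video into per-plane frame threads can be sketched as follows, assuming the frames are interleaved plane by plane in capture order; this interleaving is one plausible reading of "each frame thread corresponds to a specific illuminated plane", not a detail stated in the patent.

```python
def split_frame_threads(frames, n_planes):
    """Split an interleaved frame sequence into one thread per illuminated plane."""
    threads = [[] for _ in range(n_planes)]
    for i, frame in enumerate(frames):
        threads[i % n_planes].append(frame)  # frame i belongs to plane i mod n
    return threads
```

With three planes, frames 0, 3, 6, ... form the foreground thread, frames 1, 4, 7, ... the mid-ground thread, and so on.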
- the vascular pattern on a human body is detected to confirm that the object being identified is a living person.
Abstract
The proposed group of inventions relates to video technology, in particular to devices for identifying biological objects and can be used as part of access control and management systems or video surveillance systems that involve the identification of biological objects, primarily people, within the camera's field of view. The inventions can be used in places requiring verification of the objects' identity, in particular, at airports as part of a passport control system, at train stations, in the subway or other public transport, in educational institutions, clinics and medical centers. The system includes infrared sources adapted to illuminate the scene; a camera capable of capturing images within the field of view illuminated by the infrared light sources; special means, which may be applied to analyze the obtained images and to extract features of a biological object present in the scene illuminated by the infrared sources; a subunit used to control the infrared sources; a set of polarizing and scattering means, with which the infrared sources are equipped; and a secondary camera for capturing the scene images illuminated by the infrared sources.
Description
The proposed group of inventions relates to video technology, in particular to devices for identifying biological objects. It can be utilized as part of access control and management systems or video surveillance systems that involve the identification of biological objects, primarily people, within the camera's field of view. The inventions can be used in places requiring verification of the objects’ identity, in particular, at airports as part of a passport control system, at train stations, in the subway or other public transport, in educational institutions, clinics and medical centers.
Prior art available in the public domain generally describes biometric identification systems based on the analysis of images captured by video and/or photo cameras. Thus, European Patent EP 1136937 “Facial image forming recognition apparatus and a pass control apparatus” (IPC G06K 9/20, G06K 9/00, G07C 9/00, TOSHIBA KK, Japan, Application # 01106961 dated 03/20/2001, Priority 03/22/2000, Publication 09/26/2001) discloses a technique in which the facial recognition system is based on feature vectors extracted from profile silhouettes. The system incorporates two light sources: the first illumination group emitting light of a certain intensity towards a person's face from the upper right or left part of the camera, and the second illumination group that emits light of a certain intensity towards the person's face from the lower part of the camera. The camera photographs and transmits the image of the person's face to an image processing unit. Then the feature data extracted from the face image is searched and matched with the feature template stored in the database, which allows determining their similarity.
The shortcomings of this system are as follows: first of all, it works at close range, which increases the time required for recognition; while the use of single wavelength light sources does not allow determining a number of human features, for example, the pattern of blood vessels or blood flow, which increases the risk of circumventing the system. Additionally, the lack of adaptive brightness control reduces the range of the device capabilities, and the long-focus (narrow-angle) camera lens severely limits (crops) the field of view thus challenging the system’s ability to recognize human faces.
Another recognized solution is described in European Patent EP 3811289 “Non-contact biometric identification system” (IPC G06F 21/32; G06K 9/00; G06K 9/20; applicant Amazon Tech Inc, USA, Application # 19851091 dated 06/20/2019, Priority 06/21/2018, Publication 04/28/2021). According to the description of the technical solution, the offered non-contact biometric identification system includes a hand-held scanner that generates images of the user's palm. The scanner obtains a first set of one or more raw images that use infrared light with a first polarization and a second set of one or more raw images that use infrared light with a second polarization. The first group of images depicts external characteristics, such as lines and creases in the user's palm, while the second group of images depicts internal anatomical structures, such as veins, etc. All the images in this set are then divided into sub-images. The sub-images are further processed to determine the feature vectors present in each sub-image. A current signature is determined using the feature vectors. The user may be identified based on a comparison of the current signature with a previously stored signature associated with the user’s identity.
A disadvantage of this technical solution is the lack of adaptive brightness control, which limits the scanner’s ability to determine certain vital signs, such as blood flow. The lack of brightness adjustment allows reading only palm biometric data. In this connection, the palm should be scanned from a close distance, which extends the time required for recognition. Additionally, the direct- or close-contact method of biometric reading increases the likelihood of infectious disease transmission through touch-based devices.
Thus, the existing biometric identification systems do not offer solutions for a number of technical problems. At the same time, the technical solution offered by this invention allows creating a biometric image recognition system with a self-adapting automated control subunit for adjusting the level of illumination to ensure fast, reliable and accurate identification.
The set task is solved in the manner described below. A unique biometric identification system and a biometric identification method are proposed. The first invention relates to improvement of the recognition system as such. According to the first invention, the system includes sources of infrared radiation adapted to illuminate the scene; a camera capable of capturing the images of the scene illuminated by infrared light sources; and special means, which may be used to analyze the obtained images and to extract features of biological objects illuminated by the infrared sources at the point of capture. The system contains a subunit used to control the infrared sources, a number of polarizing and scattering means, with which the infrared sources are equipped, and a secondary camera configured to capture the images of the scene illuminated by the infrared sources. The infrared illumination system includes the first cluster of the first infrared emission sources and the second cluster of the second infrared emission sources. The control subunit allows analyzing the images obtained from the secondary camera. Based on the analysis results, the control subunit sends corresponding commands to the polarizing and scattering means.
The system has common features with the analogue equipment: i.e. the sources of infrared radiation adapted to illuminate the visual field, a camera configured with a possibility to capture images illuminated by the infrared sources, and devices for the analysis of the obtained images and extraction of features of biological objects visible by the camera in the illuminated scene.
However, the offered technical solution differs from its analogue since it contains special equipment for controlling the infrared sources, namely a control subunit, a group of polarizing and scattering filters, with which the infrared sources are equipped, and a secondary camera capable of capturing images of the field of view illuminated by the infrared sources. Besides, the infrared illumination system includes the first cluster of the first infrared emission sources and the second cluster of the second infrared emission sources, while the control subunit constitutes a part of the infrared radiation source control equipment and is designed to analyze images obtained by the secondary camera. Moreover, based on the analysis results the control subunit sends corresponding commands to polarizing and scattering means.
In the first embodiment of the invention, the biological object is a person.
In the second embodiment of the invention, the unit, which analyses the image transmitted from the camera, incorporates a storage medium having computer executable instructions stored thereon, and a processor for executing the computer executable instructions.
Both the first and second embodiments include the image analysis means capable of detecting a human face in the images received from the camera.
In the third embodiment, the system additionally contains the first cluster of the first infrared emission sources consisting of LEDs with the wavelengths in the 800-890 nm range.
In the fourth embodiment, the system contains the second cluster of the second infrared emission sources consisting of LEDs with the wavelengths in the 891-990 nm range.
In the fifth embodiment, the system contains software and hardware configured to detect a biological object in the scene illuminated by the infrared sources.
In the sixth embodiment, the control subunit (being part of the infrared source control means) is configured to measure the exposure of the scene images obtained by the secondary camera.
In the seventh embodiment, the control subunit (being part of the infrared source control means) is configured to adjust the polarization and scattering of the infrared sources through sending corresponding signals to polarizing and scattering means installed on the infrared sources.
Besides, in the sixth and seventh embodiments, the control subunit (being part of the infrared source control equipment) is further configured to adjust polarization and scattering of the set of polarizing and scattering means based on the results of the analysis of images transmitted by the secondary camera.
In the eighth embodiment, the system is connected with the access control and management equipment configured to operate on the basis of the data received from the analysis of the images taken by the camera.
In another version of the first embodiment, the system is equipped with audio and/or visual indicators configured to generate messages and to inform the person whose identity has been established about the identification results.
In the ninth embodiment, the image analysis means are configured to measure the body temperature of a biological object.
In the tenth embodiment, information about biological objects, who are undergoing and/or who have undergone a biometric identification procedure, is sent to a remote information storage and processing unit.
In the eleventh embodiment, the system is configured to split the scene illuminated by the infrared sources into segments, each containing part of the whole image. The segments are exposed in such a way that, if a biological object is present in the specified area, it is possible to obtain its image and to read its features.
In another version of the eleventh embodiment, the system is further configured to subdivide the zones by their location - at least into near and far-distance views; with the foreground being illuminated by both the first cluster of the first infrared emission sources and the second cluster of the second infrared emission sources while the background being illuminated by the second cluster of the second infrared emission sources.
In the twelfth embodiment, the image processing means are based on cloud computing technology.
In the most general version, the system contains several main interconnected modules: a camera, an illumination system consisting of the infrared sources, a subunit controlling their operation, and means for analyzing the images captured by the camera. The availability of a camera capable of shooting in the infrared range makes it possible to consistently obtain images of a biological object (a living being) regardless of lighting conditions. The camera, in this case, can be equipped with an infrared filter that blocks electromagnetic radiation in the visible part of the spectrum. To identify biological objects in conditions of insufficient illumination, for example, indoors, it is advisable to use infrared sources, which allow obtaining images of a biological object with a clarity sufficient to extract its features. Moreover, unlike visible lighting, the infrared sources are practically invisible to the human eye, which allows the system to operate without additional discomfort for its users, since no bright light is directed onto the biological object. Besides, this allows covert surveillance, if such a task is set for the system.
To obtain uniform illumination of different parts of the biological object body for the purpose of subsequent capturing and correct processing of the image by the camera, it is necessary to correct the polarization and scattering of infrared radiation. To achieve this aim, appropriate groups of polarizing and scattering means are used, as well as a secondary camera that shoots the same scene as the primary one but at an increased frame rate. The image obtained by the secondary camera is sent to the control subunit for analysis, where it is processed to measure the exposure of a certain part of the image. If excessively or insufficiently illuminated areas are detected in the image, the control subunit sends corresponding commands to the polarizing and scattering means to adjust the characteristics of the electromagnetic radiation emitted by the infrared LEDs. The design of the sets of polarizing and scattering means makes it possible to change the characteristics of the electromagnetic radiation coming from the infrared LEDs that illuminate a certain part of the scene. Thus, by continuously analyzing the scene being transmitted by the secondary camera, it is possible to provide the necessary illumination of all segments that comprise the whole scene.
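The continuous analyze-and-correct cycle described above behaves like a feedback controller: measure the exposure of a segment, send a correcting command, and repeat until the segment is properly lit. The sketch below is a simple proportional loop; the target level, tolerance, gain and all names are assumptions for illustration, not the patented control law.

```python
def adjust_until_balanced(measure, apply_correction, target=128, tol=10, max_iters=50):
    """Iteratively drive a segment's measured exposure toward the target level."""
    for _ in range(max_iters):
        error = target - measure()
        if abs(error) <= tol:
            return True          # exposure within tolerance: stop correcting
        apply_correction(error)  # positive error -> brighten, negative -> dim
    return False                 # failed to converge within the iteration budget
```

In the system, `measure` corresponds to the exposure measurement on the secondary camera's image, and `apply_correction` to the commands sent to the polarizing and scattering means.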
To analyze the scene being monitored, appropriate software and hardware means are used, which allow selecting (in compliance with the specified algorithms) the image regions containing a person’s face and extracting features, preeminently of the face of a biological object, to define the feature vector (i.e., an identifier of a biological object, from whose face image the extracted features have been assembled into the specified final feature vector).
In the preferred embodiment of the invention, the biological object to be identified is a human being. However, this does not mean that the system cannot be used to identify other biological objects that have individual features. Thus, if the appropriate software is available (for example, a neural network trained to identify certain specific features) the system may be adapted to recognize animals, in particular, mammals, or other similar creatures. This allows the system to be used, for example, in animal husbandry. However, the recognition and identification of people is a priority task for the system.
To ensure that the system efficiently performs its function, the means for analyzing images received by the camera are in essence a hardware-software complex that includes a processor directly performing the necessary computational operations, a memory module for storing intermediate and final results, and software instructions that specify the calculation algorithms.
Since the image analysis means are configured to detect a person's face in the image, the subsequent extraction of the biological object's features is simplified. In the preferred embodiment, the feature vector is obtained based on the analysis of the features of the biological object's face. It means that for proper face analysis and subsequent feature extraction, it is advisable to determine the face location within the frame. For this purpose, special software means are applied which move a sliding window around the whole frame, forming the vector which is sent to the classifier input. As a result, a conclusion is made as to whether or not a face of a biological object is present within the frame.
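The sliding-window scan described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the 64 x 64 window size is taken from the examples later in the description, while the stride value and function names are illustrative assumptions.

```python
def sliding_windows(height, width, win=64, stride=16):
    """Yield the top-left corners of win x win windows scanned over a frame.

    Each window would be flattened into a vector and sent to the
    classifier input to decide whether a face is present.
    """
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            yield (y, x)


# Example: scanning a 64 x 96 frame yields three candidate windows.
positions = list(sliding_windows(64, 96))
```

In a real system the stride trades detection density against speed; a smaller stride finds faces at more positions but multiplies the number of classifier calls.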
Since the infrared sources are grouped according to the wavelengths of the electromagnetic waves they emit, the system reliability is increased, which reduces the number of false-positive recognitions in cases when the identified biological object is not a living person (for example, when photographs and videos of a living person, or various masks and other similar devices, are used to circumvent the identification system). This technical solution offers a method of liveness detection (face anti-spoofing) based on the analysis of the real vascular pattern on the body of the live person being identified. The infrared sources used in this invention allow selecting the appropriate wavelengths, which can penetrate the subcutaneous tissues and which are absorbed by the blood. This helps to determine whether the identified object is alive or not. The authors experimentally found that infrared rays with wavelengths in the 891-900 nm range satisfy the specified criteria. This group of LEDs is designed to be used as an aid for illuminating the scene foreground. Thus, it becomes possible to reduce false positives and to protect against attempts to deliberately deceive or circumvent the system. Infrared rays with wavelengths in the 800-890 nm range make it possible to obtain a more detailed image of a biological object's face in order to extract distinctive values from the signal. This group of infrared LEDs is designed to be used as an aid for preeminent illumination of the scene foreground. Additionally, the said group of infrared LEDs may be used to illuminate the scene background, and in particular to detect a biological object (preferably its face) in the far plane of the scene.
Additional means used to provide detection of a biological object in the illuminated scene make it possible to optimize the system operation, since these means are activated only if a biological object is present. Additionally, the system power consumption is reduced. To detect the appearance of a biological object within the camera’s field of view, motion detectors and/or a thermal camera may be used.
As already noted above, the operation of the polarizing and scattering means is regulated in compliance with the data obtained from the secondary camera. The key point is the proper illumination of the identified biological object so that it becomes possible to extract distinctive features characterizing the object from the image received from the primary camera. In this connection, the image exposure is measured to subsequently determine the necessary corrective actions required to change the physical characteristics of the emitted rays, in particular, their polarization and scattering. The exposure can be measured for a certain part of the image and can be determined by calculating a mathematical average or a median value, or by measuring matrix or spot exposure. The secondary camera sends low-resolution images to the control subunit at an increased frame rate (at least twice the frame rate of the primary camera). The control subunit measures the illumination level of each image region and, based on the data obtained, corrects the command pulses to the lighting means and the sets of polarizing and scattering filters, thus ensuring the correct exposure of the images captured by the primary camera. In essence, the secondary and primary cameras are technically identical systems, differing in the image resolution and the frame rate.
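The exposure-measurement alternatives listed above (average, median, spot) and the resulting corrective commands can be sketched as follows. The threshold values and the -1/0/+1 command encoding are illustrative assumptions, not values given in the description.

```python
import numpy as np


def measure_exposure(region, mode="mean"):
    """Estimate the exposure of one grayscale image region (0-255).

    mode "mean"   - arithmetic average of pixel intensities
    mode "median" - median intensity (robust to small bright spots)
    mode "spot"   - intensity of a small central neighbourhood
    """
    region = np.asarray(region, dtype=np.float64)
    if mode == "mean":
        return float(region.mean())
    if mode == "median":
        return float(np.median(region))
    if mode == "spot":
        h, w = region.shape
        return float(region[h // 2 - 1:h // 2 + 1, w // 2 - 1:w // 2 + 1].mean())
    raise ValueError(f"unknown mode: {mode}")


def correction_command(exposure, low=80.0, high=180.0):
    """Map a measured exposure onto a hypothetical command for the
    polarizing/scattering means: +1 brighten, 0 keep, -1 dim."""
    if exposure < low:
        return +1   # under-exposed region: increase illumination
    if exposure > high:
        return -1   # over-exposed region: reduce illumination
    return 0
```

Running such a loop per scene segment on every secondary-camera frame would give the control subunit a fresh command stream for each group of polarizing and scattering filters.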
The system can be connected with an Access Control System (ACS) that provides admission to a certain territory on the basis of biometric authentication and biometric identification. Access control and management facilities are located at the checkpoints of enterprises or other institutions, or at passport control booths in airports, railway stations or bus terminals.
To simplify the interaction between the system and its user (a biological object being identified), the system is equipped with audio and/or visual aids that can be used to communicate the identification results or, additionally, the results of authentication. In another embodiment, these devices may also instruct the user to make additional movements for correct identification, for example, “come closer”, “stand in the center”, etc. Besides, the LED indicator can spotlight the area on the floor in which a person must stand to pass the identification procedure.
It is known that any warm-blooded biological object is a source of thermal radiation that carries information about the temperature of this radiating or reflecting body. The system components perceive the specified information, and on its basis display the surface temperature of the biological object body. This function can be used to detect liveness of the object being identified or to allow or deny access to the territory, for example, if the body temperature of a biological object is above the normal values.
The deployment of the image analysis means on a remote server allows receiving and processing data obtained from several devices. This optimizes computing power compared with the situation when each system is equipped with such means separately. Additionally, this increases the computing power allocated to processing, since instead of many separate computing devices, it is possible to use one server, the power of which is much greater than that of each individual image analysis tool.
Since the illuminated scene is split into separate segments, the performance of the data processing system is increased and it becomes possible to adjust the characteristics of the infrared radiation illuminating a particular area. The captured scene is subdivided vertically and horizontally into quadrants, each of which forms a separate frame thread (multiple images). Also, the scene is split in perspective projection, at least into far and near planes, depending on the size of the illuminated area. If the scene dimensions do not allow normal exposure of the captured image, i.e., if it is not enough to split the scene only into fore- and background planes, a certain number of intermediate planes can be additionally introduced. The main criterion is the correct exposure of the objects present at the point of capture, namely, biological objects and their physical features.
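The vertical and horizontal subdivision into quadrants, each feeding a separate frame thread, can be sketched as follows; the function name and the dictionary layout are illustrative assumptions.

```python
import numpy as np


def split_into_quadrants(frame):
    """Split a captured frame vertically and horizontally into four
    quadrants; each quadrant can then feed a separate frame thread."""
    h, w = frame.shape[:2]
    return {
        "top_left": frame[:h // 2, :w // 2],
        "top_right": frame[:h // 2, w // 2:],
        "bottom_left": frame[h // 2:, :w // 2],
        "bottom_right": frame[h // 2:, w // 2:],
    }


# Example: a 4x4 test frame splits into four 2x2 quadrants.
quadrants = split_into_quadrants(np.arange(16).reshape(4, 4))
```

Because NumPy slices are views, the split adds no copy overhead; each downstream thread can measure exposure or run detection on its own quadrant independently.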
At the same time, when the image regions are split by depth (into the near or far planes), the foreground is illuminated by the first cluster of the first infrared emission sources, since, as mentioned above, the parameters of the first infrared emission are selected to obtain a proper image of blood vessels, in particular, on the face of a person. Upon command from the control subunit, the scattering means can, if necessary, change the characteristics of the infrared radiation in such a way that the LED emission angle increases with a corresponding decrease in the distance of propagation of the infrared radiation. Thus, the near (foreground) plane of the scene image is illuminated by the first infrared emission sources.
By analogy with remote deployment of image analysis means described above, calculations based on cloud technology are also possible. This increases the system speed and reduces the costs connected with manufacturing of its individual components. In other words, the remote module for image analysis simplifies the system production.
The second invention within the group is a method for biometric identification of biological objects, which includes a combination of the following operations: processing by the devices designed to analyze the scene images captured by the camera when the scene is illuminated by the infrared emission sources; detecting a biological object in the images being processed; extracting features characterizing the biological object; generating feature vectors (based on the extracted features); identifying a biological object in compliance with at least one obtained feature vector (wherein the scene is illuminated by the first cluster of the first infrared emission sources and the second cluster of the second infrared emission sources). The scene is split into image regions in such a way that each segment is illuminated by the first cluster of the first infrared emission sources and/or by the second cluster of the second infrared emission sources. This approach makes it possible to properly extract the features of a biological object which is detected in the scene illuminated by the infrared sources.
In the first embodiment of the invention, the image analysis means are used to verify that the biological object manifests vital signs, i.e., that blood vessels are visible.
In the second embodiment of the invention, the image analysis means extract from the scene captured by the camera an image of the face, provided that the biological object is a living person.
In the third embodiment of the invention, the infrared radiation is controlled by sending corresponding signals to the polarizing and scattering means, with which the infrared sources are equipped. The signals are transmitted from the unit controlling the infrared sources in compliance with the results of the analysis of the images obtained from the secondary camera.
In the fourth embodiment of the invention, the captured scene illuminated by the infrared sources is split into at least fore- and background; and the images of different planes are divided into separate channels.
In the first and the fourth embodiment, the foreground is further subdivided into at least two channels. The first channel is formed by illuminating the foreground area of the scene with the first cluster of the first infrared emission sources, and the second channel is formed by illuminating the foreground area of the scene with the second cluster of the second infrared emission sources. The first channel is used to analyze the vital parameters of a biological object, while the second channel is used to analyze the features of a biological object.
In the fifth embodiment of the invention, the biological object is authenticated based on the results of its identification.
In development of the fifth embodiment of the invention, an Access Control System (ACS) is further controlled and operated based on the results of the biological object authentication.
In addition to operations that directly ensure proper biometric identification (i.e., processing by the image analysis means of the images captured by the primary camera; detecting a biological object; extracting its features for pattern recognition; generating at least one feature vector), the key aspects of this method are: illumination of the scene by the first cluster of the first infrared emission sources and by the second cluster of the second infrared emission sources, and splitting the scene into image planes which differ in depth and location (foreground and background). Together these measures allow high-speed and accurate identification of a biological object, eliminating the possibility of falsifying the object's biometric data.
Detection of blood vessels in a biological object makes it possible to increase the reliability of the system by protecting against unintentional or intentional actions of third parties to compromise the system through presenting false materials for analysis.
Since the features that individualize a person mainly concentrate on the face, it is advisable to isolate its image for the subsequent extraction of these features.
The means designed for controlling the infrared sources allow sending command signals both directly to the infrared sources and to the polarizing and scattering means, thus providing an opportunity to independently adjust the illumination of the scene individual areas. This makes it possible to quickly adapt the backlight to changing lighting conditions affecting, among other things, the quality of the biological object image captured by the camera. In general, this increases the speed and accuracy of the system operation in terms of its ability to identify and recognize objects.
To optimize the method, as well as to increase the speed of the system operation, it is advisable to split the scene into the near and far planes (fore- and background). This allows a more accurate adjustment of the illuminated plane to changing conditions. An object may only be recognized and identified in the foreground, because the first cluster of the first infrared emission sources, which allows detecting blood vessels of a biological object, may function correctly only at a relatively short distance from the camera lens. Besides, the camera sensor can provide the best conditions for transmission of the biometric features if a biological object is detected in the near (foreground) plane. At the same time, a person's face may be recognized even if the person is in the mid-ground or in the far background, since the resolution of the camera sensor is sufficient to get the image and no identification task is set for the image analysis means at that distance. Thus, segmenting the scene into background and foreground allows running parallel processes and analyzing several frame threads simultaneously: while a biological object is being recognized and identified in the foreground, the background equipment may be focused on isolating a face or faces of one or more biological objects from the frame thread. As a result, the system speed is significantly increased.
Based on the recognition result and the data obtained, the identified biological object can be authenticated. The authenticator is a compiled feature vector. The value of the obtained authenticator is compared with the information about previously saved authenticators contained, for example, in image analysis means. If the values match, a script predefined for this event can be run.
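The comparison of the obtained authenticator with previously saved authenticators can be sketched as follows. Cosine similarity and the 0.8 threshold are illustrative assumptions, since the description does not specify the matching metric.

```python
import numpy as np


def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def authenticate(candidate, stored_vectors, threshold=0.8):
    """Compare a candidate feature vector with stored authenticators.

    Returns the index of the best match at or above the threshold,
    or None if no stored authenticator matches (authentication fails).
    """
    best_idx, best_sim = None, threshold
    for i, ref in enumerate(stored_vectors):
        sim = cosine_similarity(candidate, ref)
        if sim >= best_sim:
            best_idx, best_sim = i, sim
    return best_idx
```

On a successful match, the predefined script for the event (for example, an ACS "open" command) would be run for the matched identity.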
Based on the authentication results, a command may be generated and sent to control equipment providing or limiting access to the area under surveillance. This allows using the offered method for ACSs (access control systems), since it ensures high speed operation, identification accuracy and a high degree of protection against hacking.
This invention is illustrated by the following figures.
The figures label the following:
1 – Biometric identification system;
2 – Housing of the biometric identification system;
3 – First cluster of the first infrared emission sources;
4 – Second cluster of the second infrared emission sources;
5 – Group of polarizing means (set of polarizing filters);
6 – Group of scattering means (set of scattering electrochromic filters);
7 – Secondary camera;
8 – Primary camera;
9 – Light indicator;
10 – Sound indicator;
11 – Wireless communication unit;
12 – PoE power adapter (Power over Ethernet);
13 – Computation means (computation module);
14 – Subunit for illumination control;
15 – Peripheral controller.
A description of the drawings and functional diagrams illustrating the offered technical solution is given below to help understand the principles of operation and peculiarities of various embodiments of the invention. Although the description explains in detail the preferred embodiments of the technical solution, it should be understood that other embodiments are also possible. Accordingly, there is no need to limit the scope of legal protection covering this solution solely to the described embodiments and lists of the system parts and components. The group of inventions allows combining separate items belonging to different embodiments, including those which are not specified in these claims. At the same time, it is necessary to clarify some terms used in the claims and description of the preferred technical solutions for a clear understanding of their basic principles by a specialist.
It should be noted that, unless the context otherwise clearly indicates, the names of the system components and parts used in the singular also include the plural and the plural includes the singular. For example, an indication of a composite element of the device also means the whole group of such elements.
Also, when describing the preferred embodiments, specific terms are used to ensure clarity of understanding. The terms used in the claim are intended to be understood in their ordinary and customary meaning, which is the meaning that a term would have to a person of ordinary skill in the art, and should include all technical equivalents used in the same manner and for the same purpose. Thus, in particular, under the term "biometrics", within the framework of this technical solution, the authors mean the totality of the unique physiological features of a biological object, which allows its identification. As a rule, a human face acts as a source of biometric data, namely the front part of the head, including the forehead, eyebrows, bridge of the nose, nose, eyes, cheeks, ears (in whole or in part), cheekbones, lips, mouth and chin. In addition, the location of blood vessels in the subcutaneous layer in a human face can serve as a source for biometric identification. The terms “identification” or “identifying” mean a series of actions performed by an information system or a software-hardware complex to determine the identifier of the person who has passed the identification procedure. Moreover, this identifier should uniquely define the object in the information system. At the same time, based on the identification results, the obtained identifier can be compared with the list of identifiers previously assigned by the information system or the software-hardware complex. The term “infrared source” means a device or substance capable of emitting energy into the surrounding space in the form of electromagnetic radiation in the infrared part of the electromagnetic spectrum with the wavelengths in the 0.75 - 1000 µm range. For example, an infrared LED may be treated as such a device. The term “scene” in this claim means an area in the surrounding space that is illuminated by the infrared sources and is captured by the camera's electromagnetic radiation detector. 
In other words, this is the spatial angle through which the camera detector is sensitive to electromagnetic radiation. In the context of this invention, the notion “camera” means an optical device capable of capturing the images of the surrounding space that fall into its field of view, i.e., a scene image. The term “image analysis means” shall be understood as a complex of software and hardware means used to analyze the whole image or its individual parts captured by the camera with the purpose, for example, to detect a biological object at the point of capture and to further analyze the features of a biological object. The term “feature extraction” means preeminently the analysis of an image of the person’s face with the aim of creating an identifier based on the analysis results. It should be noted that the said identifier contains, possibly in an encrypted form, the characteristics of the face of a biological object, in particular, the shape of the person’s face, nose, lips, eyes, etc. The notion “a biological object” means a living being, primarily, a human being. In this sense, a biological object is not a picture, a hologram, a face mask or other reproductions of living beings, since the identification task is primarily aimed at determining unique characteristics of biological objects. In relation to a biological object, the term “detected at the point of capture” means existing in a certain space, which is directly or through reflection illuminated by the infrared sources and which is captured by the camera. However, detection of a biological object, for example, behind an obstacle, more or less opaque to electromagnetic radiation in the visible and/or infrared range, does not create sufficient conditions for the functioning of the system and use of the method.
On the contrary, the detection of a biological object wholly or partially in the image of the scene captured by the camera, shall be treated as “presence” in the true sense of this word within the framework of this invention. The term “control subunit” means an integral part of the whole biometric identification system, which sends commands in the form of electrical signals to the actuators (including polarizing and scattering means), while receiving and processing information coming from the secondary camera. The control subunit may consist of one or more electronic components, in particular, a processor, a programmable microcircuit or other similar units that are used to process data and to generate control commands. The term “a group of polarizing and scattering means” should be understood as a number of devices with which the clusters of the infrared sources are equipped and which are used to change the polarization and scattering angles of the rays passing through them. The term “the first cluster of the first infrared emission sources” means at least one LED emitting at least partially in the infrared region of the electromagnetic spectrum. The cluster may also consist of several infrared sources emitting in approximately the same infrared region of the electromagnetic spectrum. It is not necessary that all the infrared sources included in the first cluster of the first infrared emission sources emit at a wavelength exactly corresponding to the specified value, but the deviation may not exceed ±50 nm from the central emitted frequency. Similarly, the term “the second cluster of the second infrared emission sources” shall be understood and treated in the same manner. The differences in the characteristics of the first and the second infrared radiation wavelengths shall be not less than 100 nm. 
The term “image analysis” means at least partial image processing by appropriate soft- and hardware (including pre-processing of the image coming from the camera) to bring it into a form suitable for subsequent recognition and analysis: in particular, to reduce digital noise, to clean the noise in the image using the 2D Fourier transform, to correct over- and underexposure of the image by brightness normalization, to eliminate the deviation of a position at which a human face is detected using a face detection frame, and to remove compression artifacts. This term also includes the recognition of the object required for the identification (for example, a face), its direct analysis with the extraction of features characterizing the face of a biological object, and the formation of an identifier associated with the said face. The phrase “commands are given” means a signal sent by the control subunit to the polarizing and scattering means to change the state of the said polarizing and scattering means. The said state, in its turn, determines the characteristics of the infrared rays emitted by the infrared sources when these rays pass through the polarizing and scattering means. Moreover, the signal may change either one or several characteristics of the infrared rays in terms of their polarization and/or scattering. It means that the signal may change the polarization and/or scattering of one part of the infrared rays emitted by the infrared source and illuminating a certain area (segment) of the scene. The term “computer-executable instructions” means a set of instructions for image processing and analysis in order to identify a biological object detected at the point of capture. The term “detection” (in particular, of a human face) shall be understood and treated as complete or partial presence of a human face in the field of view of the camera on the scene illuminated by the infrared sources.
The term “the cluster of the infrared sources” means a number of independent infrared sources electrically connected and grouped together, for example, in several rows or columns (like a matrix structure), or in other geometric patterns: spirals, diagonals, etc. The word “detect” refers to the components of the system that make it possible to establish that a biological object, in particular, a person, is present, in whole or in part, in the area illuminated by the infrared sources in the camera's field of view. The term “measuring the image exposure” means a computer application used to determine the illuminance of an image or any part thereof. It should be noted that the exposure is primarily measured in the image region where the face of a biological object was previously detected for the subsequent extraction of its features. The term “change of polarization and scattering” means the transformation of the infrared radiation characteristics, in particular, the change in the vector direction of electromagnetic field oscillation in a perpendicular plane and/or the transformation of the angular distribution of the infrared flux. In the latter case, it is possible to change the infrared radiation characteristics only for the infrared sources illuminating a certain part of the scene. The term “control (or command) signal” shall be understood and interpreted as an electromagnetic pulse or a series of electromagnetic pulses that change the operation mode of a system component (in particular, polarizing or scattering means). “Access control and management means” include soft- and hardware that restrict access of people or other biological objects to the territory under supervision. The authorized persons may enter such territories through checkpoints equipped with opening/closing facilities, which in various positions prevent or open access for identified persons.
The term “body temperature measuring” means determining the temperature of open surface areas of a human body using infrared rays emitted and captured by the corresponding components of the system. The term “remote means of storing and processing information” means soft- and hardware means, which analyze the images received from the camera and which are physically located outside the housing of a device or a system. Wired or wireless communication with these means can be provided over a local area network or over the Internet. The term “splitting the scene into segments” means subdividing the image captured by the camera into at least several sections located on planes with different depths in the physical space at which the camera is directed, and splitting each plane into sections along vertical and horizontal directions. The term “foreground” shall be understood and interpreted as part of the scene captured by the camera. This notion determines the distance (for example, between 30 cm and 1 meter from the lens) at which a biological object should stand, depending on the conditional subdivision of the image space by depth. The said distance should be properly specified because it is required to adjust brightness of the LED backlight. The foreground distance is limited only by the minimum focusing distance of the camera lens. Accordingly, the “background” is also determined by its actual location in the scene, but at the same time the background distance from the lens is limited by the maximum brightness of the LEDs. The number of intermediate positions (mid-grounds) is selected based on the dynamic range of the camera used. Thus, it is possible to determine the distance from the device to the object if the plane of its location is known. The object’s exposure will be correct only if it is located on a certain plane (or in the transition area). On all other planes, the object will be either underexposed or overexposed.
In the context of this invention, the term “cloud computing” shall be understood and interpreted as the spatial separation of image analysis means from the rest of the system components so that communication with the said means is provided through a network protocol. In a physical sense, this means that the devices used to analyze images are located on a server that collects information from a variety of biometric identification systems. In more detail, admission to a limited access territory is provided by a variety of biometric identification systems installed at each checkpoint, depending on their number. The images from the cameras are sent to the server with the image analysis means, which may be physically located inside or outside the controlled territory. The term “a feature vector” means a feature description of an object, i.e., a vector composed of the values attributed to a certain set of features characterizing the images captured by the camera. In the context of this invention, the notion “vital signs” shall be understood as the characteristics of a living person, such as: the vein pattern on the face and the body temperature corresponding to the normal body temperature of a human being. The notion “frame thread” is understood as multiple shots of different planes of the same scene recorded by the camera in progressive mode or different planes illuminated by LEDs with different radiation frequencies. The term “authentication” means the procedure for verifying the person’s identity. In this case, the person’s biometric characteristics serve as an authentication factor.
The words “comprising”, “containing”, “including” mean that the named component, element, part or method step is present in the composition, object or method, but does not exclude the presence of other components, materials, parts, steps, even if such components, materials, parts, or method steps perform the same function as the named ones.
The materials, from which the various elements of this invention are made, are indicated below in the description of specific embodiments. The specified materials are typical, but not mandatory for use. They can be replaced by numerous analogues that perform the same function as the materials, the examples of which are given in the description below.
Examples
Below the LEDs, the secondary 7 and the primary 8 cameras are installed, one under the other. The cameras are located in such a manner that their angles of view, which determine the scene being captured, are almost identical. The invention suggests using Omnivision® OV5647, OV2311, or Sony® IMX219 cameras. To recognize objects in a thread, the Omnivision® OV2311 camera is used, as the most light-sensitive and capable of producing video at a high frame rate (120 or 240 frames per second) with a correct exposure. The Sony® IMX219 can capture ultra-high frame rate video (greater than 1000 fps). This camera can more accurately recognize emotions and the direction of a person's gaze. The cameras are equipped with infrared filters that cut off the visible light range. Besides, on the front surface along the edge of the system housing 2, there is a light indicator 9, which allows feedback to the user, i.e. to a biological object being identified. On the back side of the housing 2, there is a sound indicator 10, which works together with the light indicator, an antenna 11 of the Wi-Fi®/Bluetooth® wireless interface, and an interface for wired connection to the Internet 12.
The device works as follows. The system 1 receives and processes information from the backlight control subunit 14 using a peripheral controller 15, made in the form of a motherboard that incorporates the computing module 13. On the basis of the results obtained after processing by the computing module, the control subunit 14 adjusts the backlight parameters. The peripheral controller 15 contains: a microcontroller; a set of linear and switching regulators; power switching units for different devices; a dry-contact input for peripheral equipment; an open-collector transistor circuit and a digital interface; a backlight controller with a distance sensor; a lighting module; and a primary camera. The distance sensor measures the distance to the first object in the sensor's field of view. The field of view of the sensor coincides with the viewing angle of the primary camera 8.
The peripheral controller 15 generates a control signal for the backlight control subunit 14 and turns on the scene illumination with two clusters of LEDs 3 and 4. The primary camera 8 captures the image of the scene illuminated by LEDs 3 and 4. The captured image is transmitted to the computing module 13, which performs its preliminary processing (radial distortion correction, exposure compensation, correction of the white and black levels). Noise is removed from the image by frequency processing. Then the resultant video file formed of the images obtained at the previous stages is encoded with H.264 compression. After the image is processed, the computing module 13 detects a person's face by moving a 64 x 64 pixel sliding window across the image. The gradient of every pixel in the window is calculated by combining the horizontal and vertical gradient approximations, using the classical pair of convolution kernels (-1, 0, 1) and (-1, 0, 1)^T. Convolution from left to right produces the horizontal gradient, and convolution from top to bottom produces the vertical gradient. After the gradients are obtained, the gradient magnitude and gradient angle are calculated for each pixel. A histogram of magnitudes and angles, essentially a vector of 9 bins in 20-degree increments, is constructed. The output is transmitted to a fully connected neural network for classification. The classifier output is a real number in the interval [0, 1], where 0 means “no face found” and 1 means “face detected”. At the next stage, the detected faces are extracted for processing: the image is cropped frame by frame to the size of the face and normalized by the angle of rotation. Then the analysis means extract the features of the biological object using a network with the ResNet34 CNN architecture.
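The gradient-and-histogram step described above can be sketched as follows. This is a minimal illustration of the (-1, 0, 1) kernels and the 9-bin, 20-degree histogram, not the patented implementation; the 8 x 8 cell size and magnitude weighting are assumptions borrowed from common HOG practice.

```python
import numpy as np

def hog_cell_descriptor(cell):
    """Compute a 9-bin gradient histogram for one cell of a grayscale
    patch: horizontal and vertical gradients from the (-1, 0, 1)
    kernels, then a histogram of gradient angles in 20-degree
    increments, weighted by gradient magnitude."""
    cell = cell.astype(np.float64)
    # Horizontal gradient: convolution with (-1, 0, 1), left to right
    gx = np.zeros_like(cell)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]
    # Vertical gradient: convolution with (-1, 0, 1)^T, top to bottom
    gy = np.zeros_like(cell)
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]
    magnitude = np.hypot(gx, gy)
    # Unsigned gradient angle folded into [0, 180) degrees
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    # 9 bins of 20 degrees each, weighted by gradient magnitude
    hist, _ = np.histogram(angle, bins=9, range=(0.0, 180.0),
                           weights=magnitude)
    return hist

# A vertical edge produces a purely horizontal gradient (angle 0),
# so all histogram weight lands in the first bin.
patch = np.zeros((8, 8))
patch[:, 4:] = 255.0
h = hog_cell_descriptor(patch)
```

A full detector would tile the 64 x 64 window into such cells, concatenate the histograms, and feed the result to the classifier.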
The infrared rays provide uniform illumination of a biological object to increase the accuracy of its recognition and identification. The system works as follows. The controller 15 sends appropriate command signals to the matrix filters 5, 6. This allows splitting the illuminated space into at least four areas along both the horizontal and vertical directions. In addition, the image depth is subdivided into fore- and background by controlling the brightness of the backlight LEDs 3, 4. In some cases, with large sizes of the illuminated scene, intermediate shots (mid-grounds) can be added. The maximum number of shots is determined by the number and maximum brightness of the backlight LEDs 3, 4. The maximum overall dimensions of the illuminated scene are limited by the scattering angle of the LEDs 3, 4, the viewing angles of the camera lenses 7, 8, and their maximum resolution. The minimum and maximum distances to the object at the point of capture are limited by the focal length of the camera. The direction and degree of light scattering from the backlight LEDs 3, 4 are controlled by the matrix filters 5, 6.
The foreground is illuminated with 940 nm LEDs 4 to reveal the pattern of blood vessels in a human body. Additionally, the foreground is illuminated by the LEDs 3 with a wavelength of 850 nm, which allows obtaining a detailed image of a person's face. All other planes, including the background, are illuminated by the LEDs 3 with a wavelength of 850 nm. The mid- and background planes serve to trace a person's face in the frame thread. The primary camera 8, used for face identification, is close in parameters or identical to the secondary camera 7, used to determine the backlight brightness in different zones. The image transmitted for processing from the secondary camera 7 comes at almost double the frame rate and in a lower resolution than the images sent from the primary camera 8. The secondary camera 7 measures the level of illumination of each scene segment. Based on the received data, the controller 15 corrects the command pulse sent to the backlight LEDs 3, 4 and the matrix filters 5, 6, achieving the correct exposure of the frame threads transmitted by the primary camera 8. The video from the primary camera 8 serves to obtain an image of a person's face. The video is divided into several frame threads, the number of which is determined by the number of planes and by the operating modes of the backlight LEDs. Each frame thread corresponds to a specific illuminated plane. In different frame threads, the foreground is illuminated by LEDs with wavelengths of 850 nm and 940 nm. The vascular pattern on a human body is detected to confirm that the object being identified is a living person.
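The per-segment exposure correction described above can be sketched as a simple feedback loop. The proportional-control strategy, the target brightness of 128, and the gain value are illustrative assumptions; the patent does not disclose the controller's actual algorithm.

```python
def correct_backlight(measured_levels, duties, target_level=128.0, gain=0.002):
    """Given the mean brightness (0-255) of each scene segment as
    measured by the secondary camera, nudge the corresponding LED duty
    cycle (0.0-1.0) toward correct exposure of that segment."""
    new_duties = []
    for level, duty in zip(measured_levels, duties):
        error = target_level - level        # positive -> segment too dark
        duty = min(1.0, max(0.0, duty + gain * error))
        new_duties.append(duty)
    return new_duties

# Example: foreground slightly underexposed, background overexposed;
# the loop brightens the first segment and dims the second.
duties = correct_backlight([100.0, 220.0], [0.5, 0.8])
```

Running this once per secondary-camera frame would converge each segment toward the target exposure while keeping the duty cycles within their physical limits.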
Embodiments of the invention are not limited to the specific examples above. Other forms of implementation of the offered technical solution may be proposed without deviating from the scope of the invention. Unless the context clearly indicates otherwise, the configuration and relative position of the system's main components and units may also vary; however, some configurations have been described herein.
The exemplary embodiments disclosed above are provided to show the industrial applicability of the apparatus and to give a general impression of the capabilities of the proposed system and method. The scope of legal protection of the offered solution is determined by the claims, and not by the description herein, and all changes made using equivalent features fall under the legal protection of this invention.
Claims (25)
- Biometric identification system containing the sources of infrared radiation adapted to illuminate the scene, a camera configured to capture images of the scene illuminated by the infrared sources and image analysis means, configured to extract features of a biological object detected in the scene illuminated by the infrared sources, characterized in that, it additionally contains means for control of infrared sources which include a subunit, controlling the cluster of polarizing and scattering means, with which the infrared sources are equipped, and a secondary camera configured to obtain images of a scene illuminated by the infrared sources; wherein the infrared sources are subdivided into the first cluster of the first infrared emission sources and the second cluster of the second infrared emission sources, and the control subunit being part of the infrared source control system is configured to analyze the images captured by the secondary camera; based on the results of such analysis, the control subunit is configured to send commands to the polarizing and scattering means.
- A system according to claim 1, characterized in that, the biological object is a human being.
- A system according to claim 1, characterized in that, the means for analyzing the image transmitted from the camera, contain a storage medium having computer executable instructions stored thereon, and a hardware processor for executing the computer executable instructions.
- A system according to claims 2, 3, characterized in that, the image analysis means are configured to detect a person's face in the images received from the camera.
- A system according to claim 1, characterized in that, the first cluster of the first infrared emission sources consists of at least one group of LEDs emitting in the 800-890 nm range.
- A system according to claim 1, characterized in that, the second cluster of the second infrared emission sources consists of at least one group of LEDs emitting in the 891-990 nm range.
- A system according to claim 1, characterized in that, it contains soft- and hardware designed to detect a biological object in the scene illuminated by the infrared sources.
- A system according to claim 1, characterized in that, the control subunit as part of the infrared source control system is configured to measure the exposure of scene images captured by a secondary camera.
- A system according to claim 1, characterized in that, the control subunit as part of the infrared source control system is configured to polarize and scatter the radiation of the infrared sources through a set of means for polarization and scattering, correspondingly, by a control signal sent to the specified means of polarization and scattering.
- A system according to claims 8, 9, characterized in that, the control subunit as part of the infrared source control system is configured with a possibility to change the polarization and scattering of a set of means for polarization and scattering based on the analysis of images captured by the secondary camera.
- A system according to claim 1, characterized in that, it is connected with access control and management equipment that is capable of functioning based on the data received by analysis of an image taken by the camera.
- A system according to claims 1, 2, characterized in that, it is equipped with audio and/or visual indicators configured to generate messages to inform the person undergoing the identification process about the identification results.
- A system according to claim 1, characterized in that, the image analysis means are configured to measure the body temperature of a biological object.
- A system according to claim 1, characterized in that, the information about the biological objects that are undergoing and/or have undergone a biometric identification procedure is sent to a remote data storage and processing medium.
- A system according to claim 1, characterized in that, it is capable of splitting the scene illuminated by the infrared sources into segments, in such a way that each segment is a part of the whole image, and the exposure of these segments allows detecting a biological object (if such object is present in the specified area) and capturing its images.
- A system according to claim 15, characterized in that, it is configured with a possibility to subdivide the scene at least into fore- and background; wherein the foreground is illuminated by the first cluster of the first infrared emission sources and the second cluster of the second infrared emission sources, while the background area is illuminated by the second cluster of the second infrared emission sources.
- A system according to claim 1, characterized in that, the image analysis means are based on cloud computing.
- A method for biometric identification of biological objects, which includes processing by image analysis means of the scene illuminated by the infrared sources and captured by the camera, detecting a biological object in the processed images, extracting features of a biological object that identify this object, forming a feature vector based on the extracted features and identifying the biological object based on the feature vector, wherein the scene is illuminated by the first cluster of the first infrared emission sources and the second cluster of the second infrared emission sources, the scene is divided into zones in such a way that each zone is illuminated by the first cluster of the first infrared emission sources and / or the second cluster of the second infrared emission sources so that it is possible to extract features of a biological object from the picture of the illuminated segment of the scene.
- A method according to claim 18, characterized in that, the biological object is checked for the presence of vital signs in such a way that the presence of blood vessels of the biological object is detected by image analysis means.
- A method according to claims 18, 19 characterized in that, the image analysis means extract the image of a biological object face from the picture captured by the camera, provided that the biological object is a human being.
- A method according to claim 18, characterized in that, based on the analysis of images received from the secondary camera, the control subunit sends command signals to polarizing and scattering means with which the infrared sources are equipped.
- A method according to claim 18, characterized in that, the whole scene illuminated by the infrared sources is split into at least fore- and background segments the images of which are captured by the camera; and the images of different planes are directed to separate channels.
- A method according to claims 19, 22, characterized in that, the foreground is subdivided into at least two data processing channels; wherein the first channel is formed by illumination of the foreground segment of the scene by the first cluster of the first infrared emission sources, while the second channel is formed by illumination of the foreground segment of the scene by the second cluster of the second infrared emission sources; wherein the first channel is used to analyze the vital signs of a biological object, and the second channel is used to analyze the features of the biological object.
- A method according to claim 18, characterized in that, based on the identification results, the authentication of a biological object is carried out.
- A method according to claim 24, characterized in that, based on the results of the authentication of a biological object, operation of an Access Control system is monitored and controlled.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
RU2022113688A RU2791821C1 (en) | 2022-05-23 | Biometric identification system and method for biometric identification | |
RU2022113688 | 2022-05-23 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023229498A1 true WO2023229498A1 (en) | 2023-11-30 |
Family
ID=88919657
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/RU2023/050112 WO2023229498A1 (en) | 2022-05-23 | 2023-05-11 | Biometric identification system and method for biometric identification |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023229498A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010099356A1 (en) * | 2009-02-26 | 2010-09-02 | Lumidigm, Inc. | Method and apparatus to combine biometric sensing and other functionality |
US20160012218A1 (en) * | 2013-10-08 | 2016-01-14 | Sri International | Validation of the right to access an object |
US20200320321A1 (en) * | 2017-12-14 | 2020-10-08 | Redrock Biometrics Inc | Device and method for touchless palm print acquisition |
EP3929861A1 (en) * | 2019-02-18 | 2021-12-29 | NEC Corporation | Image processing device, method, and system, and computer-readable medium |
RU2763756C1 (en) * | 2018-02-27 | 2022-01-10 | Конинклейке Филипс Н.В. | Obtaining images for use in determining one or more properties of the subject's skin |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6932150B2 (en) | Methods and devices for face detection / recognition systems | |
US7065232B2 (en) | Three-dimensional ear biometrics system and method | |
US7574021B2 (en) | Iris recognition for a secure facility | |
US9934436B2 (en) | System and method for 3D iris recognition | |
KR102317180B1 (en) | Apparatus and method of face recognition verifying liveness based on 3d depth information and ir information | |
CN104680128B (en) | Biological feature recognition method and system based on four-dimensional analysis | |
US11756338B2 (en) | Authentication device, authentication method, and recording medium | |
US11348370B2 (en) | Iris authentication device, iris authentication method, and recording medium | |
US20210256244A1 (en) | Method for authentication or identification of an individual | |
KR20210131891A (en) | Method for authentication or identification of an individual | |
JP2007011667A (en) | Iris authentication device and iris authentication method | |
CN112016525A (en) | Non-contact fingerprint acquisition method and device | |
KR20090086891A (en) | Face recognition system and method using the infrared rays | |
JP2009187130A (en) | Face authentication device | |
KR101919090B1 (en) | Apparatus and method of face recognition verifying liveness based on 3d depth information and ir information | |
WO2020065851A1 (en) | Iris recognition device, iris recognition method and storage medium | |
WO2023229498A1 (en) | Biometric identification system and method for biometric identification | |
RU2791821C1 (en) | Biometric identification system and method for biometric identification | |
US20230350996A1 (en) | Face biometric recognition with anti-spoofing | |
CN112232157A (en) | Fingerprint area detection method, device, equipment and storage medium | |
Bashir et al. | Video surveillance for biometrics: long-range multi-biometric system | |
CN112232152B (en) | Non-contact fingerprint identification method and device, terminal and storage medium | |
CN115968487A (en) | Anti-spoofing system | |
Venugopalan et al. | Unconstrained iris acquisition and recognition using cots ptz camera | |
KR20040006703A (en) | Iris recognition system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23812240 Country of ref document: EP Kind code of ref document: A1 |