CN212484401U - Image sensing device and electronic apparatus - Google Patents
- Publication number
- CN212484401U (application CN202020785445.6U)
- Authority
- CN
- China
- Prior art keywords
- infrared
- tof
- detected
- light
- floodlight
- Prior art date
- Legal status: Active (status assumed; Google has not performed a legal analysis)
Landscapes
- Length Measuring Devices By Optical Means (AREA)
- Studio Devices (AREA)
Abstract
The application provides an image sensing device and an electronic apparatus. The image sensing device includes an infrared imaging module and a TOF module. The infrared imaging module includes: an infrared emitting component for emitting infrared floodlight and projecting it onto an object to be detected; and an infrared camera for receiving the infrared floodlight returned after it irradiates the object to be detected, so as to obtain a two-dimensional image of the object to be detected. The TOF module includes: a TOF emitting component for emitting infrared light pulses and projecting them onto the object to be detected; and a TOF receiving component for receiving the infrared light pulses, emitted by the TOF emitting component, that return from the object to be detected, so as to obtain depth information of the object to be detected.
Description
Technical Field
The present application relates to the field of optical detection technology, and more particularly, to an image sensing device and an electronic apparatus.
Background
With the rapid development of 3D sensing technology, more and more electronic devices use 3D sensing modules to sense 3D scene information. For example, 3D face recognition, 3D gesture recognition, AR/VR, 3D mapping, 3D ranging, and autonomous driving are currently development hot spots for major manufacturers. Taking 3D face recognition as an example, it is a biometric technology that performs identification based on a person's facial features.
3D face recognition based on depth information can better ensure the security of face recognition. However, an electronic device adopting this technology needs to be provided with both a depth camera and a 2D camera, along with corresponding light sources and other auxiliary circuits. These occupy a certain amount of space in the electronic device and cannot meet the requirements for miniaturized, light, and thin electronic devices.
SUMMARY OF THE UTILITY MODEL
The application provides an image sensing device and an electronic apparatus that reduce the space occupied in the electronic apparatus and provide a longer sensing distance.
In a first aspect, an image sensing apparatus is provided, comprising an infrared imaging module and a TOF module, wherein the infrared imaging module includes:
an infrared emitting component for emitting infrared floodlight and projecting the infrared floodlight onto an object to be detected; and
an infrared camera for receiving the infrared floodlight returned after the infrared floodlight irradiates the object to be detected, so as to obtain a two-dimensional image of the object to be detected;
the TOF module includes:
a TOF emitting component for emitting infrared light pulses and projecting the infrared light pulses onto the object to be detected; and
a TOF receiving component for receiving the infrared light pulses, emitted by the TOF emitting component, that return from the object to be detected, so as to obtain depth information of the object to be detected.
In some possible implementations, the TOF module is an I-TOF module, and the TOF receiving component obtains the depth information of the object to be detected according to the phase difference between the received infrared floodlight pulses and the infrared floodlight pulses emitted by the TOF emitting component; or the TOF module is a D-TOF module, which obtains the depth information of the object to be detected according to the time difference between the received infrared floodlight pulses and the infrared floodlight pulses emitted by the TOF emitting component.
In some possible implementations, the image sensing apparatus further includes:
a processing unit for determining whether the object to be detected is an authorized user according to the two-dimensional image and the depth information of the object to be detected.
In some possible implementations, the infrared emitting component is an infrared flood illuminator for emitting infrared floodlight with a uniform intensity distribution.
In some possible implementations, the infrared imaging module further includes a light source driving component for driving the infrared emitting component to emit the infrared floodlight, so that the infrared camera acquires the two-dimensional image of the object to be detected from the infrared floodlight.
In some possible implementations, the TOF module further includes a driving component for driving the TOF emitting component to emit the infrared floodlight pulses, so that the TOF receiving component acquires the depth information of the object to be detected from the infrared floodlight pulses.
In some possible implementations, the drive assembly is disposed in the TOF receiving assembly.
In some possible implementations, the TOF emitting component includes a light source for emitting infrared light pulses and a light homogenizing element for receiving the infrared light pulses from the light source and projecting them uniformly outward.
In some possible implementations, the light homogenizing element is any one of a diffuser, a DOE, or a micro-lens array, and the light source is an array light source.
In a second aspect, there is provided an electronic device comprising an image sensing apparatus as in the first aspect or any possible implementation thereof.
Based on the above technical solutions, the image sensing device of the present application adopts a TOF module, which provides a longer sensing distance in a more compact package. Accordingly, an electronic device using the image sensing apparatus achieves a better sensing effect and has more room to accommodate other components.
Drawings
Fig. 1 is a schematic diagram of an image sensing apparatus according to an embodiment of the present application.
Fig. 2 is a schematic diagram of an electronic device according to an embodiment of the application.
Fig. 3 is a schematic diagram of an image sensing device according to another embodiment of the present application.
Fig. 4 is a schematic diagram of an electronic device according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Further, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In the description of the present application, it is to be understood that the terms "first" and "second" are used merely for descriptive purposes and are not to be construed as indicating or implying relative importance or implying the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject technology can be practiced without one or more of the specific details, or with other structures, components, and so forth. In other instances, well-known structures or operations are not shown or described in detail to avoid obscuring the focus of the application.
Referring to fig. 1, fig. 1 shows a schematic structural diagram of an image sensing device 10 according to an embodiment of the present application. Optionally, the image sensing apparatus 10 may be mounted on an electronic device for three-dimensional (3D) information sensing. For example, but not limited to, the image sensing device 10 may be used for face recognition, gesture or motion recognition, building recognition, scene recognition and modeling, augmented reality (AR)/virtual reality (VR), ranging, or 3D mapping. Examples of the electronic device include, but are not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart wearable device, a smart door lock, a vehicle-mounted electronic device, a medical device, an aviation device, and other devices or apparatuses requiring a three-dimensional (3D) information sensing function.
Specifically, as shown in fig. 1, the image sensing apparatus 10 includes a Time Of Flight (TOF) module 11 and a camera 12. The TOF module 11 includes a TOF transmitting assembly 110 and a TOF receiving assembly 111. The TOF transmitting component 110 is used for transmitting floodlight 201 to the object to be detected, and the TOF receiving component 111 is used for receiving the floodlight 202 returned from the object to be detected so as to obtain depth information of the object to be detected. The camera 12 is configured to receive floodlight 203 returned from the object to be detected, so as to obtain two-dimensional image information of the object to be detected.
The TOF receiving assembly 111 and the camera 12 share the same TOF transmitting assembly 110. This eliminates a dedicated flood illuminator matched to the camera 12 and a driver for driving that illuminator, which reduces product cost and miniaturizes the image sensing device 10.
The TOF module 11 calculates depth information of the object to be detected based on the time-of-flight sensing principle. For example, the TOF module 11 may perform depth information sensing based on the indirect time-of-flight (I-TOF) sensing principle; such a TOF module is accordingly referred to as an I-TOF module. The I-TOF module obtains the depth information of the object to be detected according to the phase difference between the emitted light beam and the received light beam. Since the time difference can be obtained indirectly from the phase difference, the I-TOF module is named accordingly. Alternatively, the TOF module 11 may perform depth information sensing based on the direct time-of-flight (D-TOF) sensing principle; such a TOF module is accordingly referred to as a D-TOF module. The D-TOF module obtains the depth information of the object to be detected according to the time difference between the emitted light beam and the received light beam.
In the present application, the TOF module 11 is preferably an I-TOF module.
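The following Python sketch, which is illustrative only and not part of the disclosed apparatus, shows how depth follows from a directly measured time difference (D-TOF) or, indirectly, from a phase difference at a known modulation frequency (I-TOF); the function names and example values are assumptions:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def dtof_depth(time_diff_s: float) -> float:
    """D-TOF: depth from the directly measured round-trip time."""
    return C * time_diff_s / 2.0

def itof_depth(phase_diff_rad: float, mod_freq_hz: float) -> float:
    """I-TOF: the phase difference indirectly yields the time difference."""
    time_diff_s = phase_diff_rad / (2.0 * math.pi * mod_freq_hz)
    return C * time_diff_s / 2.0

# A phase shift of pi/2 at an assumed 20 MHz modulation frequency
# corresponds to a round trip of 12.5 ns, i.e. a depth of about 1.87 m.
print(itof_depth(math.pi / 2, 20e6))
```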
In this embodiment, the TOF transmitting component 110 is configured to transmit an infrared floodlight pulse 201. The infrared floodlight pulse 201 irradiates the object to be detected and generates returned light signals, i.e., returned infrared floodlight pulses such as infrared floodlight pulse 202 and infrared floodlight pulse 203. The TOF receiving component 111 may receive the returned infrared floodlight pulse 202; in other words, the TOF receiving component 111 may sense the light signal returned from the object to be detected. The returned infrared floodlight pulse 202 carries depth information (or depth-of-field information) of the object to be detected, so that the TOF module 11 implements its depth imaging function for the object to be detected.
The camera 12 is used for collecting two-dimensional infrared images. Specifically, the camera 12 may collect infrared floodlight pulses 203 returned by the infrared floodlight pulses 201 from the object to be detected, and the infrared floodlight pulses 203 are used to acquire a two-dimensional infrared image of the object to be detected.
Therefore, in the embodiment of the present application, the imaging device for two-dimensional imaging and the imaging device for depth imaging may share one light source. There is thus no need to provide a dedicated light source for the two-dimensional imaging device, nor the auxiliary circuits that go with such a light source, such as its driving circuit. This reduces the number of components of the image sensing device 10, lowers product cost, and reduces the size of the image sensing device 10, making it better suited to electronic equipment with size and space constraints.
In this embodiment, since the camera 12 and the TOF receiving component 111 share the TOF transmitting component 110, the image sensing apparatus 10 of the present application avoids the need to additionally provide a flood illuminator matched to the camera 12 and a driver for driving that illuminator to emit floodlight.
It should be noted that, in some embodiments, the image sensing apparatus 10 is not limited to using infrared light for two-dimensional image and depth information acquisition; any light beam in another suitable waveband may also serve as the excitation beam for sensing. Accordingly, the TOF receiving component 111 and the camera 12 may be adapted into sensing devices capable of collecting light beams in the corresponding wavebands.
Optionally, the two-dimensional image and the depth information of the object to be measured in the embodiment of the present application may be used, for example, for 3D modeling, face recognition, or simultaneous localization and mapping (SLAM), AR/VR, unmanned aerial vehicle, and the like, which is not limited in the present application.
Since the TOF receiving component 111 and the camera 12 share the same light source for imaging, the properties, such as pulse width, duration, pulse shape, etc., of the infrared floodlight pulse emitted by the TOF emitting component 110 can be flexibly configured according to the imaging requirements of the TOF receiving component 111 and the camera 12, etc. In some implementations, the TOF transmitting component 110 can be configured to transmit optical signals suitable for both imaging by the TOF receiving component 111 and the camera 12 to allow for both depth imaging and two-dimensional imaging.
In the embodiment of the present application, the object to be measured includes, but is not limited to, any object such as a human face.
In the present embodiment, the TOF transmitting assembly 110 includes a light source 1101 and a light homogenizing element 1102. The light source 1101 is configured to emit infrared light pulses, and may be, for example, a Light Emitting Diode (LED), a Vertical Cavity Surface Emitting Laser (VCSEL), a Fabry-Perot (FP) laser, a Laser Diode (LD), a Distributed Feedback (DFB) laser, an Electro-absorption Modulated Laser (EML), or a light source array composed of a plurality of light sources, which is not limited in this embodiment. The light homogenizing element 1102 is configured to receive the infrared light pulses emitted from the light source 1101 and emit an infrared floodlight pulse 201 with a uniform intensity distribution to the outside. It is to be understood that the term "uniform" is used here in a relative sense; the distribution need not be strictly uniform.
For example, the light source 1101 includes a plurality of light emitting points arranged at intervals, each light emitting point emitting an infrared light pulse. The infrared light pulses emitted from the plurality of light emitting points pass through the light homogenizing element 1102 to form a uniform surface light source, also referred to as a flood light source. Accordingly, when the light beam emitted by the light source 1101 is an infrared light pulse, the beam exiting the light homogenizing element 1102 may be referred to as an infrared floodlight pulse.
Optionally, the plurality of light emitting points are arranged in an array. For example, the light source 1101 is a VCSEL array light source, and the light source divergence angle of each light emitting point is small. Further optionally, the plurality of light emitting points are point light sources.
Optionally, the infrared light pulse is a sine wave signal or a square wave signal.
Optionally, since the emitted beam is pulsed, the intensity of the uniform surface light source varies over time. The TOF module 11 is, for example, an I-TOF module. Accordingly, the light beam emitted by the light source 1101 includes a plurality of continuous pulse signals, which are, for example, periodic.
Optionally, the TOF receiving component 111 calculates the depth information of the object to be measured, for example, according to a phase difference between the infrared floodlight pulse 201 emitted by the TOF transmitting component 110 and the received infrared floodlight pulse 202.
Optionally, the TOF module 11 is, for example, a D-TOF module. The TOF transmitting component 110 of the D-TOF module is also used for emitting infrared floodlight pulses, and the TOF receiving component 111 can calculate the depth information of the object to be detected according to the time difference between the received infrared floodlight pulses and the infrared floodlight pulses emitted by the TOF transmitting component 110.
The D-TOF module can directly obtain the time difference between the emitted light beam and the received return light beam, while the I-TOF module obtains the phase difference between the emitted light beam and the received return light beam and can then indirectly derive the time difference from the phase difference.
It should be noted that, whether an I-TOF module, a D-TOF module, or another suitable type of TOF module is used, the light beam emitted into the external space is a flood pulse beam. It can be used not only for sensing the depth information of the object to be detected but also for sensing its two-dimensional image information, thereby avoiding an additional flood illuminator working with the camera 12 and a driver for that illuminator; all such technical solutions fall within the protection scope of the present application.
The camera 12 obtains two-dimensional image information of the object to be measured according to the received infrared floodlight pulse 203.
It should be understood that, in the embodiment of the present application, the light source 1101 may also emit light in other suitable wavebands, for example ultraviolet light or visible light; that is, optical signals in other wavebands may also be used for two-dimensional imaging or depth imaging. Imaging with optical signals in the infrared waveband, however, helps reduce the dependence of the image sensing apparatus 10 on the working environment and improves the robustness of its 3D sensing performance, for example its face recognition performance.
Hereinafter, the case where the light source 1101 emits an optical signal in the infrared waveband is taken as an example, but the present application is not limited thereto.
It should be understood that the number of light sources in the light source 1101 is configured according to actual needs: one or more light sources may be provided, or a light source array composed of a plurality of light sources; alternatively, one light source may include a plurality of sub-light sources arranged at intervals, such as, but not limited to, point light sources.
Optionally, the light source 1101 may be a separate module or may be integrated with other modules in the electronic device, for example, the light source 1101 may be a part of a proximity sensor of the electronic device.
The light homogenizing element 1102 is adapted to condition the infrared light pulses from the light source 1101 into infrared floodlight pulses having a uniform intensity distribution.
As an example, the light homogenizing element 1102 is a diffuser for diffusing or atomizing the infrared light pulses emitted from the light source 1101 and projecting them uniformly outward.
As another embodiment, the light homogenizing element 1102 is a Diffractive Optical Element (DOE) configured to diffract and expand the incident infrared light pulses to form the infrared floodlight pulse, which is emitted into the space where the object to be detected is located.
As another embodiment, the light homogenizing element 1102 may also include a microlens array composed of a plurality of microlens units. In some embodiments, the plurality of microlens units are configured to receive the infrared light pulses from the light source 1101 and generate an array light beam (i.e., an infrared floodlight pulse) corresponding to the arrangement of the microlens units for outward emission. In other embodiments, the light source 1101 may include a plurality of sub-light sources corresponding to the arrangement of the microlens units in the microlens array, and each microlens unit receives the infrared light pulse of its corresponding sub-light source and collimates or focuses it for outward emission to form the infrared floodlight pulse.
In other embodiments, the light homogenizing element 1102 can also be a combination of at least two of a diffuser, a DOE, and a microlens array.
Optionally, in some embodiments, the TOF transmitting assembly 110 may further include a lens disposed between the light source 1101 and the light homogenizing element 1102. The infrared light pulses emitted from the light source 1101 are collimated or converged by the lens, then enter the light homogenizing element 1102, and are homogenized by it to form the infrared floodlight pulse.
In the embodiment of the present application, the TOF receiving component 111 is any imaging device implemented based on the time-of-flight principle, such as a camera or a video camera. The TOF receiving component 111 is used for receiving an infrared floodlight pulse 202 returned from the object to be detected by an infrared floodlight pulse 201 emitted by the TOF emitting component 110. The time difference (i.e. the time of flight) between the emission time of infrared floodlight pulse 201 and the reception time of infrared floodlight pulse 202 or the phase difference between infrared floodlight pulse 201 and infrared floodlight pulse 202 is used to determine the depth information of the object to be measured.
In some embodiments, the TOF receiving component 111 includes an image sensor, which may include a pixel array composed of a plurality of pixel cells. The pixel cells are arranged to convert the received infrared flood pulses 202 into corresponding electrical signals. Alternatively, the pixel unit may be, for example and without limitation, a Charge-Coupled Device (CCD), a Complementary Metal Oxide Semiconductor (CMOS), an Avalanche Diode (AD), a Single Photon Avalanche Diode (SPAD), or other devices.
Optionally, the TOF receiving component 111 further includes a readout circuit composed of one or more of a synchronization control element, a signal amplifier, a time-to-digital converter (TDC), an analog-to-digital converter (ADC), and the like, which is not limited in this application. Further optionally, part or all of the readout circuitry is integrated into the image sensor. Alternatively, a part or all of the readout circuit is disposed outside the image sensor and connected to the image sensor.
The readout circuit is used for obtaining the depth information of the object to be detected according to the electrical signals converted by the image sensor. Optionally, the readout circuit obtains the depth information of the object to be detected according to the time difference or phase difference between the transmitted infrared floodlight pulse 201 and the received infrared floodlight pulse 202.
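To make the phase-difference readout concrete, here is a sketch, not taken from the patent, of the widely used four-bucket I-TOF demodulation; the sample names, the sign convention (one common choice among several), and the modulation frequency are assumptions:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def four_phase_depth(c0, c90, c180, c270, mod_freq_hz):
    """Estimate depth from four correlation samples taken at 0, 90,
    180, and 270 degree shifts of the emitted modulation signal."""
    phase = math.atan2(c270 - c90, c0 - c180)  # phase difference, radians
    phase %= 2.0 * math.pi                     # wrap into [0, 2*pi)
    return C * phase / (4.0 * math.pi * mod_freq_hz)

# Four samples from one pixel; at 20 MHz the unambiguous range is
# C / (2 * f), about 7.5 m.
print(four_phase_depth(100, 20, 40, 120, mod_freq_hz=20e6))
```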
As an example, for example and without limitation, the TOF receiving component 111 includes a pixel array composed of 30,000 pixel units, which can detect depth information at 30,000 points on the surface of the object to be detected. Based on the depth information of these 30,000 points, the surface profile of the object to be detected, for example, can be roughly determined.
Optionally, the resolution of the camera 12 is higher than or even much higher than the resolution of the image sensor of the TOF receiving assembly 111.
In the embodiment of the present application, the depth information of the object to be detected may also be referred to as a depth image or range image. The pixel values in the depth image of the object to be detected represent the distance between each point on the surface of the object and a common reference point or reference plane. In one possible implementation, the TOF receiving component 111 may acquire a depth image of the object to be detected whose pixel values represent the distances of points on the object's surface from the TOF receiving component 111. Changes in the pixel values of the depth image correspond to depth changes of the object's surface and reflect the geometric shape and depth information of its visible surface.
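As an aside to illustrate the depth-image definition above (a sketch under assumed parameters, not part of the disclosure), a depth image can be back-projected into a 3D point set with a simple pinhole camera model; the intrinsic parameters below are made-up values:

```python
import numpy as np

def depth_image_to_points(depth_m, fx, fy, cx, cy):
    """Back-project an H x W depth image (in metres) into an (H*W, 3)
    point cloud using pinhole intrinsics (fx, fy, cx, cy)."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)

# Assumed intrinsics for a low-resolution TOF sensor.
flat_wall = np.full((160, 240), 0.5)  # every point 0.5 m away
points = depth_image_to_points(flat_wall, fx=200.0, fy=200.0, cx=120.0, cy=80.0)
```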
The camera 12 may be any device capable of acquiring 2D infrared images. In some embodiments, the camera 12 includes a filter and an infrared light image sensor. The filter is used for transmitting light of a target wavelength and filtering out light of non-target wavelengths, and the infrared light image sensor performs light detection at the target wavelength and converts the detected light signals into electrical signals. Optionally, the infrared light image sensor is a CCD image sensor or a CMOS image sensor. Optionally, the infrared light image sensor includes a plurality of pixel units, each of which converts the received light signal to form one pixel value in a 2D infrared image. Optionally, the pixel unit may employ a photodiode, a Metal Oxide Semiconductor Field Effect Transistor (MOSFET), or other devices. Optionally, the pixel unit has higher light sensitivity and higher quantum efficiency for light of the target wavelength, so as to detect light of the corresponding wavelength.
In some embodiments, the target wavelength belongs to the infrared waveband, for example near-infrared light with a wavelength of 940 nm. The filter is then configured to transmit infrared light with a wavelength of 940 nm and to block visible light and light of other wavelengths from passing through, and the infrared light image sensor detects the 940 nm infrared light and forms a 2D infrared image of the object to be detected. Because ambient light contains relatively little infrared light in the 940 nm band, imaging the camera 12 with 940 nm infrared light enables the image sensing device 10 to work in different environments and reduces interference from ambient light.
In some embodiments, each pixel value in the two-dimensional image acquired by the camera 12 is represented as a gray value of the image, the appearance shape of the object to be measured can be represented by the gray value of the image, and further whether a human face exists in the two-dimensional image and face information of the surface of the human face can be determined according to the gray value of the image.
It should be understood that the camera 12 in the embodiment of the present application may be replaced by other cameras capable of two-dimensional imaging, such as an RGB camera, a grayscale camera, or a structured light camera. Correspondingly, the two-dimensional image may be an RGB image, a grayscale image, a structured light image, or the like. The camera 12 is used for imaging the infrared two-dimensional image, which is beneficial to ensuring that the image sensing device 10 can adapt to different environments, thereby improving the robustness of the face recognition performance.
For the sake of distinction and explanation, the imaging device for two-dimensional imaging is referred to as a first camera, and the imaging device for depth imaging is referred to as a second camera. In one specific example, the first camera may be the camera 12, and the second camera may be the TOF receiving assembly 111.
It should be understood that, in the embodiment of the present application, the wavelength band range of the light emitted by the light source 1101 may be determined according to the wavelength band used for imaging by the first camera for two-dimensional imaging and the second camera for depth imaging. For example, the wavelength range of the light emitted by the light source 1101 may be configured to at least include the wavelength range used by the first camera and the second camera for imaging. Optionally, when the first camera is implemented by using the camera 12, the wavelength range of the light emitted by the light source 1101 may be configured to include part or all of the infrared light wavelength band, or when the first camera is implemented by using an RGB camera, the wavelength range of the light emitted by the light source 1101 may be configured to include at least part or all of the visible light wavelength band.
Optionally, in this embodiment of the application, the wavelength bands used for imaging by the first camera and the second camera may be the same, for example, both the wavelength bands are infrared wavelength bands, or may also be different, for example, one is an infrared wavelength band, and the other is a visible wavelength band or an ultraviolet wavelength band, and the application is not limited thereto.
The first camera and the second camera may use the same imaging waveband, for example, both use the infrared waveband. Taking the first camera as the camera 12 and the second camera as the TOF receiving component 111 as an example, the TOF receiving component 111 and the camera 12 can, simultaneously or in a time-sharing manner, collect the infrared floodlight pulses that the infrared floodlight pulses 201 emitted by the TOF emitting component 110 return from the object to be detected, so as to respectively obtain the depth information and the two-dimensional image of the object to be detected. Optionally, in time-sharing acquisition, the order in which the TOF receiving component 111 and the camera 12 receive the returned infrared floodlight pulses is not limited. For example, the TOF receiving component 111 may receive the returned infrared floodlight pulses first, or the camera 12 may receive them first.
When the first camera and the second camera use different wavebands, the light source 1101 may emit light of the different wavebands in a time-sharing manner. For example, the first camera images in a first waveband and the second camera images in a second waveband. The first waveband may be, for example, an infrared waveband and the second waveband a visible waveband, or the reverse, or the two may be any other combination of wavebands. The light source 1101 may emit the light of the first waveband and the light of the second waveband in sequence, and the emission order of the two is not particularly limited. When the light source 1101 emits light of the first waveband, the first camera may receive the light returned from the object to be detected to obtain its two-dimensional image; when the light source 1101 emits light of the second waveband, the second camera may receive the light returned from the object to be detected to obtain its depth information.
In the following, the first camera and the second camera are described as imaging with light in the same waveband, i.e., the camera 12 and the TOF receiving component 111 both image with light in the same waveband, specifically the infrared waveband, but the present application is not limited thereto.
As mentioned above, when the camera 12 and the TOF receiving component 111 both image in the infrared waveband, the light source 1101 can emit infrared light pulses, which the light homogenizing element 1102 adjusts into infrared floodlight pulses; the camera 12 and the TOF receiving component 111 can then receive the infrared floodlight pulses returned from the object to be detected either simultaneously or in a time-sharing manner.
In some embodiments, the camera 12 and the TOF receiving component 111 may sequentially receive infrared floodlight pulses returned from the object to be measured in a preset order. For example, the camera 12 receives the infrared floodlight pulse returned from the object to be measured, and then the TOF receiving component 111 receives the infrared floodlight pulse returned from the object to be measured; for another example, the TOF receiving module 111 receives the infrared floodlight pulse returned from the object to be measured, and then the camera 12 receives the infrared floodlight pulse returned from the object to be measured.
In other embodiments, the camera that performs image acquisition later does so only when a specific condition is satisfied; when the specific condition is not satisfied, it does not perform image acquisition. For example, if the camera 12 performs image acquisition later, the camera 12 may acquire the infrared floodlight pulses returned from the object to be detected when the depth information acquired by the TOF receiving component 111 satisfies a specific condition. For another example, if the TOF receiving component 111 performs image acquisition later, it may acquire the infrared floodlight pulses returned from the object to be detected when the two-dimensional image acquired by the camera 12 satisfies a specific condition.
Optionally, in some embodiments of the present application, the image sensing apparatus 10 further includes:
a control unit for controlling the image acquisition order of the camera 12 and the TOF receiving component 111, i.e., the order in which the two cameras receive the returned infrared floodlight pulses.
Alternatively, the control unit may be a control unit of the image sensing device 10, or may be a control unit in an electronic device including the image sensing device 10, which is not limited in the embodiment of the present application.
As an example, the control unit may control the camera 12 and the TOF receiving component 111 to sequentially receive the infrared floodlight pulses returned from the object to be measured according to the preset sequence described above.
As another example, the control unit may control one of the camera 12 and the TOF receiving component 111 to receive the infrared floodlight pulses returned from the object to be detected first, and control the other to receive the returned infrared floodlight pulses only if a specific condition is satisfied.
As a specific example, the control unit may control the camera 12 to first receive an infrared floodlight pulse returned from the object to be detected to obtain a two-dimensional image of the object to be detected, and in a case that a face image exists in the two-dimensional image of the object to be detected, or in a case that the two-dimensional image of the object to be detected is a face image of an authorized user, control the TOF receiving component 111 to receive an infrared floodlight pulse returned from the object to be detected to obtain the depth information of the object to be detected.
As another specific example, the control unit is further configured to control the TOF receiving component 111 to first receive an infrared floodlight pulse returned from the object to be detected to obtain depth information of the object to be detected, and control the camera 12 to receive the infrared floodlight pulse returned from the object to be detected to obtain a two-dimensional image of the object to be detected when the depth information of the object to be detected conforms to a stereoscopic object feature or a human face feature, or when the depth information of the object to be detected is depth information of an authorized user.
Here, for how to determine that a face image exists in the two-dimensional image of the object to be detected, that the two-dimensional image is the face image of an authorized user, that the depth information conforms to face features or stereoscopic object features, and that the depth information is the depth information of an authorized user, reference is made to the description below.
It should be understood that the specific conditions illustrated above are only examples, and in practical applications, the specific conditions may be adjusted according to a specific face recognition algorithm, and the application is not limited thereto.
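The 2D-first sequencing in the specific example above might look like the following sketch; the capture API and the screening predicates are hypothetical names, not part of the patent:

```python
def conditional_acquisition(ir_camera, tof_receiver, has_face, is_authorized):
    """Capture a 2D image first and trigger depth acquisition only when
    the screening condition holds, saving power otherwise."""
    image_2d = ir_camera.capture()       # hypothetical capture call
    if not has_face(image_2d):           # specific condition not met
        return None                      # skip depth acquisition
    depth_map = tof_receiver.capture()   # hypothetical capture call
    if is_authorized(image_2d, depth_map):
        return image_2d, depth_map
    return None
```

The depth-first ordering of the other specific example is the same pattern with the two capture calls swapped.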
Optionally, in some embodiments of the present application, the image sensing apparatus 10 further includes:
a processing unit 13 configured to perform an authentication or recognition operation, such as, but not limited to, face recognition, gesture or motion recognition, building recognition, or scene recognition and modeling, according to the depth information and the two-dimensional image of the object to be detected.
Optionally, the processing unit 13 may be a processing unit of the image sensing apparatus 10, or may also be a processing unit of an electronic device including the image sensing apparatus 10, for example, a main control module of the electronic device, which is not limited in this embodiment of the application.
Optionally, in some embodiments, the processing unit 13 and the aforementioned control unit are independent units or modules, or the processing unit 13 and the control unit may also be a same unit or module, which is not limited in this application.
It should be understood that, the specific algorithm for performing face recognition according to the depth information of the object to be detected and the two-dimensional image of the object to be detected is not limited in the present application, and may be flexibly adjusted according to actual requirements, or user settings, or factors such as security level.
Optionally, in some embodiments, the processing unit 13 is specifically configured to:
determining whether the two-dimensional image of the object to be detected is a face image of an authorized user;
and under the condition that the two-dimensional image of the object to be detected is the face image of the authorized user, determining whether the face recognition is successful according to the depth information of the object to be detected.
For example, the processing unit 13 may extract facial feature information from the two-dimensional image of the object to be detected and match it against the pre-stored facial feature information of an authorized user's two-dimensional image; if the matching succeeds, the two-dimensional image of the object to be detected is the face image of the authorized user. Here, the facial feature information may include, for example, but not limited to, any one or more of the nose, eyes, mouth, eyebrows, forehead, cheekbones, chin, and facial contour, the width of the nose or chin, and/or distance information between any combination of the nose, eyes, mouth, eyebrows, forehead, cheekbones, and chin.
Further, the processing unit 13 may also determine whether face recognition succeeds according to typical positions in the two-dimensional image of the object to be detected and the depth information of the object to be detected. For example, success may be determined by whether the depth information of the object to be detected conforms to face features or stereoscopic object features: if it conforms, face recognition is determined to be successful; otherwise, it is determined to have failed. The depth information conforming to face features means, for example, but not limited to, that the face as a whole protrudes, the eye regions are recessed, the nose protrudes, and so on.
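A compact sketch of the two-step check just described, matching 2D features first and then using depth to reject planar images; the feature extractor, the enrolled template, and the thresholds (including the 1 cm relief value) are all assumptions:

```python
import numpy as np

def recognize(image_2d, depth_map, extract_features, enrolled_features,
              sim_threshold=0.9, relief_threshold_m=0.01):
    """Step 1: cosine-match 2D facial features against the authorized
    user's template. Step 2: require centimetre-scale depth relief so
    a flat photo of the user is rejected."""
    f = extract_features(image_2d)                        # hypothetical extractor
    sim = float(np.dot(f, enrolled_features) /
                (np.linalg.norm(f) * np.linalg.norm(enrolled_features)))
    if sim < sim_threshold:
        return False                                      # 2D match failed
    relief = float(depth_map.max() - depth_map.min())     # nose vs. cheeks etc.
    return relief > relief_threshold_m                    # assumed liveness cue
```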
Optionally, in other embodiments, the processing unit 13 may also determine whether a human face is detected or not according to the depth information of the object to be detected, or whether the depth information of the object to be detected conforms to human face features, and perform face recognition according to the two-dimensional image of the object to be detected to determine whether the face recognition is successful or not when the human face is detected or the depth information of the object to be detected conforms to the human face features.
In a specific implementation, a point set may be obtained in a three-dimensional coordinate system based on the depth information of the object to be detected, and whether the curved surface formed by the point set conforms to face features is determined; when it does, a face is determined to be detected. For example, whether a face is detected may be judged from characteristics such as whether the surface is convex overall, whether two recesses exist on the surface, and whether a protrusion of increasing height exists below the two recesses. Optionally, when the surface is convex overall, two recesses exist on it, and a protrusion of increasing height exists below the two recesses, the surface may be determined to conform to face features, i.e., the depth information of the object to be detected conforms to face features.
When the depth information of the object to be detected conforms to the facial features, the processing unit 13 may match the two-dimensional image of the object to be detected with a pre-stored two-dimensional image of an authorized user, and determine whether the object to be detected is an authorized user. For example, the processing unit 13 may extract feature points of typical positions in the face, such as eyes, nose, mouth, and the like, from the two-dimensional image of the object to be detected, and match the feature points with reference feature points of a pre-stored two-dimensional image of an authorized user, and if the matching is successful, it is determined that the face recognition is successful, otherwise, it is determined that the face recognition is failed.
Optionally, in still another embodiment, the processing unit 13 may also determine whether a face is detected according to the two-dimensional image of the object to be detected, or whether the two-dimensional image of the object to be detected conforms to the face feature, and determine whether the face recognition is successful according to the depth information of the object to be detected when the face is detected or the two-dimensional image of the object to be detected conforms to the face feature.
Optionally, the processing unit 13 may extract feature information of typical positions of eyes, a nose, a mouth, and the like from the two-dimensional image of the object to be measured, and determine whether a human face exists in the two-dimensional image according to whether the feature information conforms to two-dimensional features of the organs. For example, if the features of the eyes conform to the preset two-dimensional features of the eyes, the two-dimensional features of the nose conform to the preset two-dimensional features of the nose, and the two-dimensional features of the mouth conform to the preset two-dimensional features of the mouth, it is determined that a human face exists in the two-dimensional image, otherwise, it is determined that the human face does not exist in the two-dimensional image, and further, the subsequent recognition operation is not performed, which is beneficial to reducing the waste of processing resources.
When a face exists in the two-dimensional image, the processing unit 13 may further determine whether the object to be detected is an authorized user according to the depth information of the object to be detected, for example by matching it with the pre-stored depth information of an authorized user, and thereby determine whether face recognition succeeds. Alternatively, face recognition may combine the two-dimensional image and the depth information of the object to be detected: for example, when the two-dimensional image matches a pre-stored two-dimensional image of an authorized user, the depth image is used to determine whether the object to be detected has a three-dimensional profile, and face recognition is determined successful only when it does. This eliminates interference from planar face images and improves the security of face recognition.
In summary, the embodiment of the present application performs face recognition by combining the two-dimensional image of the object to be detected with its depth information. This closes the vulnerability in which the two-dimensional image collected by the camera during face recognition comes not from the authorized user but from a planar image such as a photo of the authorized user's face, and thus improves the security of face recognition.
It should be understood that the face recognition function of the embodiments of the present application can be applied to various scenarios such as mobile payment, screen unlocking of electronic equipment, and access-control unlocking, and the present application is not limited thereto.
Optionally, in some embodiments of the present application, the TOF module 11 further includes a driving component, configured to drive the TOF transmitting component 110 to transmit the infrared floodlight pulse 201, so that the camera 12 and the TOF receiving component 111 can receive the infrared floodlight pulse.
Optionally, in some embodiments, the driving component, the control unit and the processing unit may be independent units, modules or components, or may also be the same unit, module or component, or some of the functions executed by the three may constitute one unit, and other functions constitute another unit, and the specific functional division manner is not specifically limited in this application.
Optionally, in a specific embodiment, the driving assembly is configured to be disposed in the TOF receiving assembly 111. In other words, the TOF receiving assembly 111 may serve as a driving assembly of the TOF transmitting assembly 110 for transmitting a driving signal to the TOF transmitting assembly 110 to drive the TOF transmitting assembly 110 to emit light for image acquisition.
In some embodiments, when the TOF receiving component 111 and the camera 12 receive the infrared light pulses transmitted by the TOF transmitting component 110 in a time-sharing manner, the TOF transmitting component 110 transmits a different number of pulses depending on the situation, so that the TOF receiving component 111 obtains adequate depth information. Different situations are, for example, different scenes (outdoor, indoor, etc.). Outdoors on a sunny day, for example, to resist interference from ambient light, the TOF transmitting component 110 transmits more pulses within the same time period than it does indoors, so that the TOF receiving component 111 obtains better depth information.
However, since the number of infrared light pulses received by the camera 12 then differs across time periods or scenes (outdoor, indoor, etc.), the brightness of the images formed by the camera 12 also differs, and the user sees image flicker when viewing the real-time image on the display.
To this end, based on the solutions disclosed in the above embodiments, the present application also proposes a modified embodiment in which the TOF transmitting component 110 transmits, in a time-sharing manner, constant-brightness floodlight and variable-brightness floodlight pulses. The constant-brightness floodlight and the variable-brightness floodlight pulses are light beams of the same wavelength.
Optionally, the light source 1101 is configured to emit, in a time-sharing manner, light beams of constant intensity and light pulses of varying intensity. The light homogenizing element adjusts the constant-intensity beams into floodlight with a uniform intensity distribution and the varying-intensity beams into floodlight pulses with a uniform intensity distribution.
Optionally, the camera 12 and the TOF receiving component 111 work in a time-sharing manner. When the TOF transmitting component 110 transmits constant-brightness floodlight, the camera 12 receives it to obtain two-dimensional image information of the object to be detected; when the TOF transmitting component 110 transmits variable-brightness floodlight pulses, the TOF receiving component 111 receives them and converts them into corresponding electrical signals to obtain the depth information of the object to be detected. The variable-brightness floodlight pulses emitted by the TOF transmitting component 110 have, for example, a preset frequency.
Because the floodlight with constant brightness is received by the camera 12, the brightness of the two-dimensional image obtained by the camera 12 is stable, and the problem of image flicker can be solved.
Alternatively, for example and without limitation, the TOF emitting assembly 110 emits a constant intensity flood light for 1 second each time.
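The time-shared drive scheme of this modified embodiment could be sketched as below; the driver and capture APIs, the 1-second flood interval from the example above, and the pulse-burst duration are assumptions:

```python
def time_shared_frames(driver, camera, tof_receiver,
                       flood_s=1.0, pulse_burst_s=0.1, mod_freq_hz=20e6):
    """Alternate constant-brightness floodlight (2D frame) with
    variable-brightness pulses (depth frame) on the shared source."""
    while True:
        driver.emit_constant()                          # hypothetical driver call
        frame_2d = camera.capture(exposure_s=flood_s)   # stable brightness, no flicker
        driver.emit_pulses(freq_hz=mod_freq_hz)         # hypothetical driver call
        depth_map = tof_receiver.capture(exposure_s=pulse_burst_s)
        yield frame_2d, depth_map
```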
The main difference between this modified embodiment and the above embodiments is that, in this modified embodiment, when the camera 12 collects a two-dimensional image of the object to be detected, the TOF transmitting component 110 transmits constant-brightness floodlight. The TOF transmitting component 110, the TOF receiving component 111, and the camera 12 in this modified embodiment otherwise have the same or similar structures as those in the above embodiments, which are not repeated here.
As shown in fig. 2, an electronic device 20 is further provided in the embodiment of the present application, where the electronic device 20 includes an image sensing apparatus 21, and the image sensing apparatus 21 may be, for example, the image sensing apparatus 10 in the embodiment of the present application. The electronic device 20 includes, for example, but not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart wearable device, a smart door lock, a vehicle-mounted electronic device, a medical device, an aviation device, and other devices or apparatuses with TOF function requirements.
Optionally, the electronic device 20 may further include an earpiece, an ambient light/proximity sensor, and the like to achieve further functions. For example, in some embodiments, in consideration of the harmfulness of infrared light to human body, the proximity of a human face may be detected by a proximity sensor when the human face is too close, and the emission of the TOF emitting component in the image sensing apparatus 21 is turned off or the emission power is reduced when the human face is too close. In some embodiments, automatic communication may be realized by combining face recognition and an earpiece, for example, after the electronic device receives an incoming call, a TOF receiving component and an IR camera in the image sensing apparatus 21 required for face recognition may be started to respectively collect a depth image and an infrared image, and after face recognition is successful, a communication is connected and the earpiece and other devices are turned on to realize the communication.
Optionally, the electronic device 20 may further include a display screen, which may be used for displaying image content and for touch interaction, for example for a face recognition unlocking application. In one embodiment, when the electronic device is in a sleep state and a user picks it up, an inertial measurement unit in the electronic device detects the acceleration caused by the pick-up, the screen lights up, and the unlocking application starts, showing an instruction to unlock on the screen. At this point, the electronic device opens the TOF receiving component and the IR camera in the image sensing device 21 to collect a depth image and an infrared image for face recognition.
Optionally, the electronic device 20 may further include a storage module for storing the two-dimensional image or the depth image of the authorized user; it may also store computer program instructions for performing the functions executed by the processing unit and the control unit in the foregoing embodiments.
Taking the electronic device 20 as a mobile phone as an example, the image sensing apparatus 21 may be disposed at the top of the front surface of the mobile phone (the side on which the screen is located); in other embodiments, the image sensing apparatus 21 may also be installed at another position convenient for the user to perform face recognition, which is not limited in this application.
As shown in fig. 3, the present application also provides an image sensing apparatus 30 including an infrared (IR) imaging module 31 and a time-of-flight (TOF) module 32, wherein the IR imaging module 31 comprises:
an infrared emitting assembly 310 for emitting infrared floodlight 301;
the IR camera 311 is used for receiving the infrared floodlight 302 returned after the infrared floodlight 301 irradiates the object to be detected so as to obtain a two-dimensional image of the object to be detected;
the TOF module 32 includes:
a TOF transmitting component 320 for transmitting the infrared light pulses 303;
and the TOF receiving component 321 is configured to receive the infrared light pulse 304 that returns from the object to be detected after the TOF emitting component emits the infrared light pulse 303, so as to obtain depth information of the object to be detected.
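The underlying relation is the basic time-of-flight equation d = c·Δt/2: the emitted pulse travels to the object and back, so the measured round-trip time corresponds to twice the distance. A minimal numeric sketch of the direct-TOF case (an indirect-TOF module would instead derive Δt from the phase difference of a modulated signal):

```python
C_M_PER_S = 299_792_458.0  # speed of light


def depth_from_round_trip(dt_seconds: float) -> float:
    """Distance to the object for a measured round-trip time dt."""
    return C_M_PER_S * dt_seconds / 2.0


# A 4 ns round trip corresponds to roughly 0.6 m:
print(depth_from_round_trip(4e-9))  # ~0.5996
```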
In this application, the IR imaging module 31 for collecting an infrared two-dimensional image and the TOF module 32 for collecting a depth image are integrated together. In this way, face recognition can be performed based on both the two-dimensional image of the object to be detected and its depth information, which closes the vulnerability in which the two-dimensional image collected by the camera during face recognition originates not from the authorized user but from a planar image such as a photograph of the authorized user's face, thereby improving the security of face recognition.
Here, the IR camera 311, the TOF transmitting component 320, and the TOF receiving component 321 correspond to the camera 12, the TOF transmitting component 110, and the TOF receiving component 111 described above, respectively; for the related implementations, refer to the description of the foregoing embodiments, which is not repeated here.
Optionally, as shown in fig. 3, the infrared emitting assembly 310 may include an infrared light source 3101 and a light homogenizing element 3102, where the infrared light source 3101 is configured to emit infrared light and the light homogenizing element 3102 is configured to homogenize the infrared light into infrared floodlight; for a specific implementation of the light homogenizing element 3102, refer to the light homogenizing element 1102 in the foregoing embodiments, which is not described here again.
Optionally, in some embodiments of the present application, as shown in fig. 3, the image sensing apparatus 30 further includes:
and the processing unit 33 is configured to determine whether the object to be detected is an authorized user according to the two-dimensional image of the object to be detected and the depth information of the object to be detected.
Here, for a specific implementation of the processing unit 33, reference may be made to related implementations of the processing unit 13 in the foregoing embodiments, and details are not described here for brevity.
Optionally, in some embodiments of the present application, the processing unit 33 is specifically configured to:
determining whether the two-dimensional image of the object to be detected is a face image of an authorized user;
and under the condition that the two-dimensional image of the object to be detected is the face image of the authorized user, determining whether the face recognition is successful according to the depth information of the object to be detected.
Optionally, in some embodiments of the present application, the processing unit 33 is specifically configured to:
and determining that the face recognition is successful under the condition that the depth information of the object to be detected conforms to facial features.
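As an illustrative sketch of this two-stage logic, with the matching functions injected as stand-ins (the patent does not prescribe a particular recognition algorithm):

```python
from typing import Callable


def authenticate(
    ir_image: object,
    depth_map: object,
    is_enrolled_face: Callable[[object], bool],
    has_facial_relief: Callable[[object], bool],
) -> bool:
    """Two-stage check: 2D identity match first, then 3D depth check."""
    if not is_enrolled_face(ir_image):
        return False  # 2D image is not the authorized user's face
    # A printed photo of the authorized user fails this second stage:
    # its depth map lacks face-like 3D structure.
    return has_facial_relief(depth_map)


# Usage sketch with trivial stand-in matchers:
ok = authenticate("ir", "depth", lambda img: True, lambda d: True)
```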
Optionally, in some embodiments of the present application, the IR imaging module further includes a light source driving component configured to drive the infrared emitting component 310 to emit the infrared floodlight 301, so that the IR camera 311 can receive the infrared floodlight 302 returned from the object to be detected and obtain a two-dimensional image of the object to be detected.
As shown in fig. 4, an electronic device 40 is further provided in the embodiments of the present application. The electronic device 40 includes an image sensing device 41, which may be the image sensing apparatus 30 in the embodiments of the application; for specific implementations, refer to the related description of the foregoing embodiments, which is not repeated here. The electronic device 40 includes, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart wearable device, a smart door lock, a vehicle-mounted electronic device, a medical device, an aviation device, or another device requiring a TOF function.
Optionally, the electronic device 40 may further include an earpiece, an ambient light/proximity sensor, and the like to provide further functions. For example, in some embodiments, considering that infrared light can be harmful to the human body, the proximity sensor may detect when a human face is too close, and the emission of the TOF emitting component in the image sensing device 41 may then be turned off or its emission power reduced. In some embodiments, face recognition may be combined with the earpiece to answer calls automatically: after the electronic device receives an incoming call, the TOF receiving component and the IR camera in the image sensing device 41 that are required for face recognition may be started to collect a depth image and an infrared image respectively, and after face recognition succeeds, the call is connected and the earpiece and other devices are turned on.
Optionally, the electronic device 40 may further include a display screen, which may be used for displaying image content and for touch interaction, for example in an unlocking application based on face recognition. In one embodiment, when the electronic device is in a sleep state and a user picks it up, an inertial measurement unit in the electronic device detects the acceleration caused by the pick-up, the screen lights up, an unlocking application is started, and an unlock prompt appears on the screen; at this point, the electronic device turns on the TOF receiving component and the IR camera in the image sensing device 41 to collect a depth image and an infrared image for face recognition.
Taking the electronic device 40 as a mobile phone as an example, the image sensing device 41 may be disposed at the top of the front surface of the mobile phone (the side on which the screen is located); in other embodiments, the image sensing device 41 may also be installed at another position convenient for the user to perform face recognition, which is not limited in this application.
Optionally, the electronic device 40 may further include a storage module for storing the two-dimensional image or the depth image of the authorized user; it may also store computer program instructions for performing the functions executed by the processing unit and the control unit in the foregoing embodiments.
The processing unit may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The processor may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in a memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
The storage module described above may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The specific examples in the embodiments of the present application are only for helping those skilled in the art to better understand the embodiments of the present application, and do not limit the scope of the embodiments of the present application, and those skilled in the art may make various modifications and variations on the embodiments described above, and those modifications and variations fall within the scope of the present application.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. An image sensing apparatus, comprising: an infrared imaging module and a TOF module, wherein the infrared imaging module includes:
the infrared transmitting component is used for transmitting infrared floodlight and projecting the infrared floodlight to an object to be detected; and
the infrared camera is used for receiving the infrared floodlight returned after the infrared floodlight irradiates the object to be detected, so as to obtain a two-dimensional image of the object to be detected;
the TOF module includes:
the TOF emission component is used for emitting infrared light pulses and projecting the infrared light pulses to an object to be detected; and
and the TOF receiving component is used for receiving the infrared light pulse that returns from the object to be detected after being transmitted by the TOF transmitting component, so as to acquire the depth information of the object to be detected.
2. The image sensing device according to claim 1, wherein the TOF module is an I-TOF module, and the TOF receiving component obtains the depth information of the object to be detected according to a phase difference between the received infrared light pulse and the infrared light pulse emitted by the TOF emitting component; or the TOF module is a D-TOF module, and the D-TOF module obtains the depth information of the object to be detected according to a time difference between the received infrared light pulse and the infrared light pulse emitted by the TOF emitting component.
3. The image sensing device according to claim 1, further comprising:
and the processing unit is used for determining whether the object to be detected is an authorized user or not according to the two-dimensional image of the object to be detected and the depth information of the object to be detected.
4. The image sensing device of claim 1, wherein the infrared emitting assembly is an infrared flood lamp configured to emit infrared flood light having a uniform intensity distribution.
5. The image sensing device of claim 4, wherein the infrared imaging module further comprises a light source driving component for driving the infrared emitting component to emit the infrared floodlight, so that the infrared camera acquires the two-dimensional image of the object to be detected according to the infrared floodlight.
6. The image sensing device of claim 1, wherein the TOF module further comprises a driving component for driving the TOF transmitting component to transmit the infrared light pulse, so that the TOF receiving component obtains the depth information of the object to be detected according to the infrared light pulse.
7. The image sensing device of claim 6, wherein the drive assembly is disposed in the TOF receive assembly.
8. The image sensing device of claim 1, wherein the TOF transmitting assembly includes a light source for emitting infrared light pulses and a light homogenizing element for receiving the infrared light pulses from the light source and projecting the infrared light pulses uniformly outward.
9. The image sensing device of claim 8, wherein the light homogenizing element is any one of a diffuser, a DOE, or a microlens array, and the light source is an array light source.
10. An electronic device, comprising:
the image sensing device according to any one of claims 1 to 9.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202020785445.6U | 2020-05-12 | 2020-05-12 | Image sensing device and electronic apparatus |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN212484401U | 2021-02-05 |
Family ID: 74460374
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202020785445.6U | Image sensing device and electronic apparatus | 2020-05-12 | 2020-05-12 |
Legal Events

| Code | Title |
| --- | --- |
| GR01 | Patent grant |