CN107633204B - Face occlusion detection method, apparatus and storage medium - Google Patents
- Publication number: CN107633204B (application CN201710707944.6A)
- Authority
- CN
- China
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Abstract
The present invention discloses a face occlusion detection method, the method comprising: acquiring a real-time image captured by a camera, and extracting a real-time facial image from the real-time image; inputting the real-time facial image into a facial average model, and identifying t facial feature points in the real-time facial image; determining an eye region and a lip region according to the position information of the t facial feature points, inputting the eye region and the lip region into a pre-trained eye classification model of the human face and a pre-trained lip classification model of the human face, judging the authenticity of the eye region and the lip region, and judging, according to the judgment results, whether the face in the real-time image is occluded. The present invention can quickly judge whether the face in a facial image is occluded. The invention also discloses an electronic device and a computer-readable storage medium.
Description
Technical field
The present invention relates to the field of computer vision processing, and more particularly to a face occlusion detection method, apparatus, and computer-readable storage medium.
Background art
Face recognition is a biometric technology that performs identity authentication based on a person's facial feature information. An image or video stream containing a face is acquired, the face is detected and tracked in the image, and the detected face is then matched and identified. At present, face recognition is applied very widely and plays an important role in fields such as financial payment, access control and attendance, and identity verification, bringing convenience to people's lives. However, it is essential to ensure that the face is not occluded, so whether the face in an image is occluded needs to be detected before face recognition is performed.
Products commonly used in the industry judge face occlusion by means of deep-learning training. However, this judgment method demands a large number of samples, and predicting occlusion with deep learning is computationally expensive and relatively slow.
Summary of the invention
The present invention provides a face occlusion detection method, apparatus, and computer-readable storage medium, whose main purpose is to quickly detect face occlusion in a real-time facial image.
To achieve the above object, the present invention provides an electronic device comprising a memory, a processor, and a camera. The memory includes a face occlusion detection program which, when executed by the processor, implements the following steps:
Image acquisition step: acquiring a real-time image captured by the camera, and extracting a real-time facial image from the real-time image using a face recognition algorithm;
Feature point recognition step: inputting the real-time facial image into a pre-trained facial average model, and identifying t facial feature points in the real-time facial image using the facial average model; and
Feature region judgment step: determining an eye region and a lip region according to the position information of the t facial feature points, inputting the eye region and the lip region into a pre-trained eye classification model of the face and a pre-trained lip classification model of the face, judging the authenticity of the eye region and the lip region, and judging, according to the judgment results, whether the face in the real-time image is occluded.
Optionally, when the face occlusion detection program is executed by the processor, the following step is also implemented:
Judgment step: judging whether the judgment results of the eye classification model of the face and the lip classification model of the face for the eye region and the lip region are true.
Optionally, when the face occlusion detection program is executed by the processor, the following steps are also implemented:
when the judgment results of the eye classification model of the face and the lip classification model of the face for the eye region and the lip region are both true, judging that the face in the real-time facial image is not occluded; and
when the judgment results of the eye classification model of the face and the lip classification model of the face for the eye region and the lip region include untrue, prompting that the face in the real-time facial image is occluded.
In addition, to achieve the above object, the present invention also provides a face occlusion detection method, the method comprising:
Image acquisition step: acquiring a real-time image captured by the camera, and extracting a real-time facial image from the real-time image using a face recognition algorithm;
Feature point recognition step: inputting the real-time facial image into a pre-trained facial average model, and identifying t facial feature points in the real-time facial image using the facial average model; and
Feature region judgment step: determining an eye region and a lip region according to the position information of the t facial feature points, inputting the eye region and the lip region into a pre-trained eye classification model of the face and a pre-trained lip classification model of the face, judging the authenticity of the eye region and the lip region, and judging, according to the judgment results, whether the face in the real-time image is occluded.
Optionally, the method further comprises:
Judgment step: judging whether the judgment results of the eye classification model of the face and the lip classification model of the face for the eye region and the lip region are true.
Optionally, the method further comprises:
when the judgment results of the eye classification model of the face and the lip classification model of the face for the eye region and the lip region are both true, judging that the face in the real-time facial image is not occluded; and
when the judgment results of the eye classification model of the face and the lip classification model of the face for the eye region and the lip region include untrue, prompting that the face in the real-time facial image is occluded.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium. The computer-readable storage medium includes a face occlusion detection program which, when executed by a processor, implements any of the steps of the face occlusion detection method described above.
According to the face occlusion detection method, electronic device, and computer-readable storage medium proposed by the present invention, the real-time facial image is input into the facial average model to identify the facial feature points in the real-time facial image; the eye classification model of the face and the lip classification model of the face are used to judge the authenticity of the eye region and the lip region determined by the facial feature points; and whether the face in the real-time facial image is occluded is judged according to the authenticity of the eye region and the lip region.
Brief description of the drawings
Fig. 1 is a schematic diagram of the application environment of a preferred embodiment of the face occlusion detection method of the present invention;
Fig. 2 is a functional block diagram of the face occlusion detection program in Fig. 1;
Fig. 3 is a flowchart of a preferred embodiment of the face occlusion detection method of the present invention.
The realization of the objects, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The present invention provides a face occlusion detection method applied to an electronic device 1. Referring to Fig. 1, it is a schematic diagram of the application environment of a preferred embodiment of the face occlusion detection method of the present invention.
In this embodiment, the electronic device 1 may be a terminal device with a computing function, such as a server, smart phone, tablet computer, portable computer, or desktop computer.
The electronic device 1 comprises a processor 12, a memory 11, a camera 13, a network interface 14, and a communication bus 15. The camera 13 is installed in a particular place, such as an office or a monitored area, captures real-time images of targets entering the place, and transmits the captured real-time images to the processor 12 through the network. The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The communication bus 15 is used to realize connection and communication between these components.
The memory 11 includes at least one type of readable storage medium. The at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, hard disk, multimedia card, or card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. In other embodiments, the readable storage medium may also be an external memory of the electronic device 1, such as a plug-in hard disk, smart media card (Smart Media Card, SMC), secure digital (Secure Digital, SD) card, or flash card (Flash Card) equipped on the electronic device 1.
In this embodiment, the readable storage medium of the memory 11 is generally used to store the face occlusion detection program 10 installed on the electronic device 1, the facial image sample library, the human eye sample library, the human lip sample library, the constructed and trained facial average model of facial feature points, the eye classification model of the face, the lip classification model of the face, and the like. The memory 11 may also be used to temporarily store data that has been output or will be output.
In some embodiments, the processor 12 may be a central processing unit (Central Processing Unit, CPU), microprocessor, or other data processing chip for running the program code or processing data stored in the memory 11, for example executing the face occlusion detection program 10.
Fig. 1 shows only the electronic device 1 with components 11-15, but it should be understood that not all of the illustrated components are required to be implemented; more or fewer components may be implemented instead.
Optionally, the electronic device 1 may also include a user interface. The user interface may include an input unit such as a keyboard (Keyboard), a speech input device such as a microphone or other equipment with a speech recognition function, and a speech output device such as a speaker or earphone. Optionally, the user interface may also include a standard wired interface and a wireless interface.
Optionally, the electronic device 1 may also include a display, which may also be called a display screen or display unit. In some embodiments, it may be an LED display, a liquid crystal display, a touch-control liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display is used to show the information processed in the electronic device 1 and to display a visual user interface.
Optionally, the electronic device 1 also includes a touch sensor. The region provided by the touch sensor for the user's touch operation is called the touch area. The touch sensor described here may be a resistive touch sensor, a capacitive touch sensor, or the like. Moreover, the touch sensor includes not only a contact-type touch sensor but possibly also a proximity-type touch sensor. In addition, the touch sensor may be a single sensor or multiple sensors arranged, for example, in an array.
The area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor. Optionally, the display is stacked with the touch sensor to form a touch display screen, and the device detects the touch operation triggered by the user based on the touch display screen.
Optionally, the electronic device 1 may also include an RF (Radio Frequency) circuit, a sensor, an audio circuit, and the like, which will not be described in detail here.
In the apparatus embodiment shown in Fig. 1, the memory 11, as a kind of computer storage medium, may include an operating system and the face occlusion detection program 10. When the processor 12 executes the face occlusion detection program 10 stored in the memory 11, the following steps are implemented:
The real-time image captured by the camera 13 is acquired. The processor 12 extracts a real-time facial image from the real-time image using a face recognition algorithm, calls the facial average model, the eye classification model of the face, and the lip classification model of the face from the memory 11, inputs the real-time facial image into the facial average model to identify the facial feature points in the real-time facial image, inputs the eye region and lip region determined by the facial feature points into the eye classification model and the lip classification model of the face, and judges whether the face in the real-time facial image is occluded by judging the authenticity of the eye region and the lip region.
In other embodiments, the face occlusion detection program 10 may also be divided into one or more modules, which are stored in the memory 11 and executed by the processor 12 to complete the present invention. A module referred to in the present invention is a series of computer program instruction segments capable of completing a specific function.
Referring to Fig. 2, it is a functional block diagram of the face occlusion detection program 10 in Fig. 1.
The face occlusion detection program 10 may be divided into an acquisition module 110, an identification module 120, a judgment module 130, and a prompt module 140.
The acquisition module 110 is used to acquire the real-time image captured by the camera 13 and to extract a real-time facial image from the real-time image using a face recognition algorithm. When the camera 13 captures a real-time image, it sends the real-time image to the processor 12; after the processor 12 receives the real-time image, the acquisition module 110 extracts the real-time facial image using the face recognition algorithm.
Specifically, the face recognition algorithm for extracting the real-time facial image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
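Of the algorithms listed above, the eigenface method is the simplest to illustrate. The following is a minimal sketch (not the patent's implementation) of its core idea, using random low-rank vectors as stand-ins for aligned grayscale face crops: the principal components of the training set span a "face subspace", and an image is judged face-like when its reconstruction error against that subspace is small.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" images: 64-dim vectors drawn from a low-rank distribution,
# standing in for flattened, aligned grayscale face crops.
basis = rng.normal(size=(3, 64))
faces = rng.normal(size=(100, 3)) @ basis + 0.05 * rng.normal(size=(100, 64))

mean_face = faces.mean(axis=0)
# Principal components ("eigenfaces") of the centered training set.
_, _, vt = np.linalg.svd(faces - mean_face, full_matrices=False)
eigenfaces = vt[:3]                      # keep the top 3 components

def reconstruction_error(img):
    """Distance between an image and its projection onto the face subspace."""
    centered = img - mean_face
    coeffs = eigenfaces @ centered
    return np.linalg.norm(centered - eigenfaces.T @ coeffs)

face_err = reconstruction_error(faces[0])
noise_err = reconstruction_error(rng.normal(size=64) * 10)
# A face-like image lies close to the subspace; noise does not.
print(face_err < noise_err)  # → True
```

In a real detector this error would be computed for sliding windows over the real-time image, with the lowest-error window taken as the facial image.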
The identification module 120 is used to input the real-time facial image into a pre-trained facial average model, and to identify t facial feature points in the real-time facial image using the facial average model. Assume t = 34: among the 34 facial feature points of the facial average model, there are 12 eye-socket feature points, 2 eyeball feature points, and 20 lip feature points. After the acquisition module 110 extracts the real-time facial image, the identification module 120 calls the trained facial average model of facial feature points from the memory 11, aligns the real-time facial image with the facial average model, and then uses a feature extraction algorithm to search the real-time facial image for the 12 eye-socket feature points, 2 eyeball feature points, and 20 lip feature points that match the 34 facial feature points of the facial average model. The facial average model of facial feature points is constructed and trained in advance; a specific embodiment will be explained in the face occlusion detection method below.
In this embodiment, the feature extraction algorithm is the SIFT (scale-invariant feature transform) algorithm. The SIFT algorithm extracts the local feature of each facial feature point from the facial average model, for example of the 12 eye-socket feature points, 2 eyeball feature points, and 20 lip feature points; it selects an eye feature point or a lip feature point as a reference feature point and searches the real-time facial image for feature points whose local features are the same as or similar to the local feature of the reference feature point, for example by checking whether the difference between the local features of two feature points is within a preset range. If so, the feature point's local feature is the same as or similar to that of the reference feature point, and it is taken as an eye feature point or a lip feature point. This principle is applied until all facial feature points are found in the real-time facial image. In other embodiments, the feature extraction algorithm may also be the SURF (Speeded Up Robust Features) algorithm, the LBP (Local Binary Patterns) algorithm, the HOG (Histogram of Oriented Gradients) algorithm, or the like.
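The matching rule described above (accept a candidate when its local descriptor differs from the reference descriptor by less than a preset range) can be sketched as follows. This is an illustrative toy, with 8-dimensional vectors standing in for real SIFT descriptors; the function name and threshold are assumptions, not from the patent.

```python
import numpy as np

def match_feature_points(model_descs, image_descs, max_dist=0.5):
    """For each model landmark descriptor, find the image feature point whose
    local descriptor differs from it by less than max_dist (the preset range)."""
    matches = {}
    for i, ref in enumerate(model_descs):
        dists = np.linalg.norm(image_descs - ref, axis=1)
        best = int(np.argmin(dists))
        if dists[best] < max_dist:       # difference within the preset range
            matches[i] = best
    return matches

# Toy 8-dim "SIFT-like" descriptors for 3 model landmarks...
model = np.array([[1.0] * 8, [2.0] * 8, [3.0] * 8])
# ...and image candidates: noisy copies plus one unrelated point.
image = np.vstack([model + 0.01, [[9.0] * 8]])
print(match_feature_points(model, image))  # → {0: 0, 1: 1, 2: 2}
```

The unrelated fourth candidate is never matched because its distance to every reference descriptor exceeds the preset range.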
The judgment module 130 is used to determine an eye region and a lip region according to the position information of the t facial feature points, to input the eye region and the lip region into the pre-trained eye classification model of the face and lip classification model of the face, to judge the authenticity of the eye region and the lip region, and to judge, according to the judgment results, whether the face in the real-time image is occluded. After the identification module 120 recognizes the 12 eye-socket feature points, 2 eyeball feature points, and 20 lip feature points in the real-time facial image, an eye region can be determined from the 12 eye-socket feature points and 2 eyeball feature points, and a lip region can be determined from the 20 lip feature points. The determined eye region and lip region are then input into the trained eye classification model of the face and lip classification model of the face, and the authenticity of the determined eye region and lip region is judged according to the results given by the models; that is, the results output by the models may both be false, may both be true, or may include both true and false. When the results output by the eye classification model and lip classification model of the face include false, it means that the eye region and the lip region are not the eye region of a person and the lip region of a person; when the results output by the eye classification model and lip classification model of the face are both true, it means that the eye region and the lip region are the eye region of a person and the lip region of a person. The eye classification model of the face and the lip classification model of the face are constructed and trained in advance; a specific embodiment will be explained in the face occlusion detection method below.
Specifically, the judgment module 130 is also used to judge whether the judgment results of the eye classification model of the face and the lip classification model of the face for the eye region and the lip region are true, that is, whether the results output by the eye classification model and the lip classification model of the face include only true.
The judgment module 130 is also used to judge that the face in the real-time facial image is not occluded when the judgment results of the eye classification model of the face and the lip classification model of the face for the eye region and the lip region are both true. That is, when the eye region and lip region determined from the facial feature points are the eye region of a person and the lip region of a person, it is considered that the face in the real-time facial image is not occluded.
The prompt module 140 is used to prompt that the face in the real-time facial image is occluded when the judgment results of the eye classification model of the face and the lip classification model of the face for the eye region and the lip region include untrue. When, among the eye region and lip region determined from the facial feature points, any one region is not the eye region of a person or the lip region of a person, it is considered that the face in the real-time facial image is occluded, and the prompt module 140 prompts that the face in the real-time facial image is occluded.
Further, when the result output by the eye classification model of the face is false, it is considered that the eye region in the image is occluded; when the result output by the lip classification model of the face is false, it is considered that the lip region in the image is occluded, and a corresponding prompt is made.
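The decision logic of the judgment and prompt modules reduces to a small truth-table over the two classifier outputs. A minimal sketch (function and prompt strings are illustrative, not from the patent):

```python
# Map the eye and lip classifiers' true/false outputs to an occlusion
# verdict and per-region prompts, as described above.
def occlusion_verdict(eye_is_real: bool, lip_is_real: bool):
    prompts = []
    if not eye_is_real:
        prompts.append("eye region occluded")
    if not lip_is_real:
        prompts.append("lip region occluded")
    occluded = len(prompts) > 0          # any false result means occlusion
    return occluded, prompts

print(occlusion_verdict(True, True))    # → (False, [])
print(occlusion_verdict(True, False))   # → (True, ['lip region occluded'])
```

Only when both models report true is the face judged unoccluded; any false result triggers the corresponding region prompt.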
In other embodiments, if subsequent face recognition is to be performed after detecting whether the face is occluded, then when the face in the real-time facial image is occluded, the prompt module 140 is also used to prompt that the face in the current facial image is occluded, and the acquisition module reacquires the real-time image captured by the camera and carries out the subsequent steps.
The electronic device 1 proposed in this embodiment extracts a real-time facial image from a real-time image, identifies the facial feature points in the real-time facial image using the facial average model, analyzes the eye region and lip region determined from the facial feature points using the eye classification model of the face and the lip classification model of the face, and, according to the authenticity of the eye region and the lip region, quickly judges whether the face in the current image is occluded.
In addition, the present invention also provides a face occlusion detection method. Referring to Fig. 3, it is a flowchart of the first embodiment of the face occlusion detection method of the present invention. The method may be executed by an apparatus, and the apparatus may be implemented by software and/or hardware.
In this embodiment, the face occlusion detection method comprises:
Step S10: acquiring a real-time image captured by the camera, and extracting a real-time facial image from the real-time image using a face recognition algorithm. When the camera captures a real-time image, it sends the real-time image to the processor; after the processor receives the real-time image, the real-time facial image is extracted using the face recognition algorithm.
Specifically, the face recognition algorithm for extracting the real-time facial image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, or the like.
Step S20: inputting the real-time facial image into a pre-trained facial average model, and identifying t facial feature points in the real-time facial image using the facial average model.
A first sample library containing n facial images is established, and t facial feature points are marked in each facial image. The t facial feature points include t1 eye-socket feature points and t2 eyeball feature points representing eye positions, and t3 lip feature points representing the lip position. In each facial image of the first sample library, the t1 eye-socket feature points, t2 eyeball feature points, and t3 lip feature points are marked by hand; the (t1 + t2 + t3) feature points in each facial image form a shape feature vector S, so that n shape feature vectors S of the face are obtained.
A face feature recognition model is trained with the t facial feature points to obtain the facial average model. The face feature recognition model is the Ensemble of Regression Trees (ERT) algorithm. The ERT algorithm is formulated as follows:
Ŝ^(t+1) = Ŝ^(t) + τt(I, Ŝ^(t))
where t indicates the cascade serial number and τt(·, ·) indicates the regressor of the current stage. Each regressor is composed of many regression trees, and the purpose of training is to obtain these regression trees.
Ŝ^(t) is the shape estimate of the current model. Each regressor τt(·, ·) predicts an increment τt(I, Ŝ^(t)) from the input image I and Ŝ^(t); this increment is added to the current shape estimate to improve the current model, and each stage of regressor makes its prediction according to the feature points. The training data set is (I1, S1), ..., (In, Sn), where I is an input sample image and S is the shape feature vector formed by the feature points in the sample image.
During model training, the number of facial images in the first sample library is n. Assume that each sample picture has 34 feature points and that the shape feature vector is S = (x1, y1, x2, y2, ..., x34, y34), where x1-x12 are the coordinates of the eye-socket feature points, x13-x14 the coordinates of the eyeball feature points, and x15-x34 the coordinates of the lip feature points. A part of the feature points of all sample pictures (for example, 25 feature points taken at random from the 34 feature points of each sample picture) is taken to form the feature vectors S used to train the first regression tree; the residual between the predicted value of the first regression tree and the true value of the partial feature points (the weighted average of the 25 feature points taken from each sample picture) is used to train the second tree, and so on, until the residual between the predicted value of the Nth tree and the true value of the partial feature points is close to 0. All the regression trees of the ERT algorithm are thus obtained, the facial average model (mean shape) is obtained from these regression trees, and the model file and the sample library are saved to the memory. Because the samples used to train the model are marked with 12 eye-socket feature points, 2 eyeball feature points, and 20 lip feature points, the trained facial average model of the face can be used to identify the 12 eye-socket feature points, 2 eyeball feature points, and 20 lip feature points in a facial image.
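The residual-fitting cascade described above can be sketched in a few lines. This toy replaces each stage's regression trees with a shrunken step toward the stage residual, which keeps the sketch tiny while preserving the update rule Ŝ^(t+1) = Ŝ^(t) + τt(I, Ŝ^(t)); the data, learning rate, and stage count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 toy training shapes, each with 34 points -> 68 coordinates.
true_shapes = rng.normal(size=(50, 68))
# Every estimate starts from the mean shape, as in ERT.
estimate = np.tile(true_shapes.mean(axis=0), (50, 1))

learning_rate = 0.5
for stage in range(20):
    residual = true_shapes - estimate    # what this stage must explain
    increment = learning_rate * residual # stand-in for the stage regressor's output
    estimate = estimate + increment      # cascade update: S_hat += tau_t(I, S_hat)

final_error = np.abs(true_shapes - estimate).mean()
print(final_error < 1e-3)  # → True
```

Each stage shrinks the remaining residual by the learning-rate factor, so after 20 stages the estimates match the training shapes almost exactly; real ERT predicts the increment from pixel intensities so that it generalizes to unseen faces.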
After the real-time facial image is obtained, the trained facial average model is called from the memory, the real-time facial image is aligned with the facial average model, and a feature extraction algorithm is used to search the real-time facial image for the 12 eye-socket feature points, 2 eyeball feature points, and 20 lip feature points that match those of the facial average model. The 20 lip feature points are evenly distributed on the lips.
Step S30: determining an eye region and a lip region according to the position information of the t facial feature points, inputting the eye region and the lip region into the pre-trained eye classification model of the face and lip classification model of the face, judging the authenticity of the eye region and the lip region, and judging, according to the judgment results, whether the face in the real-time image is occluded.
A first quantity of human-eye positive sample images and a second quantity of human-eye negative sample images are collected, and the local feature of each human-eye positive sample image and human-eye negative sample image is extracted. A human-eye positive sample image is an eye sample containing a human eye; the eye part may be cropped from the facial image sample library as the eye sample. A human-eye negative sample image is an image with an incomplete eye region. The multiple human-eye positive sample images and negative sample images form a second sample library.
A third quantity of lip positive sample images and a fourth quantity of lip negative sample images are collected, and the local feature of each lip positive sample image and lip negative sample image is extracted. A lip positive sample image is an image containing human lips; the lip part may be cropped from the facial image sample library as a lip positive sample image. A lip negative sample image is an image in which the lip region of a person is incomplete or in which the lips are not human lips (for example, animal lips). The multiple lip positive sample images and negative sample images form a third sample library.
Specifically, the local feature is a Histogram of Oriented Gradients (HOG) feature, extracted from the human-eye and lip sample images by a feature-extraction algorithm. Because color information contributes little, the sample image is usually converted to grayscale and the whole image is normalized. The gradients along the horizontal and vertical axes of the image are computed, and from them the gradient orientation at each pixel position, capturing contour, silhouette, and some texture information while further weakening the influence of illumination. The whole image is then divided into cells (for example, 8*8 pixels each), and a gradient-orientation histogram is constructed for each cell to count and quantize the local image gradient information, yielding a feature descriptor for each local image region. The cells are then grouped into larger blocks; because local illumination and foreground-background contrast vary, the range of gradient magnitudes is very wide, so the gradient magnitudes within each block are normalized, further compressing the effects of illumination, shadow, and edges. Finally, the HOG descriptors of all blocks are concatenated to form the final HOG feature vector.
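As an illustrative sketch (not part of the claimed method), the cell/block pipeline described above can be written in plain NumPy. The cell size and block layout mirror the text; the bin count and margin of implementation detail are assumptions, since the patent does not fix them.

```python
import numpy as np

def hog_descriptor(gray, cell=8, bins=9):
    """Minimal HOG sketch following the steps above: gradients along x/y,
    per-cell orientation histograms, and L2 normalization over 2x2-cell blocks."""
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]   # horizontal-axis gradient
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]   # vertical-axis gradient
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned gradient orientation

    ny, nx = gray.shape[0] // cell, gray.shape[1] // cell
    hist = np.zeros((ny, nx, bins))
    for i in range(ny):                         # orientation histogram per cell
        for j in range(nx):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            idx = (a / (180 / bins)).astype(int) % bins
            for b in range(bins):
                hist[i, j, b] = m[idx == b].sum()

    blocks = []                                 # group cells into blocks, normalize
    for i in range(ny - 1):
        for j in range(nx - 1):
            v = hist[i:i+2, j:j+2].ravel()
            blocks.append(v / (np.linalg.norm(v) + 1e-6))
    return np.concatenate(blocks)               # concatenated final HOG vector

gray = np.random.rand(64, 64)   # stand-in for a grayscale eye/lip sample image
vec = hog_descriptor(gray)
print(vec.shape)                # (7*7 blocks) * (2*2 cells * 9 bins) = (1764,)
```

For a 64*64 sample this yields 7*7 overlapping blocks of 36 values each, i.e. a 1764-dimensional descriptor.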
The positive and negative sample images in the above second sample library and third sample library, together with their extracted HOG features, are used to train Support Vector Machine (SVM) classifiers, yielding the eye classification model and the lip classification model of the face.
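For illustration only, a linear SVM of the kind described can be sketched with a minimal hinge-loss subgradient trainer; the toy vectors below stand in for the HOG features of the second and third sample libraries, and a production system would instead use a library SVM implementation. All names and parameters here are assumptions, not the patent's.

```python
import numpy as np

def train_linear_svm(X, y, epochs=200, lr=0.01, lam=0.01):
    """Minimal linear SVM trained by hinge-loss subgradient descent.
    Labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:        # margin violated: hinge gradient step
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # inside margin: only regularization
                w -= lr * lam * w
    return w, b

rng = np.random.default_rng(0)
pos = rng.normal(+1.0, 0.3, size=(40, 5))  # stand-ins for "real eye/lip" HOG vectors
neg = rng.normal(-1.0, 0.3, size=(40, 5))  # stand-ins for negative samples
X = np.vstack([pos, neg])
y = np.array([1] * 40 + [-1] * 40)
w, b = train_linear_svm(X, y)
acc = (np.sign(X @ w + b) == y).mean()
print(acc)  # training accuracy; high for these well-separated clusters
```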
After the 12 orbit feature points, 2 eyeball feature points, and 20 lip feature points are recognized in the real-time face image, an eye region can be determined from the 12 orbit feature points and 2 eyeball feature points, and a lip region from the 20 lip feature points. The determined eye region and lip region are then input into the trained eye classification model and lip classification model of the face, and their authenticity is judged from the model results; that is, the model outputs may both be false, may both be true, or may include one true and one false. When the outputs of both the eye classification model and the lip classification model are false, the eye region and lip region are not a human eye region and a human lip region; when both outputs are true, the eye region and lip region are the eye region and lip region of a person.
Step S40: after the eye classification model and the lip classification model of the face output their results, judge whether the judgment results for the eye region and the lip region are both true, i.e., whether the results contain only "true".
Step S50: when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true, judge that the face in the real-time face image is not occluded. That is, when the eye region and lip region determined from the facial feature points are a human eye region and a human lip region, the face in the real-time face image is considered unoccluded.
Step S60: when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a false result, prompt that the face in the real-time face image is occluded. When any of the regions determined from the facial feature points is not a human eye region or a human lip region, the face in the real-time face image is considered occluded, and a corresponding prompt is issued.
Further, when the eye classification model of the face outputs a false result, the eye region in the image is considered occluded; when the lip classification model of the face outputs a false result, the lip region in the image is considered occluded, and a corresponding prompt is issued.
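The decision logic of steps S40 to S60 can be sketched as follows; the function and message strings are illustrative, not taken from the patent.

```python
def occlusion_verdict(eye_is_real, lip_is_real):
    """Steps S40-S60 in miniature: the face is unoccluded only when both
    region classifiers return true; otherwise report which region is occluded."""
    if eye_is_real and lip_is_real:
        return "face not occluded"
    occluded = []
    if not eye_is_real:
        occluded.append("eye region occluded")
    if not lip_is_real:
        occluded.append("lip region occluded")
    return "; ".join(occluded)

print(occlusion_verdict(True, True))    # face not occluded
print(occlusion_verdict(True, False))   # lip region occluded
print(occlusion_verdict(False, False))  # eye region occluded; lip region occluded
```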
In other embodiments, if subsequent face recognition is to be performed after detecting whether the face is occluded, then when the face in the real-time face image is occluded, step S50 further includes: prompting that the face in the current face image is occluded, having the acquisition module reacquire the real-time image captured by the photographing apparatus, and carrying out the subsequent steps.
The face occlusion detection method proposed in this embodiment uses the facial averaging model of facial feature points to recognize the key facial feature points in the real-time face image, uses the eye classification model and the lip classification model of the face to analyze the eye region and lip region determined from those feature points, and judges from the authenticity of the eye region and lip region whether the face in the current image is occluded, thereby quickly detecting the occlusion state of the face in the real-time face image.
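The cascaded residual training behind the facial averaging model (each stage regressor fitted on the residual left by the previous stages) can be illustrated with a toy cascade. Real ERT stages are ensembles of pixel-difference regression trees; each stage below merely learns a shrunken mean residual, which is an assumption made here to keep the sketch short while still showing the residual being driven toward zero.

```python
import numpy as np

def train_cascade(shapes_true, shape_init, stages=10, lr=0.5):
    """Toy cascaded shape regression: each stage predicts an increment toward
    the true shapes from the current estimate, fitted on the remaining residual
    (here, its shrunken mean), mirroring S^(t+1) = S^(t) + tau_t(I, S^(t))."""
    increments = []
    estimate = np.tile(shape_init, (shapes_true.shape[0], 1))
    for _ in range(stages):
        residual = shapes_true - estimate      # what is left to explain
        delta = lr * residual.mean(axis=0)     # this stage's learned increment
        increments.append(delta)
        estimate = estimate + delta            # apply the increment
    return increments, float(np.abs(shapes_true - estimate).mean())

rng = np.random.default_rng(1)
true_shapes = rng.normal(5.0, 0.1, size=(20, 8))  # 20 samples, 4 (x, y) landmarks
mean_shape = np.zeros(8)                          # crude initial shape estimate
_, final_residual = train_cascade(true_shapes, mean_shape)
print(final_residual)  # small: the cascade has absorbed most of the residual
```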
In addition, an embodiment of the present invention further provides a computer-readable storage medium containing a face occlusion detection program; when the face occlusion detection program is executed by a processor, the following operations are implemented:
an image acquisition step: obtaining a real-time image captured by a photographing apparatus, and extracting a real-time face image from the real-time image using a face recognition algorithm;
a feature point recognition step: inputting the real-time face image into a pre-trained facial averaging model, and recognizing t facial feature points in the real-time face image using the facial averaging model; and
a characteristic area judgment step: determining an eye region and a lip region according to the position information of the t facial feature points, inputting the eye region and the lip region into a pre-trained eye classification model and lip classification model of the face, judging the authenticity of the eye region and the lip region, and judging, according to the judgment results, whether the face in the real-time image is occluded.
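As an illustration of the characteristic area judgment step, a rectangular region can be derived from a group of landmark coordinates; the margin value and the stand-in coordinates below are assumptions for the sketch, not values from the patent (in practice the 12 orbit and 2 eyeball points would be grouped per eye, and the 20 lip points per mouth).

```python
def region_bbox(points, margin=4):
    """Derive a padded bounding rectangle (x_min, y_min, x_max, y_max) from
    landmark (x, y) coordinates, as a sketch of region determination."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

eye_points = [(30, 40), (50, 38), (40, 45), (35, 42)]  # illustrative landmarks
print(region_bbox(eye_points))  # (26, 34, 54, 49)
```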
Optionally, when the face occlusion detection program is executed by the processor, the following operation is also implemented:
a judgment step: judging whether the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true.
Optionally, when the face occlusion detection program is executed by the processor, the following operations are also implemented:
when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true, judging that the face in the real-time face image is not occluded; and
when those judgment results include a false result, prompting that the face in the real-time face image is occluded.
Optionally, the training of the facial averaging model includes:
establishing a first sample library containing n face images, and marking t facial feature points in each face image, the t facial feature points comprising t1 orbit feature points and t2 eyeball feature points representing the eye positions and t3 lip feature points representing the lip position; and
training a facial feature recognition model with the t facial feature points to obtain the facial averaging model.
Optionally, the training of the eye classification model and the lip classification model of the face includes:
collecting a first quantity of human-eye positive sample images and a second quantity of human-eye negative sample images, and extracting the local features of each human-eye positive sample image and human-eye negative sample image;
training a support vector machine (SVM) classifier with the human-eye positive sample images, the human-eye negative sample images, and their local features to obtain the eye classification model of the face;
collecting a third quantity of lip positive sample images and a fourth quantity of lip negative sample images, and extracting the local features of each lip positive sample image and lip negative sample image; and
training an SVM classifier with the lip positive sample images, the lip negative sample images, and their local features to obtain the lip classification model of the face.
The specific embodiments of the computer-readable storage medium of the present invention are substantially the same as the specific embodiments of the face occlusion detection method described above and are not repeated here.
It should be noted that, as used herein, the terms "include", "comprise", and any variants thereof are intended to be non-exclusive, so that a process, apparatus, article, or method comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, apparatus, article, or method. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, apparatus, article, or method that comprises it.
The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merit of the embodiments. Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disc), including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the method described in each embodiment of the present invention.
The above are only preferred embodiments of the present invention and do not limit its patent scope; any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.
Claims (9)
1. An electronic device, characterized in that the device comprises a memory, a processor, and a photographing apparatus, the memory containing a face occlusion detection program which, when executed by the processor, implements the following steps:
an image acquisition step: obtaining a real-time image captured by the photographing apparatus, and extracting a real-time face image from the real-time image using a face recognition algorithm;
a feature point recognition step: inputting the real-time face image into a pre-trained facial averaging model, and recognizing t facial feature points in the real-time face image using the facial averaging model, wherein the training of the facial averaging model comprises:
establishing a first sample library containing n face images, and marking t facial feature points in each face image, the t facial feature points comprising t1 orbit feature points and t2 eyeball feature points representing the eye positions and t3 lip feature points representing the lip position, the t feature points of each face image forming a shape feature vector S, so as to obtain a training data set (I1, S1), ..., (In, Sn), wherein I is a face image and S is the shape feature vector formed by the feature points in the face image; and
training a facial feature recognition model with the training data set to obtain the facial averaging model, the facial feature recognition model being the ERT (ensemble of regression trees) algorithm, whose formula is:

Ŝ^(t+1) = Ŝ^(t) + τ_t(I, Ŝ^(t))

wherein t denotes the cascade stage index, τ_t(·,·) denotes the regressor of the current stage, each regressor being composed of a plurality of regression trees, and Ŝ^(t) is the shape estimate of the current model; each regressor τ_t(·,·) predicts an increment from the input image I and Ŝ^(t);
during model training, the first regression tree is trained from the shape feature vectors S formed by the t facial feature points of the face images in the sample library; the residual between the predicted values of the first regression tree and the true values of the t facial feature points is used to train the second tree; and so on, until the residual between the predicted values of the Nth tree and the true values of the t facial feature points approaches 0, whereupon all regression trees of the ERT algorithm have been obtained and the facial averaging model is obtained from these regression trees; and
a characteristic area judgment step: determining an eye region and a lip region according to the position information of the t facial feature points, inputting the eye region and the lip region into a pre-trained eye classification model and lip classification model of the face, judging the authenticity of the eye region and the lip region, and judging, according to the judgment results, whether the face in the real-time image is occluded.
2. The electronic device according to claim 1, characterized in that, when the face occlusion detection program is executed by the processor, the following step is also implemented:
a judgment step: judging whether the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true.
3. The electronic device according to claim 1 or 2, characterized in that, when the face occlusion detection program is executed by the processor, the following steps are also implemented:
when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true, judging that the face in the real-time face image is not occluded; and
when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a false result, prompting that the face in the real-time face image is occluded.
4. The electronic device according to claim 1, characterized in that the training of the eye classification model and the lip classification model of the face comprises:
collecting a first quantity of human-eye positive sample images and a second quantity of human-eye negative sample images, and extracting the local features of each human-eye positive sample image and human-eye negative sample image;
training a support vector machine (SVM) classifier with the human-eye positive sample images, the human-eye negative sample images, and their local features to obtain the eye classification model of the face;
collecting a third quantity of lip positive sample images and a fourth quantity of lip negative sample images, and extracting the local features of each lip positive sample image and lip negative sample image; and
training an SVM classifier with the lip positive sample images, the lip negative sample images, and their local features to obtain the lip classification model of the face.
5. A face occlusion detection method, characterized in that the method comprises:
an image acquisition step: obtaining a real-time image captured by a photographing apparatus, and extracting a real-time face image from the real-time image using a face recognition algorithm;
a feature point recognition step: inputting the real-time face image into a pre-trained facial averaging model, and recognizing t facial feature points in the real-time face image using the facial averaging model, wherein the training of the facial averaging model comprises:
establishing a first sample library containing n face images, and marking t facial feature points in each face image, the t facial feature points comprising t1 orbit feature points and t2 eyeball feature points representing the eye positions and t3 lip feature points representing the lip position, the t feature points of each face image forming a shape feature vector S, so as to obtain a training data set (I1, S1), ..., (In, Sn), wherein I is a face image and S is the shape feature vector formed by the feature points in the face image; and
training a facial feature recognition model with the training data set to obtain the facial averaging model, the facial feature recognition model being the ERT (ensemble of regression trees) algorithm, whose formula is:

Ŝ^(t+1) = Ŝ^(t) + τ_t(I, Ŝ^(t))

wherein t denotes the cascade stage index, τ_t(·,·) denotes the regressor of the current stage, each regressor being composed of a plurality of regression trees, and Ŝ^(t) is the shape estimate of the current model; each regressor τ_t(·,·) predicts an increment from the input image I and Ŝ^(t);
during model training, the first regression tree is trained from the shape feature vectors S formed by the t facial feature points of the face images in the sample library; the residual between the predicted values of the first regression tree and the true values of the t facial feature points is used to train the second tree; and so on, until the residual between the predicted values of the Nth tree and the true values of the t facial feature points approaches 0, whereupon all regression trees of the ERT algorithm have been obtained and the facial averaging model is obtained from these regression trees; and
a characteristic area judgment step: determining an eye region and a lip region according to the position information of the t facial feature points, inputting the eye region and the lip region into a pre-trained eye classification model and lip classification model of the face, judging the authenticity of the eye region and the lip region, and judging, according to the judgment results, whether the face in the real-time image is occluded.
6. The face occlusion detection method according to claim 5, characterized in that the method further comprises:
a judgment step: judging whether the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true.
7. The face occlusion detection method according to claim 5 or 6, characterized in that the method further comprises:
when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true, judging that the face in the real-time face image is not occluded; and
when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a false result, prompting that the face in the real-time face image is occluded.
8. The face occlusion detection method according to claim 5, characterized in that the training of the eye classification model and the lip classification model of the face comprises:
collecting a first quantity of human-eye positive sample images and a second quantity of human-eye negative sample images, and extracting the local features of each human-eye positive sample image and human-eye negative sample image;
training a support vector machine (SVM) classifier with the human-eye positive sample images, the human-eye negative sample images, and their local features to obtain the eye classification model of the face;
collecting a third quantity of lip positive sample images and a fourth quantity of lip negative sample images, and extracting the local features of each lip positive sample image and lip negative sample image; and
training an SVM classifier with the lip positive sample images, the lip negative sample images, and their local features to obtain the lip classification model of the face.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium contains a face occlusion detection program which, when executed by a processor, implements the steps of the face occlusion detection method according to any one of claims 5 to 8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710707944.6A CN107633204B (en) | 2017-08-17 | 2017-08-17 | Face occlusion detection method, apparatus and storage medium |
PCT/CN2017/108751 WO2019033572A1 (en) | 2017-08-17 | 2017-10-31 | Method for detecting whether face is blocked, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107633204A CN107633204A (en) | 2018-01-26 |
CN107633204B true CN107633204B (en) | 2019-01-29 |
Family
ID=61099639
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710707944.6A Active CN107633204B (en) | 2017-08-17 | 2017-08-17 | Face occlusion detection method, apparatus and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107633204B (en) |
WO (1) | WO2019033572A1 (en) |
Families Citing this family (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108664908A (en) * | 2018-04-27 | 2018-10-16 | 深圳爱酷智能科技有限公司 | Face identification method, equipment and computer readable storage medium |
CN110472459B (en) * | 2018-05-11 | 2022-12-27 | 华为技术有限公司 | Method and device for extracting feature points |
CN108551552B (en) * | 2018-05-14 | 2020-09-01 | Oppo广东移动通信有限公司 | Image processing method, device, storage medium and mobile terminal |
CN108763897A (en) * | 2018-05-22 | 2018-11-06 | 平安科技(深圳)有限公司 | Method of calibration, terminal device and the medium of identity legitimacy |
CN108985159A (en) * | 2018-06-08 | 2018-12-11 | 平安科技(深圳)有限公司 | Human-eye model training method, eye recognition method, apparatus, equipment and medium |
CN110119674B (en) * | 2019-03-27 | 2023-05-12 | 深圳数联天下智能科技有限公司 | Method, device, computing equipment and computer storage medium for detecting cheating |
CN110084191B (en) * | 2019-04-26 | 2024-02-23 | 广东工业大学 | Eye shielding detection method and system |
CN111860047B (en) * | 2019-04-26 | 2024-06-11 | 美澳视界(厦门)智能科技有限公司 | Face rapid recognition method based on deep learning |
CN110348331B (en) * | 2019-06-24 | 2022-01-14 | 深圳数联天下智能科技有限公司 | Face recognition method and electronic equipment |
CN112183173B (en) * | 2019-07-05 | 2024-04-09 | 北京字节跳动网络技术有限公司 | Image processing method, device and storage medium |
CN110428399B (en) | 2019-07-05 | 2022-06-14 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device and storage medium for detecting image |
CN110414394B (en) * | 2019-07-16 | 2022-12-13 | 公安部第一研究所 | Facial occlusion face image reconstruction method and model for face occlusion detection |
CN110543823B (en) * | 2019-07-30 | 2024-03-19 | 平安科技(深圳)有限公司 | Pedestrian re-identification method and device based on residual error network and computer equipment |
CN112929638B (en) * | 2019-12-05 | 2023-12-15 | 北京芯海视界三维科技有限公司 | Eye positioning method and device and multi-view naked eye 3D display method and device |
CN111353404B (en) * | 2020-02-24 | 2023-12-01 | 支付宝实验室(新加坡)有限公司 | Face recognition method, device and equipment |
CN111428581B (en) * | 2020-03-05 | 2023-11-21 | 平安科技(深圳)有限公司 | Face shielding detection method and system |
CN113449562A (en) * | 2020-03-26 | 2021-09-28 | 北京沃东天骏信息技术有限公司 | Face pose correction method and device |
CN111414879B (en) * | 2020-03-26 | 2023-06-09 | 抖音视界有限公司 | Face shielding degree identification method and device, electronic equipment and readable storage medium |
CN111489373B (en) * | 2020-04-07 | 2023-05-05 | 北京工业大学 | Occlusion object segmentation method based on deep learning |
CN111461047A (en) * | 2020-04-10 | 2020-07-28 | 北京爱笔科技有限公司 | Identity recognition method, device, equipment and computer storage medium |
CN111486961B (en) * | 2020-04-15 | 2023-05-09 | 贵州安防工程技术研究中心有限公司 | Efficient forehead temperature estimation method based on wide-spectrum human forehead imaging and distance sensing |
CN111444887A (en) * | 2020-04-30 | 2020-07-24 | 北京每日优鲜电子商务有限公司 | Mask wearing detection method and device, storage medium and electronic equipment |
CN111598021B (en) * | 2020-05-19 | 2021-05-28 | 北京嘀嘀无限科技发展有限公司 | Wearing detection method and device for face shield, electronic equipment and storage medium |
CN111598018A (en) * | 2020-05-19 | 2020-08-28 | 北京嘀嘀无限科技发展有限公司 | Wearing detection method, device, equipment and storage medium for face shield |
CN111626193A (en) * | 2020-05-26 | 2020-09-04 | 北京嘀嘀无限科技发展有限公司 | Face recognition method, face recognition device and readable storage medium |
CN111626240B (en) * | 2020-05-29 | 2023-04-07 | 歌尔科技有限公司 | Face image recognition method, device and equipment and readable storage medium |
CN111639596B (en) * | 2020-05-29 | 2023-04-28 | 上海锘科智能科技有限公司 | Glasses-shielding-resistant face recognition method based on attention mechanism and residual error network |
CN111814571B (en) * | 2020-06-12 | 2024-07-12 | 深圳禾思众成科技有限公司 | Mask face recognition method and system based on background filtering |
CN111881740B (en) * | 2020-06-19 | 2024-03-22 | 杭州魔点科技有限公司 | Face recognition method, device, electronic equipment and medium |
CN113963394A (en) * | 2020-07-03 | 2022-01-21 | 北京君正集成电路股份有限公司 | Face recognition method under lower half shielding condition |
CN113963393A (en) * | 2020-07-03 | 2022-01-21 | 北京君正集成电路股份有限公司 | Face recognition method under condition of wearing sunglasses |
CN112052730B (en) * | 2020-07-30 | 2024-03-29 | 广州市标准化研究院 | 3D dynamic portrait identification monitoring equipment and method |
CN114078270B (en) * | 2020-08-19 | 2024-09-06 | 上海新氦类脑智能科技有限公司 | Face identity verification method, device, equipment and medium based on shielding environment |
CN112016464B (en) * | 2020-08-28 | 2024-04-12 | 中移(杭州)信息技术有限公司 | Method and device for detecting face shielding, electronic equipment and storage medium |
CN112116525B (en) * | 2020-09-24 | 2023-08-04 | 百度在线网络技术(北京)有限公司 | Face recognition method, device, equipment and computer readable storage medium |
CN112597886A (en) * | 2020-12-22 | 2021-04-02 | 成都商汤科技有限公司 | Ride fare evasion detection method and device, electronic equipment and storage medium |
CN112633183B (en) * | 2020-12-25 | 2023-11-14 | 平安银行股份有限公司 | Automatic detection method and device for image shielding area and storage medium |
CN112418190B (en) * | 2021-01-21 | 2021-04-02 | 成都点泽智能科技有限公司 | Mobile terminal medical protective shielding face recognition method, device, system and server |
CN112766214A (en) * | 2021-01-29 | 2021-05-07 | 北京字跳网络技术有限公司 | Face image processing method, device, equipment and storage medium |
CN112949418A (en) * | 2021-02-05 | 2021-06-11 | 深圳市优必选科技股份有限公司 | Method and device for determining speaking object, electronic equipment and storage medium |
CN112966654B (en) * | 2021-03-29 | 2023-12-19 | 深圳市优必选科技股份有限公司 | Lip movement detection method, lip movement detection device, terminal equipment and computer readable storage medium |
CN113111817B (en) * | 2021-04-21 | 2023-06-27 | 中山大学 | Semantic segmentation face integrity measurement method, system, equipment and storage medium |
CN113449696B (en) * | 2021-08-27 | 2021-12-07 | 北京市商汤科技开发有限公司 | Attitude estimation method and device, computer equipment and storage medium |
CN113762136A (en) * | 2021-09-02 | 2021-12-07 | 北京格灵深瞳信息技术股份有限公司 | Face image occlusion judgment method and device, electronic equipment and storage medium |
CN114399813B (en) * | 2021-12-21 | 2023-09-26 | 马上消费金融股份有限公司 | Face shielding detection method, model training method, device and electronic equipment |
CN114462495B (en) * | 2021-12-30 | 2023-04-07 | 浙江大华技术股份有限公司 | Training method of face shielding detection model and related device |
CN117275075B (en) * | 2023-11-01 | 2024-02-13 | 浙江同花顺智能科技有限公司 | Face shielding detection method, system, device and storage medium |
CN117282038B (en) * | 2023-11-22 | 2024-02-13 | 杭州般意科技有限公司 | Light source adjusting method and device for eye phototherapy device, terminal and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102542246A (en) * | 2011-03-29 | 2012-07-04 | 广州市浩云安防科技股份有限公司 | Abnormal face detection method for ATM (Automatic Teller Machine) |
CN104463172A (en) * | 2014-12-09 | 2015-03-25 | 中国科学院重庆绿色智能技术研究院 | Face feature extraction method based on face feature point shape drive depth model |
CN105654049A (en) * | 2015-12-29 | 2016-06-08 | 中国科学院深圳先进技术研究院 | Facial expression recognition method and device |
CN105868689A (en) * | 2016-02-16 | 2016-08-17 | 杭州景联文科技有限公司 | Cascaded convolutional neural network based human face occlusion detection method |
CN106056079A (en) * | 2016-05-31 | 2016-10-26 | 中国科学院自动化研究所 | Image acquisition device and facial feature occlusion detection method |
CN106295566A (en) * | 2016-08-10 | 2017-01-04 | 北京小米移动软件有限公司 | Facial expression recognizing method and device |
CN106485215A (en) * | 2016-09-29 | 2017-03-08 | 西交利物浦大学 | Face occlusion detection method based on depth convolutional neural networks |
CN106910176A (en) * | 2017-03-02 | 2017-06-30 | 中科视拓(北京)科技有限公司 | A kind of facial image based on deep learning removes occlusion method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102306304B (en) * | 2011-03-25 | 2017-02-08 | 上海星尘电子科技有限公司 | Face occluder identification method and device |
CN102270308B (en) * | 2011-07-21 | 2013-09-11 | 武汉大学 | Facial feature location method based on five sense organs related AAM (Active Appearance Model) |
Non-Patent Citations (2)
Title |
---|
One millisecond face alignment with an ensemble of regression trees; V. Kazemi et al.; Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition; Dec. 31, 2014; pp. 1867-1874 |
Facial feature point detection based on multi-task feature selection and adaptive model; Xie Zhengnan; China Master's Theses Full-text Database, Information Science and Technology; Feb. 15, 2017; pp. I138-2636 |
Also Published As
Publication number | Publication date |
---|---|
CN107633204A (en) | 2018-01-26 |
WO2019033572A1 (en) | 2019-02-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107633204B (en) | Face occlusion detection method, apparatus and storage medium | |
CN107679448B (en) | Eyeball action-analysing method, device and storage medium | |
CN107633207B (en) | AU characteristic recognition methods, device and storage medium | |
CN109961009B (en) | Pedestrian detection method, system, device and storage medium based on deep learning | |
US10635946B2 (en) | Eyeglass positioning method, apparatus and storage medium | |
Maglogiannis et al. | Face detection and recognition of natural human emotion using Markov random fields | |
CN107633206B (en) | Eyeball motion capture method, device and storage medium | |
CN107679447A (en) | Facial feature point detection method, device and storage medium |
CN112052186B (en) | Target detection method, device, equipment and storage medium | |
CN108399665A (en) | Method for safety monitoring, device based on recognition of face and storage medium | |
US10489636B2 (en) | Lip movement capturing method and device, and storage medium | |
CN107633205B (en) | Lip motion analysis method, device and storage medium |
CN103617432A (en) | Method and device for recognizing scenes | |
JP2008146539A (en) | Face authentication device | |
Kalas | Real time face detection and tracking using OpenCV | |
Dantone et al. | Augmented faces | |
Lahiani et al. | Hand pose estimation system based on Viola-Jones algorithm for android devices | |
CN115223239B (en) | Gesture recognition method, gesture recognition system, computer equipment and readable storage medium | |
CN110175500B (en) | Finger vein comparison method, device, computer equipment and storage medium | |
KR100847142B1 (en) | Preprocessing method for face recognition, face recognition method and apparatus using the same | |
Lahiani et al. | Hand gesture recognition system based on local binary pattern approach for mobile devices | |
Alhamazani et al. | [Retracted] Using Depth Cameras for Recognition and Segmentation of Hand Gestures | |
Lahiani et al. | Comparative study between hand pose estimation systems for mobile devices |
CN112487232B (en) | Face retrieval method and related products | |
Kartbayev et al. | Development of a computer system for identity authentication using artificial neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 1246921; Country of ref document: HK |
GR01 | Patent grant | ||