CN108038476B - Facial expression recognition feature extraction method based on edge detection and SIFT - Google Patents
- Publication number: CN108038476B
- Application number: CN201810004825A
- Authority
- CN
- China
- Prior art keywords
- pixel
- object class
- value
- image
- sift
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a facial expression recognition feature extraction method based on edge detection and SIFT, comprising: obtaining an image containing a face; dividing the image into a background class and an object class by edge detection, thereby obtaining the sub-image carrying the face information, i.e. the object-class sub-image; and extracting the feature points in the object-class sub-image to generate SIFT descriptors of the expression information. The invention removes the influence of the background on the important information during feature extraction; the SIFT descriptors obtained from the extracted feature points are highly stable against noise, illumination, partial occlusion, affine transformation and the like; image matching is completed with the extracted SIFT descriptors, while PCA dimensionality reduction shortens the time needed for feature extraction and matching and improves matching efficiency and accuracy. The method is well suited to image identification, face recognition in images and image comparison.
Description
Technical field
The invention belongs to the technical field of facial feature recognition, and in particular relates to a facial expression recognition feature extraction method based on edge detection and SIFT.
Background technique
Expression is the channel through which humans convey thoughts and emotions, and it carries a large amount of valuable information. Facial expression recognition is the technology of automatically identifying expressions from the face; because every face is individually distinctive yet relatively stable, expressions can be recognized from it. Its realization mainly relies on comparing images and information points to find the mapping between two sets of feature points. However, since only small differences exist between facial expressions and feature-point localization is inaccurate, simply transferring traditional face recognition methods to facial expression recognition gives unsatisfactory results and low efficiency; at the same time, facial expression recognition can be applied to fields such as public safety and access control. Realizing a facial expression recognition method is therefore an important research direction.
The purpose of image segmentation is to divide the whole image into image blocks and thereby achieve a preliminary extraction of the target. One important approach is to detect, by edge detection, the places where the gray level or the structure changes abruptly, which mark the boundary of a region. In this way the face can be partitioned effectively, which is one of the key steps for locating details such as the eyebrows, eyes, nose, mouth corners and eye corners.
Summary of the invention
The purpose of the present invention is to provide a facial expression recognition feature extraction method based on edge detection and SIFT.
The technical scheme of the invention is as follows:
A facial expression recognition feature extraction method based on edge detection and SIFT, comprising:
obtaining an image containing a face;
dividing the image into a background class and an object class by edge detection, and obtaining the sub-image carrying the face information, i.e. the object-class sub-image;
extracting the feature points in the object-class sub-image and generating SIFT descriptors of the expression information in the object-class sub-image.
The edge detection comprises:
counting the numbers of background-class and object-class pixels in the image;
computing the object-class gray median and the background-class gray median;
fuzzifying the pixel set of the image;
computing the degree to which the pixel gray values in the object class and in the background class deviate from the corresponding class gray median;
determining the edge of the object-class sub-image at the threshold where the distance between the object class and the background class is minimal, thereby obtaining the object-class sub-image, i.e. the sub-image carrying the face information.
Counting the numbers of background-class and object-class pixels in the image specifically comprises: obtaining from the image the gray histogram reflecting the gray distribution and the gray-level frequencies; dividing all pixels of the image into two classes with a given threshold, the class above the threshold being called the object class and the class below it the background class.
Computing the object-class gray median and the background-class gray median specifically comprises: counting upward from the minimum gray value of the object class and of the background class respectively; the gray value at which the accumulated count reaches half of the total number of pixels in the corresponding class is that class's gray median.
The distance between the object class and the background class is computed by the following distance function of the object class and the background class:

J(th) = sqrt(V_Object) + sqrt(V_BackGround)

where J is the distance function of the object class and the background class; sqrt(V_Object), the square root of the summed degrees by which the object-class gray values deviate from the object-class gray median, represents the distance of the object-class gray values to the object-class gray median; sqrt(V_BackGround), the square root of the summed degrees by which the background-class gray values deviate from the background-class gray median, represents the distance of the background-class gray values to the background-class gray median.
Extracting the feature points in the object-class sub-image and generating the SIFT descriptors of the expression information in the object-class sub-image comprises:
detecting gray extreme points in the scale space built from the pixels of the object-class sub-image and taking the gray extreme points as candidate feature points;
screening the candidate feature points and smoothing the gradient directions of the feature points;
generating a SIFT descriptor of the expression information for each feature point and reducing the dimensionality of the SIFT descriptors.
Detecting gray extreme points in the scale space built from the pixels of the object-class sub-image comprises:
computing the two-dimensional Gaussian kernel of each pixel of the object-class sub-image;
building a scale space for each pixel of the object-class sub-image, all resulting scale spaces forming a pyramid;
determining the gray extreme points in the pyramid as the candidate feature points.
Determining the gray extreme points in the pyramid comprises:
comparing the gray value of a pixel of the middle layer of the pyramid with those of its 8 neighbours in the same layer and of the 9 corresponding pixels in each of the two adjacent layers, 26 neighbours in total; if the gray value of the current pixel is the maximum of these 27 gray values, marking the current pixel as a gray extreme point, recording its position, curvature and scale, and taking the current gray extreme point as a candidate feature point; otherwise discarding it and looking for the gray extreme point at the next pixel of the middle layer.
Screening the candidate feature points comprises:
rejecting candidate feature points whose scale-space response is below a given threshold, then removing the edge-sensitive candidate feature points to obtain the final feature points;
the edge-sensitive candidate feature points comprising: candidate feature points whose Hessian determinant is negative, and candidate feature points whose principal-curvature ratio in the Hessian matrix is not less than a given threshold.
Reducing the dimensionality of the SIFT descriptors comprises:
assembling the SIFT descriptors of the feature points into a SIFT feature-vector matrix;
computing the mean and the covariance matrix of the SIFT feature-vector matrix;
computing the eigenvectors and eigenvalues of the covariance matrix, forming the transform matrix from the eigenvectors corresponding to the k largest eigenvalues, and multiplying the transform matrix with the SIFT descriptors to realize the dimensionality reduction.
Beneficial effects:
The present invention removes the influence of the background on the important information during feature extraction and exploits the scale invariance of SIFT descriptors together with their robustness, to a certain degree, against image rotation, translation, illumination change and affine transformation. The SIFT descriptors obtained from the extracted feature points are highly stable against noise, illumination, partial occlusion, affine transformation and the like; image matching is completed with the extracted SIFT descriptors while PCA reduces their dimensionality, shortening the time needed to extract and match features and improving matching efficiency and accuracy. The method is well suited to image identification, face recognition in images and image comparison, and overcomes the influence of image rotation, facial illumination and the like on face recognition; it can be applied to research fields such as image processing.
Detailed description of the invention
Fig. 1 is the overall flow chart of the method of the specific embodiment of the invention;
Fig. 2 is the edge detection flow chart of the specific embodiment of the invention;
Fig. 3 is the detailed flow chart of step 3 of the specific embodiment of the invention.
Specific embodiment
Specific embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the present embodiment provides a facial expression recognition feature extraction method based on edge detection and SIFT, comprising:
Step 1: acquire a face picture with a digital camera, mobile phone or monitoring device, obtaining an image containing a face.
Step 2: divide the image into a background class and an object class by edge detection, obtaining the sub-image carrying the face information, i.e. the object-class sub-image.
The edge detection process, shown in Fig. 2, is as follows:
Step 2.1: count the numbers of background-class and object-class pixels in the image.
From the face image obtained in step 1, compute the gray histogram reflecting the gray distribution and the gray-level frequencies. Let L be the number of gray levels of the image and His[i] the number of pixels with gray value i. All pixels of the image are divided into two classes by a given threshold th: the class above th is called the object class and the class below th the background class. The object-class pixel count Sum_Object and the background-class pixel count Sum_BackGround are obtained from the gray histogram:

Sum_Object = Σ (i = th+1 … L−1) His[i]
Sum_BackGround = Σ (i = 0 … th) His[i]
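As a minimal sketch of step 2.1 (not part of the patent text; function and variable names are illustrative, and the image is assumed to be an 8-bit grayscale numpy array):

```python
import numpy as np

def class_sums(img: np.ndarray, th: int):
    """Split the gray histogram of an 8-bit image at threshold th."""
    his = np.bincount(img.ravel(), minlength=256)  # His[i] = number of pixels with gray value i
    sum_object = int(his[th + 1:].sum())           # gray value > th  -> object class
    sum_background = int(his[:th + 1].sum())       # gray value <= th -> background class
    return his, sum_object, sum_background
```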
Step 2.2: compute the object-class gray median and the background-class gray median.
Based on the gray histogram reflecting the gray distribution and the gray-level frequencies, count upward from the minimum gray value of the object class and of the background class respectively; the gray value at which the accumulated pixel count reaches half of the total number of pixels in the corresponding class is that class's gray median. Let i_Object be the object-class gray median and i_BackGround the background-class gray median, i.e. the smallest gray values t for which

Σ (i = th+1 … t) His[i] ≥ Sum_Object / 2 and Σ (i = 0 … t) His[i] ≥ Sum_BackGround / 2

hold, respectively.
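A sketch of the median search of step 2.2 under the same assumptions (names are illustrative):

```python
import numpy as np

def class_median(his: np.ndarray, lo: int, hi: int) -> int:
    """Smallest gray value in [lo, hi] at which the cumulative count
    reaches half of the class total (the class gray median)."""
    counts = his[lo:hi + 1]
    cum = np.cumsum(counts)
    return lo + int(np.searchsorted(cum, counts.sum() / 2.0))

# i_Object     = class_median(his, th + 1, 255)
# i_BackGround = class_median(his, 0, th)
```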
Step 2.3: fuzzify the pixel set of the image. The membership function of the fuzzified pixel set expresses the degree to which a pixel belongs to the object class: the larger the membership value, the higher the probability that the pixel belongs to the object class.
The membership function is as follows:

μ_mn = (x_mn − x_min) / (x_max − x_min)

where μ_mn is the degree to which pixel (m, n) of the image belongs to the object class, x_mn is the gray value of pixel (m, n), and x_max, x_min are the maximum and minimum gray values of the pixels in the image.
Step 2.4: compute the degree to which the pixel gray values in the object class and in the background class deviate from the corresponding class gray median.

P_Object(i) = His[i] / Sum_Object
P_BackGround(i) = His[i] / Sum_BackGround
V_Object = Σ (i = th+1 … L−1) P_Object(i) · (μ(i) − μ(i_Object))²
V_BackGround = Σ (i = 0 … th) P_BackGround(i) · (μ(i) − μ(i_BackGround))²

where V_Object is the summed degree by which the gray values of all object-class pixels deviate from the object-class gray median; V_BackGround is the summed degree by which the gray values of all background-class pixels deviate from the background-class gray median; P_Object(i) is the probability that a pixel with gray value i occurs in the object class; P_BackGround(i) is the probability that a pixel with gray value i occurs in the background class; μ(i) is the membership degree of a pixel with gray value i; μ(i_Object) is the membership degree of the pixel at the object-class gray median; and μ(i_BackGround) is the membership degree of the pixel at the background-class gray median.
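The patent's original formula images for V_Object and V_BackGround are not reproduced above, so the following sketch implements one plausible reading of step 2.4 (membership-weighted squared deviation from the class median); the names and the exact weighting are assumptions:

```python
import numpy as np

def deviation_degrees(his: np.ndarray, th: int, i_obj: int, i_bg: int):
    """V_Object and V_BackGround of step 2.4 for an 8-bit histogram."""
    levels = np.arange(256, dtype=float)
    nz = np.nonzero(his)[0]
    mu = (levels - nz.min()) / (nz.max() - nz.min())    # membership of each gray value
    p_obj = his[th + 1:] / max(his[th + 1:].sum(), 1)   # P_Object(i)
    p_bg = his[:th + 1] / max(his[:th + 1].sum(), 1)    # P_BackGround(i)
    v_obj = float(np.sum(p_obj * (mu[th + 1:] - mu[i_obj]) ** 2))
    v_bg = float(np.sum(p_bg * (mu[:th + 1] - mu[i_bg]) ** 2))
    return v_obj, v_bg
```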
Step 2.5: the edge of the object-class sub-image is determined at the threshold where the distance between the object class and the background class is minimal, yielding the object-class sub-image, i.e. the sub-image carrying the face information.
The distance between the object class and the background class can be computed with the following distance function of the object class and the background class:

J(th) = sqrt(V_Object) + sqrt(V_BackGround)

where J is the distance function of the object class and the background class, expressed through the distances of the gray values of all pixels in each class to the corresponding class gray median: sqrt(V_Object) represents the distance of the object-class gray values to the object-class gray median, and sqrt(V_BackGround) the distance of the background-class gray values to the background-class gray median. J reflects the degree of difference between the two classes and takes both the compactness within each class and the separation between the object class and the background class into account: the smaller J is, the better the resulting segmentation. The minimum J(th*) of the distance function of the object class and the background class determines th*, the optimal threshold separating the background-class sub-image from the object-class sub-image.
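Putting steps 2.1 to 2.5 together, a self-contained threshold search could look as follows. This is a sketch, not the patent's reference implementation; since the patent's formula images are missing, J(th) = sqrt(V_Object) + sqrt(V_BackGround) is a reconstruction from the surrounding text:

```python
import numpy as np

def best_threshold(img: np.ndarray) -> int:
    """Return th* minimising J(th) for an 8-bit grayscale image."""
    his = np.bincount(img.ravel(), minlength=256).astype(float)
    nz = np.nonzero(his)[0]
    if nz.min() == nz.max():                    # constant image: nothing to split
        return int(nz.min())
    mu = (np.arange(256) - nz.min()) / float(nz.max() - nz.min())  # memberships

    best_th, best_j = int(nz.min()), np.inf
    for th in range(int(nz.min()), int(nz.max())):
        bg, obj = his[:th + 1], his[th + 1:]
        # class gray medians: first gray value reaching half the class total
        i_bg = int(np.searchsorted(np.cumsum(bg), bg.sum() / 2.0))
        i_obj = th + 1 + int(np.searchsorted(np.cumsum(obj), obj.sum() / 2.0))
        # membership-weighted deviations from the class medians
        v_bg = np.sum(bg / bg.sum() * (mu[:th + 1] - mu[i_bg]) ** 2)
        v_obj = np.sum(obj / obj.sum() * (mu[th + 1:] - mu[i_obj]) ** 2)
        j = np.sqrt(v_obj) + np.sqrt(v_bg)
        if j < best_j:
            best_j, best_th = j, th
    return best_th
```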
Step 3: extract the feature points in the object-class sub-image and generate the SIFT descriptors of the expression information in the object-class sub-image.
The detailed flow of step 3, shown in Fig. 3, is as follows:
Step 3.1: detect gray extreme points in the scale space built from the pixels of the object-class sub-image, taking the gray extreme points as candidate feature points.
Step 3.1.1: compute the two-dimensional Gaussian kernel of each pixel of the object-class sub-image.
Let I(x, y) denote the object-class sub-image, (x, y) any pixel of it, and L(x, y, σ) its scale-space representation, where σ is the standard deviation of the Gaussian kernel and * is the convolution in the x and y directions of the object-class sub-image. The two-dimensional Gaussian kernel G(x, y, σ) is

G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))

L(x, y, σ) = G(x, y, σ) * I(x, y)
Step 3.1.2: build the difference-of-Gaussian (DoG) scale space for the pixels of the object-class sub-image, denoted D(x, y, σ):

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)

where k is a constant factor between adjacent scales.
Executing steps 3.1.1 to 3.1.2 for every pixel of the object-class sub-image, all resulting DoG scale spaces form the DoG pyramid.
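A sketch of steps 3.1.1 to 3.1.2 for one octave (scipy's Gaussian filter stands in for the explicit kernel convolution; sigma, k and the number of scales are assumed values):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(img: np.ndarray, sigma: float = 1.6, k: float = 2 ** (1 / 3), n: int = 5):
    """D(x, y, s_i) = L(x, y, k*s_i) - L(x, y, s_i) for one octave."""
    img = img.astype(float)
    gauss = [gaussian_filter(img, sigma * k ** i) for i in range(n)]  # L at scale k^i * sigma
    return np.stack([gauss[i + 1] - gauss[i] for i in range(n - 1)])  # DoG layers
```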
Step 3.1.3: determine the gray extreme points in the DoG pyramid as candidate feature points. Compare the gray value of a pixel of the middle layer of the DoG pyramid with those of its 8 neighbours in the same layer and of the 9 corresponding pixels in each of the two adjacent layers, 26 neighbours in total. If the gray value of the current pixel is the maximum of these 27 gray values, mark the current pixel as a gray extreme point, record its position, curvature and scale, and take the current gray extreme point as a candidate feature point; otherwise discard it and look for the gray extreme point at the next pixel of the middle layer.
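The 27-value test of step 3.1.3 in code form (a sketch; `dog` is a scale-stacked array as produced above, and the caller is assumed to keep s, y, x away from the array borders):

```python
import numpy as np

def is_gray_extreme(dog: np.ndarray, s: int, y: int, x: int) -> bool:
    """True if dog[s, y, x] is the unique maximum of its 3x3x3 neighbourhood
    (itself, 8 same-layer neighbours and 9 pixels in each adjacent layer)."""
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]   # the 27 values
    peak = cube.max()
    return dog[s, y, x] == peak and np.count_nonzero(cube == peak) == 1
```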
Step 3.2: screen the candidate feature points and smooth the gradient directions of the feature points.
Step 3.2.1: reject candidate feature points whose scale-space response is below a given threshold.
First expand the scale-space function D(x, y, σ) of a candidate feature point as a Taylor series up to the quadratic term:

D(X) = D + (∂D/∂X)ᵀ X + (1/2) Xᵀ (∂²D/∂X²) X

where D is the value at the candidate feature point and X = (x, y, σ)ᵀ is the offset of the candidate feature point, x and y being its horizontal and vertical coordinates.
Then differentiate the Taylor expansion D(X) and find the offset X̂ at which the derivative equals 0:

X̂ = −(∂²D/∂X²)⁻¹ (∂D/∂X)

Apply this formula to each candidate feature point to obtain its offset X̂, and substitute X̂ back into the Taylor expansion D(X):

D(X̂) = D + (1/2) (∂D/∂X)ᵀ X̂

If |D(X̂)| is lower than the given threshold, the candidate feature point is discarded; the given threshold is taken as 0.03 in the present embodiment.
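A sketch of the contrast test of step 3.2.1 (the gradient and Hessian of D are assumed to be supplied, e.g. from finite differences on the DoG stack; names are illustrative):

```python
import numpy as np

def low_contrast(d_val: float, grad: np.ndarray, hess: np.ndarray,
                 thresh: float = 0.03) -> bool:
    """True if the interpolated response |D(X_hat)| falls below the threshold."""
    x_hat = -np.linalg.solve(hess, grad)   # X_hat = -(d2D/dX2)^-1 (dD/dX), 3-vector
    d_hat = d_val + 0.5 * grad @ x_hat     # D(X_hat) = D + 1/2 (dD/dX)^T X_hat
    return abs(d_hat) < thresh
```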
Step 3.2.2: introduce the Hessian matrix to screen the candidate feature points further and remove the edge-sensitive feature points: remove candidate feature points whose Hessian determinant is negative, and remove candidate feature points whose principal-curvature ratio in the Hessian matrix is not less than the given threshold, obtaining the final feature points.
The Hessian matrix is defined as:

H = | Dxx  Dxy |
    | Dxy  Dyy |

where Dxx is the second-order partial derivative of D(x, y, σ) with respect to x, Dxy the first-order mixed partial derivative with respect to x and y, and Dyy the second-order partial derivative with respect to y; together they constitute the Hessian matrix.
The Hessian matrix serves to evaluate the principal curvatures, because the response of an unstable candidate feature point along its principal direction far exceeds that along the vertical direction. Let α = max(λ₁, λ₂, …, λ_k), the eigenvalue corresponding to the principal direction of the Hessian matrix, and β = min(λ₁, λ₂, …, λ_k), the eigenvalue corresponding to the vertical direction, where {λ₁, λ₂, …, λ_k} are the eigenvalues of the Hessian matrix and are proportional to the principal curvatures of D(X). Only the ratio of these two eigenvalues is needed to complete the screening of the candidate feature points.
α + β = Dxx + Dyy = Tr(H)
α·β = Dxx·Dyy − Dxy² = Det(H)

where Tr(H) is the trace of the matrix and Det(H) its determinant. If the value of Det(H) is negative, the candidate feature point is rejected.
Let α = γβ; then

Tr(H)² / Det(H) = (α + β)² / (α·β) = (γ + 1)² / γ

so the test on the principal curvatures of D(X) is independent of α and β individually and depends only on γ. In the present embodiment γ is taken as 10; if

Tr(H)² / Det(H) ≥ (γ + 1)² / γ

the candidate feature point is weeded out.
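The edge-response test of step 3.2.2, written out (a sketch; the second derivatives are assumed precomputed):

```python
def edge_sensitive(dxx: float, dyy: float, dxy: float, gamma: float = 10.0) -> bool:
    """True if the candidate should be rejected: Det(H) < 0, or
    Tr(H)^2 / Det(H) >= (gamma + 1)^2 / gamma."""
    tr = dxx + dyy                # Tr(H) = Dxx + Dyy
    det = dxx * dyy - dxy ** 2    # Det(H) = Dxx*Dyy - Dxy^2
    if det <= 0:
        return True               # negative (or degenerate) determinant: discard
    return tr * tr / det >= (gamma + 1) ** 2 / gamma
```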
Step 3.2.3: choose a Gaussian function to smooth the gradient directions of the feature points.
The gradient modulus m and gradient direction θ are computed with pixel differences:

m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )
θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )

where m(x, y) is the gradient modulus of a candidate feature point, θ(x, y) is its gradient direction, and L is the convolution of the image with the Gaussian kernel.
For each feature point, sample in the neighbourhood centred on it and build the gradient-direction histogram: each neighbourhood pixel of the current feature point contributes, in weighted form, to the gradient-direction bin it falls in (one bin every 10 degrees, 36 bins in total), giving the pixel gradient-direction histogram. The main peak of the pixel gradient-direction histogram is the main gradient direction of the current feature point, and every peak not lower than 80% of the main peak is an auxiliary gradient direction of the current feature point. The weight of a neighbourhood pixel is the product of its gradient modulus m and the Gaussian function.
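A sketch of the orientation histogram of step 3.2.3 (mag and ang are the precomputed gradient modulus and direction, in degrees, over the keypoint neighbourhood; weight is the Gaussian window; all names are illustrative):

```python
import numpy as np

def orientations(mag: np.ndarray, ang: np.ndarray, weight: np.ndarray):
    """Return the main orientation and the auxiliary orientations (in degrees)
    from a 36-bin, 10-degree gradient-direction histogram."""
    bins = (ang // 10).astype(int) % 36
    hist = np.bincount(bins.ravel(), weights=(mag * weight).ravel(), minlength=36)
    main = int(hist.argmax())
    aux = [b * 10 for b in range(36) if b != main and hist[b] >= 0.8 * hist[main]]
    return main * 10, aux
```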
Step 3.3: generate a SIFT descriptor of the expression information for each feature point and reduce the dimensionality of the SIFT descriptors.
Step 3.3.1: generate a SIFT descriptor of the expression information for each feature point: choose the 16×16-pixel region centred on the feature point and bisect it into 4×4 sub-regions; in every sub-region compute the gradient-direction histogram over 8 directions in total (up, down, left, right and the diagonals), obtaining a seed point that accumulates the gradient moduli over the 8 directions. Each feature point is thus described by 4×4 seed points, forming the 128-dimensional SIFT descriptor of the feature point.
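A simplified sketch of the descriptor accumulation of step 3.3.1 (rotation to the main orientation, Gaussian weighting and interpolation are omitted for brevity; inputs are the 16×16 gradient modulus and direction patches):

```python
import numpy as np

def sift_descriptor(mag: np.ndarray, ang: np.ndarray) -> np.ndarray:
    """4x4 sub-regions x 8 direction bins (45 degrees each) = 128 dimensions."""
    assert mag.shape == ang.shape == (16, 16)
    desc = np.zeros((4, 4, 8))
    bins = (ang // 45).astype(int) % 8                 # direction bin of every pixel
    for y in range(16):
        for x in range(16):
            desc[y // 4, x // 4, bins[y, x]] += mag[y, x]  # accumulate gradient modulus
    return desc.ravel()                                # 128-dimensional descriptor
```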
SIFT (Scale-Invariant Feature Transform) copes well with illumination, rotation and the like and performs well in face recognition. However, SIFT extracts many feature points and its descriptors are high-dimensional, which burdens data storage and computation and harms the real-time behaviour of the algorithm; moreover, only a fraction of the matched feature points are correct, and the mismatches lower the recognition efficiency.
Step 3.3.2: reduce the dimensionality of the SIFT descriptors with the PCA algorithm.
Principal Component Analysis (PCA) is a standard dimensionality-reduction method that reduces an n-dimensional feature to k dimensions (n > k). The image is regarded as a random vector whose distribution has a certain regularity; similar features are constructed according to the face, and although the distribution of the random vector is not arbitrary, Principal Component Analysis yields the principal components of the face-image distribution, which are used to describe the face. Its key property is that it gathers the information that would otherwise be fragmented into as few new components as possible, which overcomes well the excessively high dimensionality of SIFT.
Step 3.3.2.1: assemble the 128-dimensional SIFT descriptors of the feature points into the SIFT feature-vector matrix X_n = (x₁, x₂, …, x_n) of dimension n × 128, where x₁, x₂, …, x_n are the n feature vectors.
Step 3.3.2.2: compute the mean x̄ and the covariance matrix C_x of the SIFT feature-vector matrix:

x̄ = (1/n) Σ (i = 1 … n) x_i
C_x = (1/n) Σ (i = 1 … n) (x_i − x̄)(x_i − x̄)ᵀ
Step 3.3.2.3: compute the eigenvectors e_i and eigenvalues λ_i of the covariance matrix C_x, form the transform matrix A from the eigenvectors corresponding to the k largest eigenvalues, and multiply the transform matrix with the SIFT descriptors to realize the dimensionality reduction.
The transform matrix A has dimension k × 128; k is 36 in the present embodiment, so A is a 36 × 128 transform matrix.
y_i = A x_i

where y_i is the 36-dimensional SIFT descriptor and x_i is the 128-dimensional SIFT descriptor.
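The whole of step 3.3.2 in a few lines (a sketch; numpy's eigendecomposition is used, and the 1/n covariance normalisation follows the formula above):

```python
import numpy as np

def pca_reduce(X: np.ndarray, k: int = 36) -> np.ndarray:
    """Project n x 128 SIFT descriptors onto the k leading eigenvectors."""
    Xc = X - X.mean(axis=0)                     # centre on the descriptor mean
    C = Xc.T @ Xc / len(X)                      # 128 x 128 covariance matrix C_x
    vals, vecs = np.linalg.eigh(C)              # eigenvalues in ascending order
    A = vecs[:, np.argsort(vals)[::-1][:k]].T   # k x 128 transform matrix A
    return Xc @ A.T                             # y_i = A x_i  ->  n x k
```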
Claims (4)
1. A facial expression recognition feature extraction method based on edge detection and SIFT, characterized by comprising:
obtaining an image containing a face;
dividing the image into a background class and an object class by edge detection, and obtaining the sub-image carrying the face information, i.e. the object-class sub-image;
extracting the feature points in the object-class sub-image and generating SIFT descriptors of the expression information in the object-class sub-image;
the edge detection comprising:
counting the numbers of background-class and object-class pixels in the image;
computing the object-class gray median and the background-class gray median;
fuzzifying the pixel set of the image, the membership function of the fuzzified pixel set expressing the degree to which a pixel belongs to the object class, a larger membership value meaning a higher probability that the pixel belongs to the object class;
computing the degree to which the pixel gray values in the object class and in the background class deviate from the corresponding class gray median;
determining the edge of the object-class sub-image at the threshold where the distance between the object class and the background class is minimal, thereby obtaining the object-class sub-image, i.e. the sub-image carrying the face information;
the membership function being

μ_mn = (x_mn − x_min) / (x_max − x_min)

where μ_mn is the degree to which pixel (m, n) of the image belongs to the object class, x_mn is the gray value of pixel (m, n), and x_max, x_min are the maximum and minimum gray values of the pixels in the image;
extracting the feature points in the object-class sub-image and generating the SIFT descriptors of the expression information comprising:
detecting gray extreme points in the scale space built from the pixels of the object-class sub-image and taking the gray extreme points as candidate feature points;
screening the candidate feature points and smoothing the gradient directions of the feature points;
generating a SIFT descriptor of the expression information for each feature point and reducing the dimensionality of the SIFT descriptors;
detecting gray extreme points in the scale space built from the pixels of the object-class sub-image comprising:
computing the two-dimensional Gaussian kernel of each pixel of the object-class sub-image;
building a scale space for each pixel of the object-class sub-image, all resulting scale spaces forming a pyramid;
determining the gray extreme points in the pyramid as the candidate feature points;
determining the gray extreme points in the pyramid comprising:
comparing the gray value of a pixel of the middle layer of the pyramid with those of its 8 neighbours in the middle layer and of the 9 corresponding pixels in each of the two adjacent layers, 26 pixels in total; if the gray value of the current pixel is the maximum of these 27 gray values, marking the current pixel as a gray extreme point, recording its position, curvature and scale, and taking the current gray extreme point as a candidate feature point; otherwise discarding it and finding the gray extreme point with the next pixel of the middle layer;
screening the candidate feature points comprising:
rejecting candidate feature points whose scale-space response is below a given threshold, then removing the edge-sensitive candidate feature points to obtain the final feature points;
the edge-sensitive candidate feature points comprising candidate feature points whose Hessian determinant is negative and candidate feature points whose principal-curvature ratio in the Hessian matrix is not less than a given threshold;
reducing the dimensionality of the SIFT descriptors comprising:
assembling the SIFT descriptors of the feature points into a SIFT feature-vector matrix;
computing the mean and the covariance matrix of the SIFT feature-vector matrix;
computing the eigenvectors and eigenvalues of the covariance matrix, forming the transform matrix from the eigenvectors corresponding to the k largest eigenvalues, and multiplying the transform matrix with the SIFT descriptors to realize the dimensionality reduction.
2. The method according to claim 1, characterized in that counting the numbers of background-class and object-class pixels in the image specifically comprises: obtaining from the image the gray histogram reflecting the gray distribution and the gray-level frequencies; dividing all pixels of the image into two classes with a given threshold, the class above the threshold being called the object class and the class below it the background class.
3. The method according to claim 1, characterized in that computing the object-class gray median and the background-class gray median specifically comprises: counting upward from the minimum gray value of the object class and of the background class respectively; the gray value at which the accumulated count reaches half of the total number of pixels in the corresponding class is that class's gray median.
4. The method according to claim 1, characterized in that the distance between the object class and the background class is computed by the following distance function of the object class and the background class:

J(th) = sqrt(V_Object) + sqrt(V_BackGround)

where J is the distance function of the object class and the background class; sqrt(V_Object), the square root of the summed degrees by which the object-class gray values deviate from the object-class gray median, represents the distance of the object-class gray values to the object-class gray median; and sqrt(V_BackGround), the square root of the summed degrees by which the background-class gray values deviate from the background-class gray median, represents the distance of the background-class gray values to the background-class gray median.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810004825.9A CN108038476B (en) | 2018-01-03 | 2018-01-03 | A kind of facial expression recognition feature extracting method based on edge detection and SIFT |
PCT/CN2018/087568 WO2019134327A1 (en) | 2018-01-03 | 2018-05-18 | Facial expression recognition feature extraction method employing edge detection and sift |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108038476A CN108038476A (en) | 2018-05-15 |
CN108038476B true CN108038476B (en) | 2019-10-11 |
Family
ID=62098678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810004825.9A Active CN108038476B (en) | 2018-01-03 | 2018-01-03 | A kind of facial expression recognition feature extracting method based on edge detection and SIFT |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108038476B (en) |
WO (1) | WO2019134327A1 (en) |
Also Published As
Publication number | Publication date |
---|---|
WO2019134327A1 (en) | 2019-07-11 |
CN108038476A (en) | 2018-05-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 20230419 Address after: Room I, 30/F, Feizhou International, No. 899, Lingling Road, Xuhui District, Shanghai, 200030 Patentee after: Fantasy Technology (Shanghai) Co.,Ltd. Address before: No. 195, Chuangxin Road, Hunnan District, Shenyang City, Liaoning Province Patentee before: Northeastern University |