CN103177264B - Image classification method based on visual dictionary global topological expression - Google Patents


Info

Publication number
CN103177264B
CN103177264B CN201310081556.3A
Authority
CN
China
Prior art keywords
visual
image
word
sift
words
Prior art date
Legal status
Active
Application number
CN201310081556.3A
Other languages
Chinese (zh)
Other versions
CN103177264A (en)
Inventor
Kaiqi Huang (黄凯奇)
Tieniu Tan (谭铁牛)
Chong Wang (王冲)
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201310081556.3A priority Critical patent/CN103177264B/en
Publication of CN103177264A publication Critical patent/CN103177264A/en
Application granted granted Critical
Publication of CN103177264B publication Critical patent/CN103177264B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an image classification method based on global topological expression over a visual dictionary, comprising a training process and a recognition process. The method specifically includes the steps of: extracting features from target images with labelled categories, performing global topological coding of the extracted features over the visual dictionary, and training a model on the coding results; extracting features from an image of unknown category, performing global topological coding of the extracted features over the visual dictionary, and feeding the coding result into the trained model to obtain the category of the target image. Because the global topological expression is invariant with respect to the manifold expression of an image, the present invention uses global topological expression based on a visual dictionary to improve the precision of image recognition, and the technique is also of great significance for the understanding of dynamic images. By learning the global topological expression of the visual dictionary, the invention can accurately identify the category of an image, and the technique can be widely applied in fields such as security verification, web search, and digital entertainment.

Description

Image classification method based on visual dictionary global topology expression
Technical Field
The invention relates to the field of pattern recognition and the field of computer vision, in particular to an image classification method based on visual dictionary global topological expression.
Background
In computer vision, image classification is a fundamental research problem. Over the past decades, researchers have studied classification problems starting from the physiological basis of human cognition. An important premise is that a basic element of image classification is the latent manifold expression of images. This manifold expression originates in neural coding: researchers have proposed exploiting the topological relationships between neurons to obtain topological invariances, which play an important role in the manifold expression of images. Visual topology is thus a bridge connecting neural coding and manifold expression, and this mechanism has been widely applied to feature coding and manifold expression in image classification. A great deal of recent research shows that describing visual topology allows local features to be reconstructed more faithfully, which makes the manifold expression of an image smoother and thereby improves image classification precision. It is therefore necessary to study visual topology in image classification.
In recent years, more and more researchers have considered topological structure in image classification, especially in the local feature coding stage. The topology used for feature coding originally derives from topology-representing neural networks, in which researchers exploited the topological relationships between neurons to code the neurons more accurately. Inspired by this topological representation, many researchers have begun encoding local features using the topological relationships of visual words in visual dictionary models. At present, such topological structures mainly consider the relation between two visual words, such as fusing the two closest visual words or modelling the distance and angle between a pair of visual words, and these pairwise topological relations have achieved a certain effect.
However, according to the theory of manifold expression, one limitation of this pairwise topology is that it does not characterize the manifold invariance of the feature space well. It is therefore necessary to consider a higher-order topology that can make the manifold expression smoother, such as a global topology among multiple visual words. A second limitation is that considering such a high-order topology is challenging, because the number of combinations of visual words grows exponentially as the number of visual words increases. The present invention therefore proposes an image classification method based on visual dictionary global topology expression to overcome these two limitations and achieve a better image recognition effect.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides an image classification method based on visual dictionary global topological expression.
The invention provides an image classification method based on visual dictionary global topology expression, which comprises the following steps:
step 1, collecting a plurality of training images, respectively carrying out local sampling on the plurality of training images, and extracting Scale Invariant Feature Transform (SIFT) features from obtained local sampling blocks so as to obtain an SIFT feature set of the training images;
step 2, clustering the obtained SIFT feature set to generate a plurality of cluster centers, and forming a visual dictionary C = [c_1, c_2, ..., c_M] ∈ R^{D×M} with the cluster centers as the visual words, where C denotes a visual dictionary composed of M visual words c_i of dimension D, and R^{D×M} represents the subspace formed by M points in the D-dimensional space;
step 3, carrying out global topological coding on each SIFT feature of each training image;
step 4, performing a maximum aggregation operation on the global topological codes V_1, V_2, ..., V_N of all SIFT features X = [x_1, x_2, ..., x_N] of each training image to generate an image expression F of the training image;
step 5, sending the image expressions of all training images into a classifier for training to generate a training model;
step 6, similar to the step 1, carrying out local sampling on each image to be recognized, and extracting scale-invariant feature transform (SIFT) features from the obtained local sampling blocks to obtain an SIFT feature set of each image to be recognized;
step 7, based on the visual dictionary which is obtained in the step 2 and consists of visual words, carrying out global topological coding on each SIFT feature of each image to be recognized by utilizing the step 3;
step 8, similar to the step 4, performing maximum aggregation operation on the global topological codes of all SIFT features of each image to be identified to generate an image expression of the image to be identified;
and 9, sending the image expression of the image to be recognized obtained in the step 8 into the training model generated in the step 5 for testing, so as to obtain the recognition result of the target category in the image to be recognized.
According to the method, the topological structure of the target image can still be robustly acquired even under complex imaging conditions, enabling robust recognition of the image. In an intelligent visual surveillance system, the technique can be used to identify the category of targets in the monitored scene, so that the system can better recognize the current behavior of a target object.
Drawings
FIG. 1 is a flowchart of an image classification method based on a visual dictionary global topology expression according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Fig. 1 is a flowchart of an image classification method based on a global topological expression of a visual dictionary according to the present invention, and as shown in fig. 1, the image classification method based on a global topological expression of a visual dictionary according to the present invention includes the following steps:
step 1, collecting a plurality of training images and carrying out local sampling on each of them, where each local sampling block may be, for example, 16 × 16 pixels, and extracting Scale Invariant Feature Transform (SIFT) features from the obtained local sampling blocks so as to obtain the SIFT feature set of the training images;
step 2, clustering the obtained SIFT feature set to generate a plurality of cluster centers, and forming a visual dictionary C = [c_1, c_2, ..., c_M] ∈ R^{D×M} with the cluster centers as the visual words, where C denotes a visual dictionary composed of M visual words c_i of dimension D, and R^{D×M} represents the subspace formed by M points in the D-dimensional space;
in this step, the clustering may use a clustering algorithm commonly used in the art, such as a k-means clustering algorithm.
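As a concrete illustration of this step, the dictionary-building stage can be sketched with a minimal k-means loop in Python (the function name, the toy data, and the iteration count are illustrative assumptions; any standard k-means implementation would serve equally well):

```python
import numpy as np

def build_visual_dictionary(features, M, iters=20, seed=0):
    """Cluster an (N, D) SIFT feature set into M visual words (k-means sketch)."""
    rng = np.random.default_rng(seed)
    # initialize centers with M distinct features
    centers = features[rng.choice(len(features), M, replace=False)].copy()
    for _ in range(iters):
        # assign each feature to its nearest center (squared Euclidean distance)
        d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # move each center to the mean of its assigned features
        for k in range(M):
            pts = features[labels == k]
            if len(pts):
                centers[k] = pts.mean(0)
    return centers, labels

rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 8))          # stand-in for SIFT descriptors
C, labels = build_visual_dictionary(feats, M=5)
print(C.shape)  # → (5, 8)
```

The columns (here rows) of the returned array play the role of the visual words c_1, ..., c_M in the dictionary C.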
Step 3, carrying out global topological coding on each SIFT feature of each training image;
the step 3 further comprises three substeps:
step 3.1, calculating the correlation among the visual words in the visual dictionary;
in one embodiment of the present invention, the correlation between visual words is obtained by using a distance and angle based self-increment algorithm, and before introducing the distance and angle based self-increment algorithm, several parameters are defined:
In the visual dictionary C, for a visual word c_i, if c_j and c_i are related, they are defined to form a cone in the feature space:

c_i → c_j = { y | ||y - c_i||_2^2 ≤ μ_i, Δ(c_i, y, c_j) < θ }    (1)

wherein c_i → c_j denotes the cone formed in the feature space between the two visual words c_i and c_j, y is any feature point in the cone, {y} represents this cone, ||y - c_i||_2^2 denotes the squared Euclidean distance between y and c_i, μ_i is the maximum Euclidean distance between c_i and the feature points assigned to c_i by the k-means clustering, and θ is the angle controlling the cone; Δ(c_i, y, c_j) is the angle between the vector y - c_i and the vector c_j - c_i, defined as follows:

Δ(c_i, y, c_j) = arccos( ⟨y - c_i, c_j - c_i⟩ / (||y - c_i||_2 · ||c_j - c_i||_2) )    (2)

wherein ⟨y - c_i, c_j - c_i⟩ represents the inner product between the vectors y - c_i and c_j - c_i, and ||·||_2 represents the two-norm of a vector.
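A minimal numerical sketch of formulas (1) and (2) (the helper names `angle` and `in_cone` and the toy values are illustrative assumptions, not part of the patent):

```python
import numpy as np

def angle(ci, y, cj):
    """Formula (2): angle between the vectors y - c_i and c_j - c_i."""
    u, v = y - ci, cj - ci
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards rounding error

def in_cone(y, ci, cj, mu_i, theta):
    """Formula (1): does the feature point y fall inside the cone c_i -> c_j?"""
    return bool(float(np.sum((y - ci) ** 2)) <= mu_i and angle(ci, y, cj) < theta)

ci, cj = np.array([0.0, 0.0]), np.array([1.0, 0.0])
y = np.array([0.5, 0.1])
print(in_cone(y, ci, cj, mu_i=1.0, theta=0.3))  # → True
```

Here y lies close to c_i (squared distance 0.26 ≤ μ_i) and nearly along the direction of c_j, so it falls inside the cone.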
Then, according to the cone defined by formula (1) and the angle defined by formula (2), the distance-and-angle-based auto-increment algorithm can obtain the correlations between visual words. Its basic idea is that each related visual word occupies an independent cone region. Specifically, the distance-and-angle-based auto-increment algorithm comprises:
first, for a certain visual word ciAll visual words according to it to ciIs sorted from near to far, wherein the closest visual word is taken as the initial relevant visual word ci1
Second, remove the visual word ciOther visual words are processed one by one according to the distance and angle criteriaChecking if a certain visual word cjUpon examination both the distance and angle criteria are met, the visual word is labeled as related visual word cijAnd added to the set SiFinally, the visual word c is obtainediOf related visual words Si
Wherein the distance criterion is defined as:
||c_i - c_{ij}||_2 < τ ||c_i - c_{i1}||_2    (3)
wherein τ is used to control the Euclidean distance from c_i to other visual words, i.e. visual words too far from c_i are not considered;
the angle criterion is defined as:
Δ(c_i, c_{ij}, c_k) > θ,  ∀ c_k ∈ S_i    (4)
wherein S_i is the set of visual words related to c_i, and c_k represents a related visual word already added to the set S_i.
Finally, all the visual words are traversed in this way to obtain the related word set S = [S_1, S_2, ..., S_M] of the visual dictionary C, with each S_i ∈ R^{D×Q_i}, where D represents the dimension of each visual word, S_i is the set of words related to c_i, and Q_i is the number of visual words in S_i.
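Under the assumption of concrete values for τ and θ (which the description leaves as tunable parameters), the auto-increment algorithm above can be sketched as follows; the index-based representation of S is an illustrative choice:

```python
import numpy as np

def angle(ci, y, cj):
    """Formula (2): angle between y - c_i and c_j - c_i."""
    u, v = y - ci, cj - ci
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def related_word_sets(C, tau=1.5, theta=np.pi / 6):
    """Distance-and-angle auto-increment algorithm: S[i] lists the indices
    of the visual words related to c_i (tau and theta are illustrative)."""
    M = len(C)
    S = []
    for i in range(M):
        ci = C[i]
        # sort the other words from near to far (first step of the algorithm)
        order = sorted((j for j in range(M) if j != i),
                       key=lambda j: np.linalg.norm(C[j] - ci))
        Si = [order[0]]                                 # nearest word = c_i1
        r1 = np.linalg.norm(C[order[0]] - ci)
        for j in order[1:]:
            if np.linalg.norm(C[j] - ci) >= tau * r1:   # distance criterion (3)
                continue
            # angle criterion (4): must clear angle theta w.r.t. every c_k in S_i
            if all(angle(ci, C[j], C[k]) > theta for k in Si):
                Si.append(j)
        S.append(Si)
    return S

C = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [3.0, 0.0]])
print(related_word_sets(C)[0])  # → [1, 2]
```

For the first word, the word at (3, 0) is rejected by the distance criterion, while (0, 1) passes both criteria because it occupies a cone well separated from the initial related word at (1, 0).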
Step 3.2, establishing a global topology model for the nearest K visual words of each SIFT feature by utilizing the correlation among the visual words obtained in the step 3.1;
in this step, it is assumed that the SIFT feature set extracted for each training image is set asWherein D is the feature dimension and N is the feature number. Then for each feature xjThe global topology model is defined as:
Δ(C,xj,S)=[Δ(c1,xj,S1),...,Δ(cM,xj,SM)](7)
&ForAll; &Delta; ( c i , x j , S i ) = [ &Delta; ( c i , x j , c i 1 ) , . . . , &Delta; ( c i , x j , c iQ i ) ] &Element; R 1 &times; Q i - - - ( 8 )
wherein, Δ (c)i,xj,Si) Is a vector xj-ciAnd set SiEach of the related visual words cijAnd ciFormed vector cij-ciThe angle therebetween:
&Delta; ( c i , x j , S i ) = [ &Delta; ( c i , x j , c i 1 ) , . . . , &Delta; ( c i , x j , c iQ i ) ] ,
&Delta; ( c i , x j , c ij ) = arccos &lang; x j - x i , c ij - c i &rang; | | x j - c i | | 2 &CenterDot; | | c ij - c i | | 2 , <xj-ci,cij-ci>representing a vector xj-ciSum vector cij-ciInner product between, "· represents a dot product between two vectors, | | · | luminance |2Representing the two-norm of the vector.
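Formulas (7) and (8) amount to concatenating, for each visual word, the angles from the feature to that word's related words. A sketch (the function name and the index-based representation of S are illustrative assumptions):

```python
import numpy as np

def angle(ci, x, cij):
    """Formula (2)/(8): angle between x - c_i and c_ij - c_i."""
    u, v = x - ci, cij - ci
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def topology_model(C, x, S):
    """Δ(C, x, S): for every word c_i, the angles between x - c_i and
    c_ij - c_i over its related words c_ij in S_i, concatenated."""
    return np.concatenate([
        [angle(C[i], x, C[j]) for j in S[i]] for i in range(len(C))
    ])

C = np.array([[0.0, 0.0], [2.0, 0.0]])
S = [[1], [0]]                      # each word's single related word (toy example)
x = np.array([1.0, 1.0])
print(topology_model(C, x, S))      # two angles, both pi/4 in this toy case
```

The resulting vector has length Q_1 + ... + Q_M, matching the layout of the code V_i that the penalty in formula (9) weights element-wise.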
And 3.3, performing global topology coding on each SIFT feature by using the global topology model obtained in the step 3.2.
In this step, each SIFT feature x_i is globally topologically coded by solving:

argmin_{V_i} ||x_i - S V_i^T||_2^2 + [ λ T(V_i) + α ||V_i ∘ Δ(C, x_i, S)||_2^2 ]    (9)

where λ is the penalty coefficient, T(V_i) is a constraint term on V_i, α is the penalty coefficient of the global topology model, "∘" represents the element-wise product between two vectors, and V_i is the coding of x_i on S, defined as follows:

V_i = [(v_{11})_i, ..., (v_{1Q_1})_i, ..., (v_{M1})_i, ..., (v_{MQ_M})_i] ∈ R^{1×Q}, with Q = Q_1 + ... + Q_M,

wherein (v_{k1})_i is the response of the local feature x_i on the first related visual word c_{k1} of the visual word c_k, and so on.
Then formula (9) is solved by optimization, yielding the global topological code V_i of the SIFT feature x_i.
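If the generic constraint term λT(V_i) is dropped (an assumption for illustration; the patent leaves T unspecified at this point), formula (9) reduces to a ridge-style least-squares problem, because the topology penalty α||V_i ∘ Δ||² is a diagonal weighting, and it then has the closed-form solution (SᵀS + α·diag(Δ²)) V_iᵀ = Sᵀ x_i:

```python
import numpy as np

def topo_encode(x, S_mat, delta, alpha=0.1):
    """Sketch of formula (9) with the λT(V_i) term dropped (assumption):
    min_V ||x - S V^T||^2 + alpha * ||V ∘ delta||^2 solved in closed form.
    S_mat: (D, Q) matrix whose columns are the related visual words;
    delta: (Q,) vector of topology angles Δ(C, x, S)."""
    A = S_mat.T @ S_mat + alpha * np.diag(delta ** 2)
    return np.linalg.solve(A, S_mat.T @ x)

# toy check: with an orthonormal "dictionary" and zero angles the code is exact
S_mat = np.eye(2)
x = np.array([1.0, 2.0])
V = topo_encode(x, S_mat, delta=np.zeros(2))
print(V)  # → [1. 2.]
```

Note the effect of the penalty: a large angle Δ for a component shrinks the corresponding response in V, which is exactly the role the global topology model plays in the coding.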
Step 4, performing a maximum aggregation operation on the global topological codes V_1, V_2, ..., V_N of all SIFT features X = [x_1, x_2, ..., x_N] of each training image to generate an image expression F of the training image;
in this step, the maximum aggregation operation is performed according to the following formula to generate the image expression F of the training image:
F = max_column [V_1^T, V_2^T, ..., V_N^T]^T    (12)
wherein max_column means that only the maximum value of each column of the matrix is retained.
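The maximum aggregation of formula (12) is simply a column-wise max over the N feature codes; a short sketch (the function name is illustrative):

```python
import numpy as np

def max_aggregate(codes):
    """Formula (12): keep the maximum of each column of [V_1; ...; V_N]."""
    return codes.max(axis=0)

V = np.array([[0.1, 0.9, 0.0],
              [0.4, 0.2, 0.7]])   # two codes V_1, V_2 as rows
print(max_aggregate(V))  # → [0.4 0.9 0.7]
```

The result F has the same length Q as a single code V_i, regardless of the number of features N, which is what allows images with different feature counts to share one classifier input dimension.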
Step 5, sending the image expressions of all training images into a classifier for training to generate a training model;
in this step, the classifier may use a classifier commonly used in the art, such as a support vector machine.
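The classifier is left open beyond "commonly used in the art, such as a support vector machine"; as one hedged possibility, a minimal Pegasos-style linear SVM over the pooled image expressions could look like the following (function name, hyperparameters, and toy data are all illustrative assumptions):

```python
import numpy as np

def train_linear_svm(F, y, lam=0.01, epochs=50, seed=0):
    """Minimal Pegasos-style linear SVM on image expressions F (N, Q),
    labels y in {-1, +1}; a sketch, not the patent's prescribed classifier."""
    rng = np.random.default_rng(seed)
    N, Q = F.shape
    w, t = np.zeros(Q), 0
    for _ in range(epochs):
        for i in rng.permutation(N):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (w @ F[i]) < 1:
                # hinge-loss subgradient step plus weight decay
                w = (1 - eta * lam) * w + eta * y[i] * F[i]
            else:
                w = (1 - eta * lam) * w
    return w

# separable toy data: class determined by the sign of the first coordinate
F = np.array([[2.0, 0.1], [1.5, -0.3], [-2.0, 0.2], [-1.7, 0.0]])
y = np.array([1, 1, -1, -1])
w = train_linear_svm(F, y)
print(w.shape)  # → (2,)
```

At recognition time (step 9), the category of an image expression f would then be read off from the sign of w·f.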
Step 6, similar to the step 1, carrying out local sampling on each image to be recognized, and extracting scale-invariant feature transform (SIFT) features from the obtained local sampling blocks to obtain an SIFT feature set of each image to be recognized;
step 7, based on the visual dictionary composed of visual words obtained in step 2, performing global topological coding on each SIFT feature of each image to be recognized using step 3 (formulas (7) to (11));
step 8, similar to the step 4, performing maximum aggregation operation on the global topological codes of all SIFT features of each image to be identified by using the image expression defined by the formula (12) to generate the image expression of the image to be identified;
and 9, sending the image expression of the image to be recognized obtained in the step 8 into the training model generated in the step 5 for testing, so as to obtain the recognition result of the target category in the image to be recognized.
From the above, the image classification method based on the global topological expression of the visual dictionary provided by the invention comprises two processes of training and recognition, and then the implementation of the method is described by taking a vehicle detection system in a certain monitoring scene as an example. The vehicle detection system can judge whether a monitored scene contains a vehicle.
A large number of vehicle images (1000) and non-vehicle images (1000) are collected and used to train a vehicle recognition model. The training step S1 is:
step S11: SIFT feature extraction is performed on the 1000 vehicle images (positive samples) and 1000 non-vehicle images (negative samples), generating 2000 groups of SIFT features; assuming each group contains on average 2000 SIFT features, a total of 4,000,000 (2000 × 2000) SIFT features are extracted;
step S12: k-means clustering is performed on the 4,000,000 SIFT features to generate 1 visual dictionary containing 1000 visual words;
step S13: in an actual application scene, 3 nearest neighbor visual words of each SIFT local feature are taken to carry out global topological coding, and the coding length is 3000 dimensions;
step S14: carrying out maximum aggregation operation on the global topological codes of 2000 SIFT local features of each training image to obtain image expression of the image, wherein the length of the image expression is 3000 dimensions;
step S15: sending the image expressions of all training images into a classifier for training to obtain a training model;
in the identification stage, a camera signal is accessed to a computer through an acquisition card to acquire a test picture, and the specific identification step S2 is as follows:
step S21: inputting a test image, and carrying out SIFT feature extraction operation on the test image to generate 1 group of SIFT features containing 2000 SIFT features.
Step S22: in an actual application scene, 3 nearest neighbor visual words of each SIFT local feature are taken to carry out global topological coding, and the coding length is 3000 dimensions;
step S23: carrying out maximum aggregation operation on the global topological codes of 2000 SIFT local features of each test image to obtain image expression of the image, wherein the length of the image expression is 3000 dimensions;
step S24: and sending the image expression of the image to be recognized obtained in the step S23 to the training model generated in the step S15 for testing, so as to obtain the recognition result of the target class in the image to be recognized.
In summary, the invention provides an effective image classification technique based on visual dictionary global topology expression. The invention is easy to implement, has stable performance, can improve an intelligent monitoring system's comprehension of the monitored scene, and is a key technology for next-generation intelligent video surveillance systems.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. An image classification method based on visual dictionary global topology expression is characterized by comprising the following steps:
step 1, collecting a plurality of training images, respectively carrying out local sampling on the plurality of training images, and extracting Scale Invariant Feature Transform (SIFT) features from obtained local sampling blocks so as to obtain an SIFT feature set of the training images;
step 2, clustering the obtained SIFT feature set to generate a plurality of cluster centers, and forming a visual dictionary C = [c_1, c_2, ..., c_M] ∈ R^{D×M} with the cluster centers as the visual words, where C denotes a visual dictionary composed of M visual words c_i of dimension D, and R^{D×M} represents the subspace formed by M points in the D-dimensional space;
step 3, carrying out global topological coding on each SIFT feature of each training image;
step 4, performing a maximum aggregation operation on the global topological codes V_1, V_2, ..., V_N of all SIFT features X = [x_1, x_2, ..., x_N] of each training image to generate an image expression F of the training image;
step 5, sending the image expressions of all training images into a classifier for training to generate a training model;
step 6, similar to the step 1, carrying out local sampling on each image to be recognized, and extracting scale-invariant feature transform (SIFT) features from the obtained local sampling blocks to obtain an SIFT feature set of each image to be recognized;
step 7, based on the visual dictionary which is obtained in the step 2 and consists of visual words, carrying out global topological coding on each SIFT feature of each image to be recognized by utilizing the step 3;
step 8, similar to the step 4, performing maximum aggregation operation on the global topological codes of all SIFT features of each image to be identified to generate an image expression of the image to be identified;
step 9, sending the image expression of the image to be recognized obtained in the step 8 into the training model generated in the step 5 for testing, so as to obtain a recognition result of the target category in the image to be recognized;
the step 3 further comprises three substeps:
step 3.1, calculating the correlation among the visual words in the visual dictionary;
step 3.2, establishing a global topology model for the nearest K visual words of each SIFT feature by utilizing the correlation among the visual words obtained in the step 3.1;
step 3.3, carrying out global topological coding on each SIFT feature by using the global topological model obtained in the step 3.2;
in the step 3.2, it is assumed that the SIFT feature set extracted from each training image is X = [x_1, x_2, ..., x_N] ∈ R^{D×N}, where D is the feature dimension and N is the number of features; then for each feature x_j the global topology model is expressed as:
Δ(C, x_j, S) = [Δ(c_1, x_j, S_1), ..., Δ(c_M, x_j, S_M)],
∀ Δ(c_i, x_j, S_i) = [Δ(c_i, x_j, c_{i1}), ..., Δ(c_i, x_j, c_{iQ_i})] ∈ R^{1×Q_i},
wherein Δ(c_i, x_j, S_i) collects the angles between the vector x_j - c_i and the vector c_{ij} - c_i formed by each related word c_{ij} in the set S_i and c_i:
Δ(c_i, x_j, c_{ij}) = arccos( ⟨x_j - c_i, c_{ij} - c_i⟩ / (||x_j - c_i||_2 · ||c_{ij} - c_i||_2) ),
where ⟨x_j - c_i, c_{ij} - c_i⟩ represents the inner product between the vectors x_j - c_i and c_{ij} - c_i, and ||·||_2 represents the two-norm of a vector.
2. The method according to claim 1, wherein in the step 2, a k-means clustering algorithm is adopted for clustering.
3. The method of claim 1, wherein the correlations between visual words are derived using a distance and angle based auto-incremental algorithm.
4. The method of claim 3, wherein the distance and angle based auto-increment algorithm comprises:
first, for a certain visual word ciAll visual words according to it to ciIs sorted from near to far, wherein the closest visual word is taken as the initial relevant visual word ci1
Second, remove the visual word ciChecking other visual words one by one according to the distance and angle criteria, if a certain visual word cjUpon examination both the distance and angle criteria are met, the visual word is labeled as related visual word cijAnd added to the set SiFinally, the visual word c is obtainediOf related visual words Si
And finally, traversing all the visual words to obtain a relevant word set S of the visual dictionary C.
5. The method of claim 4, wherein the distance criterion is defined as:
||c_i - c_{ij}||_2 < τ ||c_i - c_{i1}||_2,
wherein τ is used to control the Euclidean distances from c_i to other visual words;
the angle criterion is defined as:
Δ(c_i, c_{ij}, c_k) > θ,  ∀ c_k ∈ S_i,
wherein S_i is the set of visual words related to c_i, and c_k represents a visual word already added to the set S_i.
6. The method of claim 4, wherein the set of related words S of the visual dictionary C is represented as S = [S_1, S_2, ..., S_M], with each S_i ∈ R^{D×Q_i}, where D represents the dimension of each visual word, S_i is the set of visual words related to c_i, and Q_i is the number of visual words in S_i.
7. The method of claim 1, wherein in step 3.3, each SIFT feature x_i is globally topologically coded using the following formula:
argmin_{V_i} ||x_i - S V_i^T||_2^2 + [ λ T(V_i) + α ||V_i ∘ Δ(C, x_i, S)||_2^2 ],
where λ is the penalty coefficient, T(V_i) is a constraint term on V_i, α is the penalty coefficient of the global topology model, "∘" represents the element-wise product between two vectors, and V_i is the global topology coding of x_i on S:
V_i = [(v_{11})_i, ..., (v_{1Q_1})_i, ..., (v_{M1})_i, ..., (v_{MQ_M})_i] ∈ R^{1×Q}, Q = Q_1 + ... + Q_M,
wherein (v_{k1})_i is the response of the SIFT feature x_i on the first related visual word c_{k1} of the visual word c_k, and so on.
8. The method according to claim 1, wherein in step 4, the maximum aggregation operation is performed according to the following formula to generate the image expression F of the training image:
F = max_column [V_1^T, V_2^T, ..., V_N^T]^T,
wherein max_column means that only the maximum value of each column of the matrix is retained.
CN201310081556.3A 2013-03-14 2013-03-14 Image classification method based on visual dictionary global topological expression Active CN103177264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310081556.3A CN103177264B (en) 2013-03-14 2013-03-14 Image classification method based on visual dictionary global topological expression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310081556.3A CN103177264B (en) 2013-03-14 2013-03-14 Image classification method based on visual dictionary global topological expression

Publications (2)

Publication Number Publication Date
CN103177264A CN103177264A (en) 2013-06-26
CN103177264B true CN103177264B (en) 2016-09-14

Family

ID=48637105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310081556.3A Active CN103177264B (en) 2013-03-14 2013-03-14 Image classification method based on visual dictionary global topological expression

Country Status (1)

Country Link
CN (1) CN103177264B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751198B (en) 2013-12-27 2018-04-27 华为技术有限公司 The recognition methods of object in image and device
CN103984959B (en) * 2014-05-26 2017-07-21 中国科学院自动化研究所 A kind of image classification method based on data and task-driven
CN104464079B (en) * 2014-12-29 2016-10-05 北京邮电大学 Multiple Currencies face amount recognition methods based on template characteristic point and topological structure thereof
CN104598898B (en) * 2015-02-13 2018-02-06 合肥工业大学 A kind of Aerial Images system for rapidly identifying and its method for quickly identifying based on multitask topology learning
CN108319907A (en) * 2018-01-26 2018-07-24 腾讯科技(深圳)有限公司 A kind of vehicle identification method, device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101315663A (en) * 2008-06-25 2008-12-03 中国人民解放军国防科学技术大学 Nature scene image classification method based on area dormant semantic characteristic
CN102402621A (en) * 2011-12-27 2012-04-04 浙江大学 Image retrieval method based on image classification
CN102509110A (en) * 2011-10-24 2012-06-20 中国科学院自动化研究所 Method for classifying images by performing pairwise-constraint-based online dictionary reweighting
CN102609732A (en) * 2012-01-31 2012-07-25 中国科学院自动化研究所 Object recognition method based on generalization visual dictionary diagram

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001338294A (en) * 2000-05-24 2001-12-07 Monolith Co Ltd Form analyzer targeting on topology
US8766982B2 (en) * 2010-01-19 2014-07-01 Disney Enterprises, Inc. Vectorization of line drawings using global topology and storing in hybrid form

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101315663A (en) * 2008-06-25 2008-12-03 中国人民解放军国防科学技术大学 Nature scene image classification method based on area dormant semantic characteristic
CN102509110A (en) * 2011-10-24 2012-06-20 中国科学院自动化研究所 Method for classifying images by performing pairwise-constraint-based online dictionary reweighting
CN102402621A (en) * 2011-12-27 2012-04-04 浙江大学 Image retrieval method based on image classification
CN102609732A (en) * 2012-01-31 2012-07-25 中国科学院自动化研究所 Object recognition method based on generalization visual dictionary diagram

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Global topological features of cancer proteins in the human";Pall F. Jonsson .etc;《Bioinformatics》;20060915;第22卷(第18期);全文 *
"基于全局拓扑结构的分级三角剖分图像拼接";曾丹 等;《计算机研究与发展》;20121230;第49卷(第1期);全文 *
C. Bonatti .etc."Topological classification of gradient-like diffeomorphisms on 3-manifolds".《ELSEVIER》.2004,369–391. *

Also Published As

Publication number Publication date
CN103177264A (en) 2013-06-26

Similar Documents

Publication Publication Date Title
Ranjan et al. An all-in-one convolutional neural network for face analysis
CN109063565B (en) Low-resolution face recognition method and device
CN110427867A (en) Human facial expression recognition method and system based on residual error attention mechanism
Chen et al. Fisher vector encoded deep convolutional features for unconstrained face verification
Bristow et al. Why do linear SVMs trained on HOG features perform so well?
CN103177264B (en) Image classification method based on visual dictionary global topological expression
CN110598603A (en) Face recognition model acquisition method, device, equipment and medium
CN104915643A (en) Deep-learning-based pedestrian re-identification method
CN105868711B (en) Sparse low-rank-based human behavior identification method
CN103020658B (en) Recognition method for objects in two-dimensional images
CN104239856A (en) Face recognition method based on Gabor characteristics and self-adaptive linear regression
CN105160303A (en) Fingerprint identification method based on mixed matching
CN103226713A (en) Multi-view behavior recognition method
CN103198299A (en) Face recognition method based on combination of multi-direction dimensions and Gabor phase projection characteristics
CN115830637B (en) Method for re-identifying blocked pedestrians based on attitude estimation and background suppression
CN113378675A (en) Face recognition method for simultaneous detection and feature extraction
CN105160290A (en) Mobile boundary sampling behavior identification method based on improved dense locus
CN110874576A (en) Pedestrian re-identification method based on canonical correlation analysis fusion features
Li et al. Exploiting inductive bias in transformer for point cloud classification and segmentation
Li et al. Adversarial domain adaptation via category transfer
CN114863572A (en) Myoelectric gesture recognition method of multi-channel heterogeneous sensor
CN102609732A (en) Object recognition method based on generalization visual dictionary diagram
Wang et al. Sparse representation of local spatial-temporal features with dimensionality reduction for motion recognition
Li et al. Action recognition with spatio-temporal augmented descriptor and fusion method
CN101482917B (en) Human face recognition system and method based on second-order two-dimension principal component analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant