CN107423294A - Community image retrieval method and system - Google Patents
Community image retrieval method and system
- Publication number
- CN107423294A
- Authority
- CN
- China
- Prior art keywords
- image
- community
- search method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/951—Indexing; Web crawling techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5838—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Library & Information Science (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention belongs to the field of community image retrieval and proposes a community image retrieval method comprising: Step 1, extracting image local features; Step 2, extracting image global context features; Step 3, training visual words to obtain a local-feature visual dictionary and a global-context-feature visual dictionary; Step 4, quantizing image features; Step 5, generating inverted index files; Step 6, image retrieval. The invention also proposes a community image retrieval system comprising an image local feature extraction module, an image global feature extraction module, a visual dictionary acquisition module, an image feature quantization module, an inverted index file module, and an image retrieval module. The method and system improve the precision and recall of image retrieval.
Description
Technical field
The present invention relates to the technical field of Internet community image retrieval, and in particular to a community image retrieval method and system.
Background technology
With the continuous development of Internet technology, and in particular the rapid growth of Web 2.0, multimedia information resources of all kinds have increased sharply, and quickly and effectively finding the resources one needs in this vast ocean of information has become ever more difficult. In recent years, social media sites such as Facebook and Yahoo and the image-sharing site Flickr have been producing very large volumes of image data every day. How to effectively process and exploit these massive image collections has become a research hotspot in computer vision, and quickly and accurately retrieving, from massive image data, the images that satisfy a user's retrieval intent is an important research topic in information retrieval.
The abundance of multimedia information has greatly promoted the development of large-scale information retrieval systems. Current image retrieval systems fall roughly into two classes. The first is text-based image retrieval (TBIR), which relies on keywords, e.g., a Tianjin University master's thesis on an image retrieval algorithm and system based on dual keywords. The second is content-based image retrieval (CBIR). Keyword-based retrieval relies solely on annotations supplied by network users; since most users have received no specialized training in image annotation and are influenced by their cultural background and other personal factors, the annotations often correlate poorly with the image content, and the relevance and importance of the annotations cannot be reflected by the existing annotation-based ranking. CBIR, in turn, mainly describes an image by either its local information or its global information: a global description cannot reflect the distribution of the image's local features, and a local description cannot reflect the global information of the image.
The content of the invention
The technical problem to be solved by the invention is the above technical deficiency. The present invention proposes a community image retrieval method comprising the following steps:
Step 1: extracting image local features;
Step 2: extracting image global context features;
Step 3: training visual words to obtain a local-feature visual dictionary and a global-context-feature visual dictionary;
Step 4: quantizing image features;
Step 5: generating inverted index files;
Step 6: image retrieval.
Preferably, in Step 1 the image local features are extracted using the SIFT feature extraction method.
In any of the above embodiments, preferably, the extraction of image global context features in Step 2 comprises:
(1) image edge detection;
(2) global context region selection;
(3) global context shape feature description.
In any of the above embodiments, preferably, the image edge detection in step (1) detects the edge information of the image using the Canny edge detection algorithm.
In any of the above embodiments, preferably, the global context region selection in step (2) is as follows: taking the feature point X = (x, y)^T detected in step (1) as reference, its down-sampled coordinate is X̂ = (factor·x, factor·y)^T. With X̂ as the center, a circle of radius r = k·σ is drawn and taken as the context region of X, where σ is the scale of the feature point and k controls the radius. Taking the orientation of the feature point as reference, the circle of radius k·σ is divided into 60 regions: the angular direction is divided into 12 equal sectors, each of size π/6, and the radial direction into 5 rings, with the feature point as center and the radius increasing in steps of r/5.
In any of the above embodiments, preferably, the global context shape feature description in step (3) counts the number of edge points falling in each of the regions divided in step (2).
In any of the above embodiments, preferably, the local feature quantization in the image feature quantization of Step 4 is:

min_i ||Local_j − Wl_i||²,  s.t. Wl_i ∈ Wl

where Local_j is the j-th local feature of the image.
In any of the above embodiments, preferably, the global feature quantization in the image feature quantization of Step 4 is:

min_i ||Global_j − Wg_i||²,  s.t. Wg_i ∈ Wg

where Global_j is the j-th global feature of the image.
In any of the above embodiments, preferably, the importance of a visual word in the inverted index files generated in Step 5 is:

IDF_Wl_i = log(N / ||LWordList_i||_0),  IDF_Wg_i = log(N / ||GWordList_i||_0)

where N is the number of images, ||LWordList_i||_0 is the number of entries in the local-feature index list LWordList_i, and ||GWordList_i||_0 is the number of entries in the global-feature index list GWordList_i.
In any of the above embodiments, preferably, the image retrieval in Step 6 computes the similarity between the query image and the images in the image library.
In any of the above embodiments, preferably, the similarity between the query image and an image in the image library is:

Score = λ·scoreGlobal_ji + (1 − λ)·scoreLocal_ji,  λ ∈ (0, 1]

where λ is the weight of the global features, scoreLocal is the local-feature similarity, and scoreGlobal is the global-context-feature similarity.
The invention also proposes a community image retrieval system comprising an image local feature extraction module, an image global feature extraction module, a visual dictionary acquisition module, an image feature quantization module, an inverted index file module, and an image retrieval module. The image local feature extraction module extracts the local features of an image; the image global feature extraction module extracts the image global context features; the visual dictionary acquisition module trains visual words to obtain a local-feature visual dictionary and a global-context-feature visual dictionary; the image feature quantization module finds the visual word most similar to each local feature and each global context feature; the inverted index file module describes the image information; and the image retrieval module computes the similarity between the query image and the images in the image library.
In any of the above embodiments, preferably, the image local feature extraction module extracts image local features using the SIFT feature extraction method.
In any of the above embodiments, preferably, the image global feature extraction module obtains the image global context features through image edge detection, global context region selection, and global context shape feature description.
In any of the above embodiments, preferably, the similarity between the query image and an image in the image library in the image retrieval module is:

Score = λ·scoreGlobal_ji + (1 − λ)·scoreLocal_ji,  λ ∈ (0, 1]

where λ is the weight of the global features, scoreLocal is the local-feature similarity, and scoreGlobal is the global-context-feature similarity.
Compared with existing image retrieval methods, the present invention has the following clear advantages:
(1) Relative to traditional keyword-based retrieval, retrieval based on visual content starts from the semantic information contained in the image itself and can describe the image more accurately.
(2) The invention fuses multiple features: it considers not only the local information of the image but also its global context shape information. The global features compensate for the inability of local features to reflect the global information of the image, so the invention improves both the precision and the recall of image retrieval.
(3) Although image features are described with multiple visual bag-of-words models, the inverted index files ensure that retrieval speed is not affected.
Brief description of the drawings
Fig. 1 is a flow chart of the community image retrieval method of the present invention according to an embodiment.
Fig. 2 is a schematic diagram of the extracted image local feature points in the community image retrieval method of the present invention.
Fig. 3 is a schematic diagram of the image global context information in the community image retrieval method of the present invention.
Fig. 4 is a diagram of the inverted index file structure in the community image retrieval method of the present invention.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings. It must be pointed out that the following detailed embodiments serve only to further illustrate the invention and must not be interpreted as limiting its scope; a person skilled in the art may make non-essential modifications and adaptations to the invention in light of the foregoing.
The flow of the method of the invention is shown in Fig. 1 and comprises the following steps:
Step 1, image local feature extraction.
Using the existing SIFT (Scale-Invariant Feature Transform) feature extraction method, a scale space is constructed and extreme points are detected to obtain scale invariance; the feature points are then filtered and accurately located to obtain the precise coordinates and scale of each extreme point; a principal orientation is assigned to each feature point; and a 128-dimensional feature descriptor is generated, yielding the local features of the image.
Assume the N images in the image library are denoted I = {I_1, I_2, ..., I_N}. For each image I_i, the SIFT extraction algorithm yields its local information Local_i = {Local_1, Local_2, ..., Local_{d_i}}, where d_i is the number of feature points in the i-th image. Each feature vector Local_j contains the coordinates, scale, angle, and feature descriptor of a feature point, 132 dimensions in total, as shown in Fig. 2.
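The 132-dimensional layout above (four metadata values plus the 128-dimensional descriptor) can be sketched as follows. This is an illustrative helper, not part of the patent; the function name `pack_local_feature` is hypothetical.

```python
def pack_local_feature(x, y, scale, angle, descriptor):
    """Pack one SIFT keypoint into the 132-dim vector described above:
    coordinates, scale, angle, then the 128-dim descriptor (4 + 128 = 132)."""
    if len(descriptor) != 128:
        raise ValueError("SIFT descriptor must be 128-dimensional")
    return [float(x), float(y), float(scale), float(angle)] + [float(v) for v in descriptor]

feature = pack_local_feature(10.5, 20.0, 1.6, 0.3, [0.0] * 128)
# feature[:4] holds the keypoint metadata; the rest is the descriptor
```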
Step 2, image global context feature extraction.
Step 2.1, image edge detection
The edge information of the image is detected with the Canny edge detection algorithm. To improve the efficiency of the subsequent global context shape feature computation, the resulting edge image is down-sampled to factor times its original size; the down-sampling factor can also be determined according to the size of the image.
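A minimal sketch of the down-sampling step, assuming the edge image is represented as a list of edge-point coordinates: coordinates are scaled by the factor (truncated to integers) and duplicates merged. The function name is illustrative, not from the patent.

```python
def downsample_edges(edge_points, factor):
    """Scale edge-point coordinates by `factor` (truncating to integer
    pixel positions) and merge points that collapse onto the same pixel."""
    seen = set()
    out = []
    for (x, y) in edge_points:
        p = (int(x * factor), int(y * factor))
        if p not in seen:
            seen.add(p)
            out.append(p)
    return out
```

With factor 0.5, four diagonal edge points collapse to two distinct pixels, which is exactly the reduction that speeds up the later histogram counting.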
Step 2.2, global context region selection
Taking the feature point X = (x, y)^T detected in Step 1 as reference, its down-sampled coordinate is X̂ = (factor·x, factor·y)^T. With X̂ as the center, a circle of radius r = k·σ is drawn and taken as the context region of X, where σ is the scale of the feature point and k controls the radius. Taking the orientation of the feature point as reference, the circle of radius k·σ is divided into 60 regions. As shown in Fig. 3, the angular direction is divided into 12 equal sectors, each of size π/6, and the radial direction into 5 rings, with the feature point as center and the radius increasing in steps of r/5: the radius span of the first ring is (0, r/5], that of the second ring is (r/5, 2r/5], and so on, up to (4r/5, r] for the last ring.
Step 2.3, global context shape feature description
In each of the regions divided in Step 2.2, the number of edge points is counted as the global context shape feature of the feature point. Suppose an edge point has coordinates X' = (x', y')^T. Its angular bin index can be computed as

index_angle = ⌈ ((arctan2(y' − y, x' − x) − θ) mod 2π) / (π/6) ⌉

where θ is the orientation of the feature point. The radial bin index is

index_d = ⌈ 5·||X' − X||₂ / r ⌉

where r is the radius selected in Step 2.2. Counting the edge points in each region yields the histogram H_{angle,d}, which is normalized to H = H_{angle,d} / ||H_{angle,d}||; its dimension is 60. The normalized feature H is the global context feature of the feature point X = (x, y)^T, so for each local feature Local_j a corresponding global context feature Global_j can be extracted, Global_i = {Global_1, Global_2, ..., Global_{d_i}}.
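The 60-region binning and histogram construction of Steps 2.2 and 2.3 can be sketched in pure Python as follows. This is a minimal illustration under stated assumptions (a feature is a tuple `(x, y, scale, orientation)`, edge points are coordinate pairs, and angle bins use 0-based floor indexing); the function name is hypothetical.

```python
import math

def context_histogram(feature, edge_points, k=6):
    """Normalized 60-bin global-context histogram for one feature point:
    12 angular sectors (relative to the keypoint orientation, pi/6 each)
    times 5 radial rings of width r/5, with r = k * scale."""
    x, y, scale, theta = feature[:4]
    r = k * scale
    hist = [0] * 60
    for (ex, ey) in edge_points:
        dx, dy = ex - x, ey - y
        d = math.hypot(dx, dy)
        if d == 0 or d > r:
            continue  # the point itself, or outside the context circle
        ang = (math.atan2(dy, dx) - theta) % (2 * math.pi)
        a_idx = min(int(ang / (math.pi / 6)), 11)       # 12 sectors of pi/6
        d_idx = min(math.ceil(5 * d / r) - 1, 4)        # 5 rings of width r/5
        hist[12 * d_idx + a_idx] += 1
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

Edge points outside the radius r are ignored, and the final L1 normalization makes histograms of features with different edge densities comparable.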
Step 3, training visual words to obtain the local-feature visual dictionary and the global-context-feature visual dictionary.
The extracted local features and global context shape information are described with two visual bag-of-words models, one dictionary each. To improve the generality of the visual words, training is performed on local features and global context shape information extracted from the independent Flickr60k image set. The visual words could be trained with the k-means clustering method, but the time consumed by traditional k-means grows rapidly with the number of feature points; therefore the visual words are trained with the FLANN algorithm, yielding two visual dictionaries: the local-feature visual dictionary Wl = {Wl_1, Wl_2, Wl_3, ..., Wl_{k_local}} and the global-context-feature visual dictionary Wg = {Wg_1, Wg_2, Wg_3, ..., Wg_{k_global}}, where k_local is the size of the local-feature dictionary and k_global is the number of global-context-feature visual words. Each element of a dictionary represents a visual vocabulary item, i.e., a cluster center of feature points.
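The FLANN-accelerated training itself is not reproduced here; as an illustration of what the dictionary training computes, the following is a minimal Lloyd's k-means in pure Python. It is a sketch under the assumption that features are equal-length numeric tuples; for realistic dictionary sizes one would use an approximate-nearest-neighbor library such as FLANN, as the patent states.

```python
import random

def train_visual_dictionary(features, k, iters=10, seed=0):
    """Minimal Lloyd's k-means as a stand-in for the FLANN-accelerated
    training described above; returns k cluster centers (visual words)."""
    rng = random.Random(seed)
    centers = [list(f) for f in rng.sample(features, k)]
    for _ in range(iters):
        # assign each feature to its nearest current center
        buckets = [[] for _ in range(k)]
        for f in features:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centers[c])))
            buckets[i].append(f)
        # move each center to the mean of its assigned features
        for c, bucket in enumerate(buckets):
            if bucket:
                centers[c] = [sum(vals) / len(bucket) for vals in zip(*bucket)]
    return centers
```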
Step 4, image feature quantization and encoding.
In this step the two visual dictionaries obtained in Step 3 are used to encode the local features Local_i and the global context features Global_i of the images in the retrieval image library I, i.e., to find the visual words Wl*, Wg* most similar to each local feature Local_j and each global context feature Global_j. The local feature quantization can be expressed as

min_i ||Local_j − Wl_i||²,  s.t. Wl_i ∈ Wl

and the global feature quantization as

min_i ||Global_j − Wg_i||²,  s.t. Wg_i ∈ Wg.

Describing the local and global context features of each feature point by its most similar visual word, the quantized features of an image can be denoted L̂ocal_i and Ĝlobal_i.
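The argmin above amounts to a nearest-neighbor lookup in the dictionary; a brute-force sketch (illustrative name, not the patent's implementation, which would use FLANN for speed):

```python
def quantize(feature, dictionary):
    """Return the index of the visual word nearest to `feature`,
    i.e. the argmin of the squared Euclidean distance in the formulas above."""
    best, best_d = 0, float("inf")
    for i, word in enumerate(dictionary):
        d = sum((a - b) ** 2 for a, b in zip(feature, word))
        if d < best_d:
            best, best_d = i, d
    return best
```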
Step 5, generating the inverted index files.
The information described by visual-word histograms is often sparse; therefore the image information is described with inverted index files InvertedFile = {WordList_i}, i = 1, ..., M, where M is the size of the visual dictionary and WordList_i is the feature index list corresponding to visual word Word_i. This both speeds up subsequent image retrieval and avoids the sparseness of histogram descriptions. Each feature index list is expressed as WordList_i = {imgID, Tf, othermeta}_n, where imgID records which image the feature point comes from, Tf is the number of times the feature point appears in image imgID, and othermeta is other element information, such as the coordinates of the feature point or Hamming-code information. The concrete structure is shown in Fig. 4.
For each image in the image library, the feature points are described with inverted index files. For the two visual dictionaries Wg = {Wg_1, Wg_2, Wg_3, ..., Wg_{k_global}} and Wl = {Wl_1, Wl_2, Wl_3, ..., Wl_{k_local}}, two inverted index files are established, InvertedFileLocal = {LWordList_i} and InvertedFileGlobal = {GWordList_i}. From these files the inverse document frequencies of Wl_i and Wg_i, which describe the descriptive power of the words in images, are obtained as

IDF_Wl_i = log(N / ||LWordList_i||_0),  IDF_Wg_i = log(N / ||GWordList_i||_0)

where N is the number of images, ||LWordList_i||_0 is the number of entries in the local-feature index list LWordList_i, and ||GWordList_i||_0 is the number of entries in the global-feature index list GWordList_i.
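Building an inverted index with term frequencies and the IDF weights above can be sketched as follows. This is an illustrative simplification (it keeps only imgID and Tf per posting, omitting othermeta); the function name is hypothetical.

```python
import math
from collections import Counter, defaultdict

def build_inverted_index(quantized_images):
    """quantized_images: {img_id: [word_id, ...]} for one dictionary.
    Returns ({word_id: [(img_id, tf), ...]}, {word_id: idf}) where
    idf = log(N / number-of-entries-in-the-word's-posting-list)."""
    index = defaultdict(list)
    for img_id, words in quantized_images.items():
        for word_id, tf in Counter(words).items():
            index[word_id].append((img_id, tf))
    n = len(quantized_images)
    idf = {w: math.log(n / len(postings)) for w, postings in index.items()}
    return dict(index), idf
```

A word that appears in every image gets IDF 0 and contributes nothing to scoring, which is exactly the behavior the log(N/...) weighting is meant to produce.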
Step 6, image retrieval.
Denote the query image by query. Its local features and global context shape information are extracted as in Step 1 and Step 2 and quantized as in Step 4. For each local feature and global context feature, the corresponding feature index list is found, and the imgID and Tf information stored in the list is used to compute the similarity between the query image query_j and the images in the image library.
For a query image query, its local-feature similarity to any image I in the library can be described as

scoreLocal = ( Σ_{L̂ocal_query = L̂ocal_I} IDF_Wl² ) / ( ||h_query||₂ · ||h_I||₂ )

where ||h_query||₂ and ||h_I||₂ are the 2-norms of the visual-word frequency histograms h_query and h_I of the images query and I, and L̂ocal_query, L̂ocal_I denote quantized local feature points of query and I respectively. The global context feature similarity scoreGlobal is obtained analogously. With scoreLocal_ji the final local-feature similarity and scoreGlobal_ji the final global-context-feature similarity, the final similarity of query image query_j to an image in the image library is

Score = λ·scoreGlobal_ji + (1 − λ)·scoreLocal_ji,  λ ∈ (0, 1]

where λ is the weight of the global features, scoreLocal is the local-feature similarity, and scoreGlobal is the global-context-feature similarity.
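The scoring step can be sketched as follows: an IDF-weighted cosine-style similarity over visual-word term-frequency histograms, then the λ-weighted fusion of the global and local scores. This is one plausible reading of the scoring above, with illustrative function names; histograms are assumed to be `{word_id: tf}` dicts.

```python
import math

def histogram_similarity(h_query, h_img, idf):
    """Cosine-style similarity between two visual-word tf histograms,
    with matched words weighted by squared IDF."""
    num = sum(h_query.get(w, 0) * h_img.get(w, 0) * idf.get(w, 1.0) ** 2
              for w in h_query)
    nq = math.sqrt(sum(v * v for v in h_query.values()))
    ni = math.sqrt(sum(v * v for v in h_img.values()))
    return num / (nq * ni) if nq and ni else 0.0

def final_score(score_global, score_local, lam=0.5):
    """Score = lam * scoreGlobal + (1 - lam) * scoreLocal, lam in (0, 1]."""
    if not (0 < lam <= 1):
        raise ValueError("lambda must lie in (0, 1]")
    return lam * score_global + (1 - lam) * score_local
```

Setting λ close to 1 ranks almost entirely by global context shape; λ near 0 ranks almost entirely by local SIFT words.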
The community image retrieval system of the present invention comprises an image local feature extraction module, an image global feature extraction module, a visual dictionary acquisition module, an image feature quantization module, an inverted index file module, and an image retrieval module. The image local feature extraction module extracts the local features of an image; the image global feature extraction module extracts the image global context features; the visual dictionary acquisition module trains visual words to obtain the local-feature visual dictionary and the global-context-feature visual dictionary; the image feature quantization module finds the visual word most similar to each local feature and each global context feature; the inverted index file module describes the image information; and the image retrieval module computes the similarity between the query image and the images in the image library.
The image local feature extraction module extracts image local features using the SIFT feature extraction method.
The image global feature extraction module obtains the image global context features through image edge detection, global context region selection, and global context shape feature description. In the image edge detection, the edge information of the image is detected with the Canny edge detection algorithm; to improve the efficiency of the subsequent global context shape feature computation, the resulting edge image is down-sampled to factor times its original size, and the down-sampling factor can also be determined according to the size of the image. In the global context region selection, taking the detected feature point X = (x, y)^T as reference, its down-sampled coordinate is X̂ = (factor·x, factor·y)^T; with X̂ as the center, a circle of radius r = k·σ is drawn and taken as the context region of X, where σ is the scale of the feature point and k controls the radius. Taking the orientation of the feature point as reference, the circle of radius k·σ is divided into 60 regions. As shown in Fig. 3, the angular direction is divided into 12 equal sectors, each of size π/6, and the radial direction into 5 rings, with the feature point as center and the radius increasing in steps of r/5: the radius span of the first ring is (0, r/5], that of the second ring is (r/5, 2r/5], and so on, up to (4r/5, r] for the last ring.
The similarity between the query image and an image in the image library in the image retrieval module is:

Score = λ·scoreGlobal_ji + (1 − λ)·scoreLocal_ji,  λ ∈ (0, 1]

where λ is the weight of the global features, scoreLocal is the local-feature similarity, and scoreGlobal is the global-context-feature similarity.
Claims (10)
1. A community image retrieval method including Step 1, extracting image local features, characterized by further comprising the following steps:
Step 2: extracting image global context features;
Step 3: training visual words to obtain a local-feature visual dictionary and a global-context-feature visual dictionary;
Step 4: quantizing image features;
Step 5: generating inverted index files;
Step 6: image retrieval.
2. The community image retrieval method according to claim 1, characterized in that in Step 1 the image local features are extracted using the SIFT feature extraction method.
3. The community image retrieval method according to claim 1, characterized in that the extraction of image global context features in Step 2 comprises:
(1) image edge detection;
(2) global context region selection;
(3) global context shape feature description.
4. The community image retrieval method according to claim 3, characterized in that the image edge detection in step (1) detects the edge information of the image using the Canny edge detection algorithm.
5. The community image retrieval method according to claim 3, characterized in that the global context region selection in step (2) comprises: taking the feature point X = (x, y)^T detected in step (1) as reference, its down-sampled coordinate is X̂ = (factor·x, factor·y)^T; with X̂ as the center, a circle of radius r = k·σ is drawn and taken as the context region of X, where σ is the scale of the feature point and k controls the radius; taking the orientation of the feature point as reference, the circle of radius k·σ is divided into 60 regions, the angular direction being divided into 12 equal sectors, each of size π/6, and the radial direction into 5 rings with the feature point as center and the radius increasing in steps of r/5.
6. The community image retrieval method according to claim 3, characterized in that the global context shape feature description in step (3) counts the number of edge points in each of the regions divided in step (2).
7. The community image retrieval method according to claim 1, characterized in that the local feature quantization in the image feature quantization of Step 4 is:

min_i ||Local_j − Wl_i||²,  s.t. Wl_i ∈ Wl

where Local_j is the j-th local feature of the image.
8. The community image retrieval method according to claim 1, characterized in that the global feature quantization in the image feature quantization of Step 4 is:

min_i ||Global_j − Wg_i||²,  s.t. Wg_i ∈ Wg

where Global_j is the j-th global feature of the image.
9. The community image retrieval method according to claim 1, characterized in that the importance of a visual word in the inverted index files generated in Step 5 is:

IDF_Wl_i = log(N / ||LWordList_i||_0),  IDF_Wg_i = log(N / ||GWordList_i||_0)

where N is the number of images, ||LWordList_i||_0 is the number of entries in the local-feature index list LWordList_i, and ||GWordList_i||_0 is the number of entries in the global-feature index list GWordList_i.
10. The community image retrieval method according to claim 1, characterized in that the image retrieval in Step 6 computes the similarity between the query image and the images in the image library.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610102197.9A CN107423294A (en) | 2016-02-25 | 2016-02-25 | A kind of community image search method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610102197.9A CN107423294A (en) | 2016-02-25 | 2016-02-25 | A kind of community image search method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107423294A true CN107423294A (en) | 2017-12-01 |
Family
ID=60422120
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610102197.9A Pending CN107423294A (en) | 2016-02-25 | 2016-02-25 | A kind of community image search method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107423294A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110019871A (en) * | 2017-12-29 | 2019-07-16 | 上海全土豆文化传播有限公司 | Image search method and device |
CN110019910A (en) * | 2017-12-29 | 2019-07-16 | 上海全土豆文化传播有限公司 | Image search method and device |
CN111723240A (en) * | 2019-03-20 | 2020-09-29 | 杭州海康威视数字技术股份有限公司 | Image retrieval method and device and electronic equipment |
CN118364130A (en) * | 2024-06-18 | 2024-07-19 | 安徽省农业科学院农业经济与信息研究所 | Image retrieval method and system based on super dictionary |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101814147A (en) * | 2010-04-12 | 2010-08-25 | 中国科学院自动化研究所 | Method for realizing classification of scene images |
CN102968637A (en) * | 2012-12-20 | 2013-03-13 | 山东科技大学 | Complicated background image and character division method |
CN102982165A (en) * | 2012-12-10 | 2013-03-20 | 南京大学 | Large-scale human face image searching method |
CN103020975A (en) * | 2012-12-29 | 2013-04-03 | 北方工业大学 | Wharf and ship segmentation method combining multi-source remote sensing image characteristics |
CN103207879A (en) * | 2012-01-17 | 2013-07-17 | 阿里巴巴集团控股有限公司 | Method and equipment for generating image index |
CN103279745A (en) * | 2013-05-28 | 2013-09-04 | 东南大学 | Face identification method based on half-face multi-feature fusion |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101814147A (en) * | 2010-04-12 | 2010-08-25 | 中国科学院自动化研究所 | Method for realizing classification of scene images |
CN103207879A (en) * | 2012-01-17 | 2013-07-17 | 阿里巴巴集团控股有限公司 | Method and equipment for generating image index |
CN102982165A (en) * | 2012-12-10 | 2013-03-20 | 南京大学 | Large-scale human face image searching method |
CN102968637A (en) * | 2012-12-20 | 2013-03-13 | 山东科技大学 | Complicated background image and character division method |
CN103020975A (en) * | 2012-12-29 | 2013-04-03 | 北方工业大学 | Wharf and ship segmentation method combining multi-source remote sensing image characteristics |
CN103279745A (en) * | 2013-05-28 | 2013-09-04 | 东南大学 | Face identification method based on half-face multi-feature fusion |
Non-Patent Citations (2)
Title |
---|
冯钟葵 et al.: "Remote Sensing Data Reception and Processing Technology", 31 December 2015 * |
罗楠 et al.: "Pairwise feature point matching for images with repetitive patterns", Journal of Image and Graphics * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110019871A (en) * | 2017-12-29 | 2019-07-16 | 上海全土豆文化传播有限公司 | Image search method and device |
CN110019910A (en) * | 2017-12-29 | 2019-07-16 | 上海全土豆文化传播有限公司 | Image search method and device |
CN111723240A (en) * | 2019-03-20 | 2020-09-29 | 杭州海康威视数字技术股份有限公司 | Image retrieval method and device and electronic equipment |
CN118364130A (en) * | 2024-06-18 | 2024-07-19 | 安徽省农业科学院农业经济与信息研究所 | Image retrieval method and system based on super dictionary |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10922350B2 (en) | Associating still images and videos | |
Li et al. | GPS estimation for places of interest from social users' uploaded photos | |
JP6759844B2 (en) | Systems, methods, programs and equipment that associate images with facilities | |
CN106095829B (en) | Cross-media retrieval method based on deep learning and the study of consistency expression of space | |
CN104850633B (en) | A kind of three-dimensional model searching system and method based on the segmentation of cartographical sketching component | |
CN102693311B (en) | Target retrieval method based on group of randomized visual vocabularies and context semantic information | |
CN106202256B (en) | Web image retrieval method based on semantic propagation and mixed multi-instance learning | |
WO2017118427A1 (en) | Webpage training method and device, and search intention identification method and device | |
CN103473327A (en) | Image retrieval method and image retrieval system | |
CN111291177A (en) | Information processing method and device and computer storage medium | |
Abdul-Rashid et al. | Shrec’18 track: 2d image-based 3d scene retrieval | |
CN102902826A (en) | Quick image retrieval method based on reference image indexes | |
CN107423294A (en) | A kind of community image search method and system | |
Martinet et al. | A relational vector space model using an advanced weighting scheme for image retrieval | |
JP6017277B2 (en) | Program, apparatus and method for calculating similarity between contents represented by set of feature vectors | |
Imran et al. | Event recognition from photo collections via pagerank | |
CN105677830B (en) | A kind of dissimilar medium similarity calculation method and search method based on entity mapping | |
Dourado et al. | Event prediction based on unsupervised graph-based rank-fusion models | |
CN112818140B (en) | Image retrieval method based on multi-mode data augmentation | |
CN113761125A (en) | Dynamic summary determination method and device, computing equipment and computer storage medium | |
Ji et al. | Diversifying the image relevance reranking with absorbing random walks | |
Liu et al. | Creating descriptive visual words for tag ranking of compressed social image | |
Doulamis et al. | 3D modelling of cultural heritage objects from photos posted over the Twitter | |
JP5650607B2 (en) | Document search keyword presentation apparatus and method | |
Derakhshan et al. | A Review of Methods of Instance-based Automatic Image Annotation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20171201 |