CN108805148A - Method for processing an image and device for processing an image - Google Patents

Method for processing an image and device for processing an image

Info

Publication number
CN108805148A
CN108805148A (application CN201710295810.8A)
Authority
CN
China
Prior art keywords
image
model
similarity
image model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710295810.8A
Other languages
Chinese (zh)
Other versions
CN108805148B (English)
Inventor
曹琼 (Cao Qiong)
刘汝杰 (Liu Rujie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd
Priority to CN201710295810.8A
Publication of CN108805148A
Application granted
Publication of CN108805148B
Active legal status
Anticipated expiration

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The exemplary embodiments disclosed herein relate to a method of processing images and a device for processing images. According to the method of processing images, at least one image model is generated by clustering a plurality of images, where each image model is represented by images that are similar to one another. If the number of images representing an image model exceeds a threshold, a visual dictionary is learned from the images representing that image model, and the image model is then represented by the visual dictionary instead of by its images.

Description

Method for processing an image and device for processing an image
Technical field
The exemplary embodiments disclosed herein relate to image processing. More specifically, the exemplary embodiments relate to the automatic classification or recognition of images.
Background technology
With the rapid development of fields such as digital products and the Internet, a large volume of image content is produced that urgently needs to be analyzed, recognized, organized, classified, and retrieved. Effectively recognizing image information is a research hotspot in multiple fields such as image processing, machine vision, pattern recognition, artificial intelligence, and neuroscience. Image classification is an important research topic among these.
Image classification is an image processing technique that assigns images to different target categories according to the different characteristics reflected in the image information. Common image classification methods can be divided into supervised classification methods and unsupervised classification methods.
The difference between supervised and unsupervised classification methods lies in whether prior knowledge of the categories is obtained from training data. A supervised classification method selects characteristic parameters from the samples provided by a training data set and establishes a discriminant function with which the images to be classified are classified. Supervised classification methods therefore depend on the selected training data. In contrast, unsupervised classification methods do not require prior knowledge and classify images based only on the natural clustering characteristics of the image data. Unsupervised classification methods are therefore simple and can achieve relatively high accuracy. One example of an unsupervised classification method is the K-means method.
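The K-means method mentioned above can be illustrated with a minimal, pure-Python sketch on toy 2-D feature vectors. The deterministic farthest-first seeding and the toy data are assumptions made here for reproducibility; they are not part of the patented method.

```python
import math

def kmeans(points, k, iters=10):
    """Minimal K-means with deterministic farthest-first seeding."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
            key=lambda p: min(math.dist(p, c) for c in centers)))
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        labels = [min(range(k), key=lambda c: math.dist(p, centers[c]))
                  for p in points]
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [p for p, a in zip(points, labels) if a == c]
            if members:
                centers[c] = tuple(sum(x) / len(members)
                                   for x in zip(*members))
    return labels

# Toy 2-D "image feature vectors" forming two natural groups.
features = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
            (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
labels = kmeans(features, k=2)
```

The two well-separated groups end up in two different clusters without any labeled training data, which is the defining property of an unsupervised method.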
Summary of the invention
According to one exemplary embodiment disclosed herein, a method of processing images is provided. According to this method, at least one image model is generated by clustering a plurality of images, where each image model is represented by images that are similar to one another. If the number of images representing an image model exceeds a threshold, a visual dictionary is learned from the images representing that image model, and the image model is then represented by the visual dictionary instead of by its images.
According to another exemplary embodiment disclosed herein, a method of processing an image is provided. According to this method, the similarities between the image and at least one image model are computed, and the image model corresponding to a similarity that is higher than a similarity threshold is identified as the image model to which the image belongs. If an image model is a first-type image model represented by at least one representative image, the similarity between the image and the image model is computed based on the similarities between the image and the representative images. If an image model is a second-type image model represented by a visual dictionary, the similarity between the image and the image model is computed based on the similarities between the features of the image and the visual words of the visual dictionary.
According to another exemplary embodiment disclosed herein, a device for processing images is provided that includes at least one processor. The at least one processor is configured to execute the methods of the exemplary embodiments disclosed herein.
The further features and advantages of the exemplary embodiments of the present invention, as well as the structure and operation of the exemplary embodiments, are described in detail below with reference to the accompanying drawings. It should be noted that the invention is not limited to the specific embodiments described herein. Such embodiments appear herein for illustrative purposes only. Persons skilled in the relevant art will recognize other embodiments based on the teachings contained herein.
Description of the drawings
The exemplary embodiments disclosed herein are illustrated by way of example in the accompanying drawings, but these examples impose no limitation on the invention. In the figures, similar elements are designated with like reference numerals, wherein:
Fig. 1 is a flowchart illustrating a method of generating image models according to an exemplary embodiment;
Fig. 2 is a flowchart illustrating an image classification method according to an exemplary embodiment;
Fig. 3 is a flowchart illustrating a similarity computation method according to an exemplary embodiment;
Fig. 4 is a flowchart illustrating a similarity computation method according to another exemplary embodiment;
Fig. 5 is a flowchart illustrating an image classification method according to another exemplary embodiment;
Fig. 6 is pseudo-code illustrating an image classification decision algorithm according to an exemplary embodiment;
Fig. 7 is a flowchart illustrating an image model merging method according to an exemplary embodiment;
Fig. 8 is a flowchart illustrating an image classification method as a modification of the exemplary embodiment of Fig. 2;
Fig. 9 is a flowchart illustrating an image classification method as a modification of the exemplary embodiment of Fig. 5;
Fig. 10 is a flowchart illustrating an image model updating method according to an exemplary embodiment;
Fig. 11 is a block diagram illustrating an exemplary system for implementing various aspects of the exemplary embodiments disclosed herein.
Detailed description of the embodiments
The exemplary embodiments disclosed herein are described below with reference to the accompanying drawings. It should be noted that, for the sake of clarity, representations and descriptions of parts and processes that are known to those skilled in the art but unrelated to the exemplary embodiments are omitted from the drawings and the description.
Those skilled in the art will understand that various aspects of the exemplary embodiments may be implemented as a system, a method, or a computer program product. Accordingly, the various aspects of the exemplary embodiments may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, and so on), or an embodiment combining software and hardware aspects, which may be referred to herein generally as a "circuit," "module," or "system." Furthermore, aspects of the exemplary embodiments may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon. The computer program may, for example, be distributed over a computer network, or may reside on one or more remote servers or be embedded in the memory of a device.
Any combination of one or more computer-readable media may be used. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any appropriate form, including, but not limited to, electromagnetic, optical, or any suitable combination thereof.
A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical fiber cable, radio frequency, and the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the exemplary embodiments disclosed herein may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages.
Aspects of the exemplary embodiments disclosed herein are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the exemplary embodiments. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/operations specified in the flowchart and/or block diagram blocks.
These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions that implement the functions/operations specified in the flowchart and/or block diagram blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide processes for implementing the functions/operations specified in the flowchart and/or block diagram blocks.
Fig. 1 is a flowchart of a method 100 of generating image models according to an exemplary embodiment.
As shown in Fig. 1, method 100 starts at step 101. In step 103, at least one image model O1, ..., ON (N >= 1) is generated by clustering a plurality of images I1, ..., IM. A clustering method classifies the images according to the similarity between images or the distances between images in a feature space. When clustering, the goal is to drive a certain clustering criterion to an extreme value, thereby obtaining a clustering result for the images. Examples of clustering methods include, but are not limited to, iterative dynamic clustering algorithms (such as the C-means and ISODATA algorithms) and non-iterative hierarchical clustering algorithms. Through clustering, the images I1, ..., IM are divided into different groups. The images within each group are similar to one another, or close to one another in the feature space. Such a group of images is referred to as an image model. Each image model Oi can be represented by the images Ij in Oi that it contains (also referred to as the representative images of the image model Oi). In this disclosure, such an image model is referred to as a first-type image model. Each image model Oi may be stored as the images Ij it contains, or as the features extracted from the feature points of each image Ij.
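The grouping of step 103 can be sketched as follows. This is a simplified single-pass stand-in for the clustering described above, on hypothetical toy features; the distance threshold and the first-fit assignment rule are assumptions for illustration, not the patented algorithm.

```python
import math

def build_image_models(images, dist_threshold):
    """Group images into first-type image models: an image joins the
    first model whose representative images are all within
    dist_threshold of it; otherwise it starts a new model."""
    models = []  # each model: list of (name, feature) representative images
    for name, feat in images:
        placed = False
        for model in models:
            if all(math.dist(feat, f) <= dist_threshold for _, f in model):
                model.append((name, feat))
                placed = True
                break
        if not placed:
            models.append([(name, feat)])  # start a new first-type model
    return models

# Hypothetical images with toy 2-D features: two natural groups.
images = [("a", (0.0, 0.0)), ("b", (0.1, 0.0)),
          ("c", (4.0, 4.0)), ("d", (4.1, 4.0))]
models = build_image_models(images, dist_threshold=1.0)
```

Each resulting model is represented by its member images, matching the first-type representation described above.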
In step 105, it is determined whether the number of images of the current image model Ok among the image models obtained in step 103 exceeds a threshold. If the number of images does not exceed the threshold, method 100 proceeds to step 111. If the number of images exceeds the threshold, a visual dictionary is learned in step 107 from the images representing the image model Ok. For example, the visual dictionary of the image model Ok can be obtained by extracting features from the images representing the image model Ok and clustering the extracted features.
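The dictionary learning of step 107, extracting local features and clustering them so that the cluster centers serve as visual words, can be sketched as below. The toy 2-D descriptors, the dictionary size, and the deterministic seeding are assumptions; a real embodiment would cluster SIFT or CN descriptors.

```python
import math

def learn_visual_dictionary(model_images_features, num_words, iters=15):
    """Pool the local features of a model's images and cluster them;
    the resulting cluster centers are the visual words (step 107)."""
    feats = [f for img in model_images_features for f in img]
    # Deterministic farthest-first initialization of the words.
    words = [feats[0]]
    while len(words) < num_words:
        words.append(max(feats,
            key=lambda f: min(math.dist(f, w) for w in words)))
    for _ in range(iters):
        buckets = [[] for _ in range(num_words)]
        for f in feats:
            nearest = min(range(num_words),
                          key=lambda i: math.dist(f, words[i]))
            buckets[nearest].append(f)
        words = [tuple(sum(x) / len(b) for x in zip(*b)) if b else w
                 for b, w in zip(buckets, words)]
    return words

# Local features of three images of one (hypothetical) image model.
model_feats = [[(0.0, 0.0), (6.0, 6.0)],
               [(0.1, 0.0), (6.1, 6.0)],
               [(0.0, 0.1), (6.0, 6.1)]]
dictionary = learn_visual_dictionary(model_feats, num_words=2)
```

The six pooled descriptors collapse into two visual words, one per natural feature cluster, which then represent the model in place of its images.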
In one example, multiple attributes can be extracted as image features so as to embed different cues in the image model. For example, the extracted features may include Scale-Invariant Feature Transform (SIFT) features and/or Color Name (CN) features.
In an example in which the extracted features include color name features, the color name feature of a local patch in an image can be computed as the mean of the color name features of all pixels in that patch. Taking an image retrieval system as an example, such systems usually use SIFT features, which describe local gradient distributions, and build a bag-of-words (BoW) retrieval system using an inverted index, where each entry corresponds to a visual word defined in the codebook of SIFT features. However, the reliance on SIFT features leads to other characteristics of the image (such as color) being ignored. This problem, together with the information loss during quantization, causes many false-positive matches. To enhance the discriminative power of SIFT visual words, color name features can be used, assigning an 11-D vector to each pixel. Around each detected feature point, a local patch with an area proportional to the feature point's scale can be obtained. The CN vectors of the pixels in that region are then computed, and the average CN vector is taken as the color feature.
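The patch-level averaging described above can be sketched directly. The per-pixel 11-D vectors below are made up for illustration: in practice the mapping from pixel color to the 11 color-name probabilities comes from a learned lookup table, which is outside this sketch.

```python
def patch_cn_feature(pixel_cn_vectors):
    """Average the 11-D color-name vectors of all pixels in a local
    patch to obtain the patch's CN feature."""
    n = len(pixel_cn_vectors)
    return [sum(v[d] for v in pixel_cn_vectors) / n for d in range(11)]

# Toy patch of three pixels; each row is a hypothetical 11-D CN vector
# whose entries sum to 1, as color-name probabilities do.
patch = [
    [1.0] + [0.0] * 10,
    [0.5, 0.5] + [0.0] * 9,
    [0.0, 1.0] + [0.0] * 9,
]
cn = patch_cn_feature(patch)
```

Because each input vector sums to 1, the averaged patch feature does too, so it remains interpretable as a distribution over the 11 color names.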
In step 109, instead of representing the image model by the images it contains, the image model is represented by the learned visual dictionary. In this disclosure, such an image model is referred to as a second-type image model. When an image model contains many images, storing and processing the images or their features incurs considerable overhead in practical applications. By comparison, representing the image model with a visual dictionary reduces this overhead.
In step 111, it is determined whether there is a next unprocessed image model among the image models obtained in step 103. If there is a next unprocessed image model, it is set as the current image model and method 100 returns to step 105. If there is no next unprocessed image model, method 100 ends at step 113.
Fig. 2 is a flowchart illustrating an image classification method 200 according to an exemplary embodiment. In the application scenario of this exemplary embodiment, the image model to which a query image q belongs is identified among image models O1, ..., ON (N >= 1) representing different classes, so that the class of the identified image model is recognized as the class to which the query image q belongs.
As shown in Fig. 2, method 200 starts at step 201. In step 203, it is determined whether the current image model Ok among the image models O1, ..., ON is of the first type or the second type.
If the current image model Ok is a first-type image model, then in step 205 the similarity between the query image q and the current image model Ok is computed based on the similarities between the query image q and the representative images of the current image model Ok.
If the current image model Ok is a second-type image model, then in step 207 the similarity between the query image q and the current image model Ok is computed based on the similarities between the features of the query image q and the visual words of the visual dictionary of the current image model Ok.
After the similarity is computed, it is determined in step 209 whether the computed similarity is higher than a similarity threshold. If the computed similarity is not higher than the similarity threshold, method 200 proceeds to step 213. If the computed similarity is higher than the similarity threshold, the corresponding current image model Ok is identified in step 211 as the image model to which the query image q belongs. Method 200 then proceeds to step 215.
In one example, similarity can be measured using the Euclidean distance. In another example, similarity can be measured using the Hamming distance. A smaller distance indicates a higher degree of correlation.
In one example, the image features include CN features. Considering that each dimension of a CN feature has a specific semantic meaning, each dimension of the CN feature can be binarized to generate a binary feature.
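Binarizing each CN dimension and then comparing with the Hamming distance, as suggested above, can be sketched like this; the 0.5 binarization threshold is an assumption for illustration.

```python
def binarize(vec, thresh=0.5):
    """Turn each semantic dimension of a CN vector into a bit."""
    return [1 if x >= thresh else 0 for x in vec]

def hamming_distance(a, b):
    """Number of differing bits; a smaller distance indicates a
    higher degree of correlation."""
    return sum(x != y for x, y in zip(a, b))

q = binarize([0.9, 0.1, 0.6, 0.2])  # toy 4-D vectors for brevity
m = binarize([0.8, 0.2, 0.1, 0.3])
d = hamming_distance(q, m)  # the two codes differ only in one dimension
```

Binary codes make the comparison cheap: the Hamming distance is a simple bit count, which suits large-scale matching.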
In one example, the similarities of all features of the query image (such as SIFT features and/or CN features) can be combined, for example by an arithmetic mean or a weighted sum, to obtain the final similarity.
In step 213, it is determined whether there is a next unprocessed image model among the image models O1, ..., ON. If there is a next unprocessed image model, it is set as the current image model and method 200 returns to step 203. If there is no next unprocessed image model, method 200 ends at step 215.
In the method shown in Fig. 2, the first current image model found in step 209 whose similarity is higher than the similarity threshold is identified as the image model to which the query image belongs. In one illustrative modified embodiment, after the similarities between the query image and all image models have been computed, the image model corresponding to the single highest similarity above the similarity threshold can be identified as the image model to which the query image belongs. In another illustrative modified embodiment, after the similarities between the query image and all image models have been computed, the image models corresponding to at least two equal highest similarities above the similarity threshold can be identified as the image models to which the query image belongs. In yet another illustrative modified embodiment, after the similarities between the query image and all image models have been computed, the image models corresponding to at least two similarities that are above the similarity threshold, meet a predetermined closeness criterion with respect to one another, and are higher than the other similarities can be identified as the image models to which the query image belongs.
Fig. 3 is a flowchart of the similarity computation method of step 205 according to an exemplary embodiment.
As shown in Fig. 3, method 300 starts at step 301. In step 303, feature points p1, ..., pL are identified in the query image q and the features of the feature points are extracted. In step 305, for the current feature point pt of the query image q, a feature point is selected from one of the representative images of the current image model Ok, where the degree of closeness between the feature of pt and the feature of the selected feature point of the representative image meets a predetermined requirement, for example the similarity is higher than a threshold level, the distance is lower than a threshold level, the similarity is the highest, or the distance is the smallest. The similarity St,k between the feature of pt and the feature of the correspondingly selected feature point is then computed.
In step 307, it is determined whether there is a next unprocessed feature point among the feature points of the query image q. If there is a next unprocessed feature point among the feature points of the query image q, the method switches to the next unprocessed feature point and returns to step 305. If there is no next unprocessed feature point among the feature points of the query image q, then in step 309 the similarity Sk between the query image q and the current image model Ok is computed based on the similarities S1,k, ..., SL,k between the features of the feature points p1, ..., pL of the query image q and the features of the correspondingly selected feature points. For example, the similarity Sk can be computed as the arithmetic mean or the weighted sum of the per-feature-point similarities S1,k, ..., SL,k.
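The per-feature-point computation of method 300, matching each query feature to its closest feature among the model's representative images and then averaging the per-point similarities, can be sketched as follows. The 1/(1 + distance) similarity and the equal weights are assumptions; the patent leaves the exact measure open.

```python
import math

def model_similarity(query_feats, representative_feats):
    """For each query feature point, find the closest feature among
    the model's representative images (step 305), then average the
    per-point similarities (step 309)."""
    per_point = []
    for qf in query_feats:
        best = min(math.dist(qf, rf) for rf in representative_feats)
        per_point.append(1.0 / (1.0 + best))  # assumed similarity measure
    return sum(per_point) / len(per_point)

query = [(0.0, 0.0), (1.0, 1.0)]                 # toy query features
reps = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]      # pooled features of O_k
s = model_similarity(query, reps)
```

Exact matches give per-point similarity 1.0, so a query whose features all appear in the model scores the maximum.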
In step 311, it is determined whether there is a next unprocessed image model among the image models O1, ..., ON. If there is a next unprocessed image model among the image models O1, ..., ON, the method switches to the next unprocessed image model and returns to step 305. If there is no next unprocessed image model among the image models O1, ..., ON, the method ends at step 313.
Those skilled in the art will observe that the feature extraction of step 303 can also be performed outside the flow of method 300, for example at a point in the flow of method 200 before the similarity computation of step 205.
Fig. 4 is a flowchart of the similarity computation method of step 207 according to an exemplary embodiment.
As shown in Fig. 4, method 400 starts at step 401. In step 403, feature points p1, ..., pL are identified in the query image q and the features of the feature points are extracted. In step 405, for the current feature point pt of the query image q, a visual word is selected from the visual dictionary of the current image model Ok, where the degree of closeness between the feature of pt and the selected visual word meets a predetermined requirement, for example the similarity is higher than a threshold level, the distance is lower than a threshold level, the similarity is the highest, or the distance is the smallest. The similarity St,k between the feature of pt and the correspondingly selected visual word is then computed.
In step 407, it is determined whether there is a next unprocessed feature point among the feature points of the query image q. If there is a next unprocessed feature point among the feature points of the query image q, the method switches to the next unprocessed feature point and returns to step 405. If there is no next unprocessed feature point among the feature points of the query image q, then in step 409 the similarity Sk between the query image q and the current image model Ok is computed based on the similarities S1,k, ..., SL,k between the features of the feature points p1, ..., pL of the query image q and the correspondingly selected visual words. For example, the similarity Sk can be computed as the arithmetic mean or the weighted sum of the per-feature-point similarities S1,k, ..., SL,k.
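Method 400 differs from method 300 only in that each query feature is matched against the visual words of the dictionary rather than the features of representative images. A sketch, again with an assumed 1/(1 + distance) similarity:

```python
import math

def similarity_to_dictionary(query_feats, visual_words):
    """Match each query feature point to its nearest visual word
    (step 405) and average the per-point similarities (step 409)."""
    sims = []
    for qf in query_feats:
        nearest = min(visual_words, key=lambda w: math.dist(qf, w))
        sims.append(1.0 / (1.0 + math.dist(qf, nearest)))
    return sum(sims) / len(sims)

words = [(0.0, 0.0), (6.0, 6.0)]  # toy visual dictionary of O_k
s_near = similarity_to_dictionary([(0.1, 0.0)], words)  # close to a word
s_far = similarity_to_dictionary([(3.0, 3.0)], words)   # between the words
```

A query feature that lies near a visual word yields a higher model similarity than one far from every word, which is the intended effect of the second-type representation.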
In step 411, it is determined whether there is a next unprocessed image model among the image models O1, ..., ON. If there is a next unprocessed image model among the image models O1, ..., ON, the method switches to the next unprocessed image model and returns to step 405. If there is no next unprocessed image model among the image models O1, ..., ON, the method ends at step 413.
Those skilled in the art will observe that the feature extraction of step 403 can also be performed outside the flow of method 400, for example at a point in the flow of method 200 before the similarity computation of step 207.
When identifying the image model to which the query image belongs according to the similarities between the query image and the image models, it may happen that the similarities between the query image and several image models are high and close to one another. In this case, these image models can be identified as the image models to which the query image belongs, as in the embodiment described in conjunction with Fig. 2, or these image models can be merged into one image model, and the image model to which the query image belongs can be identified among the merged image models.
Fig. 5 is a flowchart illustrating an image classification method 500 according to an exemplary embodiment, which includes the merging of image models.
As shown in Fig. 5, method 500 starts at step 501. In step 503, it is determined whether the current image model Ok among the image models O1, ..., ON is of the first type or the second type.
If the current image model Ok is a first-type image model, then in step 505 the similarity between the query image q and the current image model Ok is computed based on the similarities between the query image q and the representative images of the current image model Ok.
If the current image model Ok is a second-type image model, then in step 507 the similarity between the query image q and the current image model Ok is computed based on the similarities between the features of the query image q and the visual words of the visual dictionary of the current image model Ok.
After the similarity is computed, it is determined in step 509 whether there is a next unprocessed image model among the image models O1, ..., ON. If there is a next unprocessed image model, it is set as the current image model and method 500 returns to step 503. If there is no next unprocessed image model, it is determined in step 511 whether there are at least two image models with high similarities whose degree of closeness meets a predetermined condition, for example whether, among the similarities above the similarity threshold, there are at least two similarities that meet a predetermined closeness criterion with respect to one another and are higher than the other similarities. If there are no such at least two similarities, method 500 proceeds to step 515. If there are such at least two similarities, then in step 513 the image models corresponding to these at least two similarities are merged into one image model, and the similarity between the query image and the merged image model is computed.
In step 515, the image model with the highest similarity above the threshold is identified as the image model to which the query image belongs. Method 500 then ends at step 517.
In the processing of step 513, the merged image model can also be identified directly as the image model to which the query image belongs after the merging, without computing the similarity between the query image and the merged image model and without executing step 515.
Fig. 6 is pseudo-code illustrating an image classification decision algorithm according to an exemplary embodiment, which provides a specific example of the merging and identification logic.
In the example shown in Fig. 6, assume there are a query image q and n current image models. The computed similarities between the query image and the image models are arranged in descending order as Sk1, Sk2, ..., Skn, where kj is the index of an image model. In the example shown in Fig. 6, the closeness of two similarities is measured by the ratio R between them. If the ratio R is greater than a threshold th2, the similarities are determined not to be close; otherwise the similarities are determined to be close. In the example shown in Fig. 6, if the two highest similarities are found to be close, the corresponding image models are merged and the iterative processing is restarted.
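The decision logic of Fig. 6 can be sketched as follows. The ratio R = Sk1 / Sk2 and the threshold th2 come from the description above; the concrete return values and the representation of the merge decision are assumptions made for this sketch.

```python
def classify(similarities, sim_threshold, th2):
    """Decide which model(s) a query belongs to, in the style of Fig. 6.
    `similarities` maps model id -> similarity to the query.
    Returns ("none", None), ("single", model_id), or
    ("merge", [id1, id2]) when the two best similarities are too close."""
    ranked = sorted(similarities.items(), key=lambda kv: kv[1], reverse=True)
    (k1, s1), *rest = ranked
    if s1 <= sim_threshold:
        return ("none", None)            # nothing above the threshold
    if rest:
        k2, s2 = rest[0]
        # Ratio R = s1/s2 at most th2 means the two are "close":
        # merge the corresponding models and (in Fig. 6) restart.
        if s2 > sim_threshold and s1 / s2 <= th2:
            return ("merge", [k1, k2])
    return ("single", k1)                # clear winner

result = classify({"A": 0.9, "B": 0.88, "C": 0.2},
                  sim_threshold=0.5, th2=1.1)
```

With th2 = 1.1, the similarities 0.9 and 0.88 (ratio about 1.02) count as close, so models A and B would be merged before re-running the identification.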
Fig. 7 is a flowchart illustrating an image model merging method 700 according to an exemplary embodiment.
As shown in Fig. 7, method 700 starts at step 701. In step 703, it is determined whether any of the at least two image models to be merged is a second-type image model (represented by a visual dictionary). If none of the at least two image models to be merged is a second-type image model, method 700 proceeds to step 709. In step 705, if any of the at least two image models to be merged is a second-type image model, a visual dictionary for representing the merged image model is learned from the representations of the at least two image models to be merged. The representation of an image model may be either the representative images themselves or a visual dictionary. If an image model being merged is represented by representative images, the features of feature points are extracted from the representative images, and the extracted features are clustered to learn the visual dictionary representing the merged image model. If the image models being merged are represented by visual dictionaries, the visual words of these visual dictionaries are clustered to learn the visual dictionary representing the merged image model. If, among the image models being merged, there are both models represented by visual dictionaries and models represented by representative images, the features of feature points are extracted from the representative images, and the extracted features together with the visual words of the visual dictionaries are clustered to learn the visual dictionary representing the merged image model. Then, in step 707, the merged image model is represented by the learned visual dictionary. Method 700 then ends at step 713.
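The case in step 705 where both models are already represented by visual dictionaries can be sketched as re-clustering the pooled visual words into a new dictionary. The target dictionary size and the deterministic seeding are assumptions for this sketch.

```python
import math

def merge_dictionaries(dict_a, dict_b, num_words, iters=10):
    """Pool the visual words of two dictionaries and re-cluster them
    into num_words words for the merged image model (step 705)."""
    pool = list(dict_a) + list(dict_b)
    words = [pool[0]]
    while len(words) < num_words:  # deterministic farthest-first seeding
        words.append(max(pool,
            key=lambda w: min(math.dist(w, c) for c in words)))
    for _ in range(iters):
        buckets = [[] for _ in range(num_words)]
        for w in pool:
            buckets[min(range(num_words),
                        key=lambda i: math.dist(w, words[i]))].append(w)
        words = [tuple(sum(x) / len(b) for x in zip(*b)) if b else c
                 for b, c in zip(buckets, words)]
    return words

# Two toy dictionaries whose words pairwise overlap.
merged = merge_dictionaries([(0.0, 0.0), (5.0, 5.0)],
                            [(0.2, 0.0), (5.2, 5.0)], num_words=2)
```

Near-duplicate words from the two source dictionaries collapse into single merged words, keeping the merged model compact.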
In step 709, it is determined whether the number of representative images of the image models to be merged exceeds a threshold. If the number of representative images of the image models to be merged exceeds the threshold, method 700 proceeds to step 705. If the number of representative images of the image models to be merged does not exceed the threshold, the merged image model is represented in step 711 by the representative images of the image models to be merged. Method 700 then ends at step 713.
Determining the image model to which a query image belongs is the basic function of image classification. In addition, an image model may also be updated according to the query image. In the case where the image model to which the query image belongs has been identified, the identified image model can be updated by merging the query image into it.
Fig. 8 is a flow chart illustrating an image classification method 800 as a modification of the exemplary embodiment of Fig. 2.
As shown in Fig. 8, method 800 starts from step 801. In step 803, it is determined whether the current image model Ok among the image models O1-ON is of the first type or the second type.
If the current image model Ok is an image model of the first type, the similarity between the query image q and the current image model Ok is calculated in step 805 based on the similarities between the query image q and the representative images of the current image model Ok.
If the current image model Ok is an image model of the second type, the similarity between the query image q and the current image model Ok is calculated in step 807 based on the similarities between the features of the query image q and the vision words of the visual dictionary of the current image model Ok.
After the similarity is calculated, it is determined in step 809 whether the calculated similarity is higher than a similarity threshold. If the calculated similarity is not higher than the similarity threshold, method 800 proceeds to step 813. If the calculated similarity is higher than the similarity threshold, the corresponding current image model Ok is identified in step 811 as the image model to which the query image q belongs. In step 815, the identified image model is updated by merging the query image into it. Method 800 then proceeds to step 817.
In step 813, it is determined whether there is a next unprocessed image model among the image models O1-ON. If there is a next unprocessed image model, it is set as the current image model and method 800 returns to step 803. If there is no next unprocessed image model, method 800 ends at step 817.
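Method 800's scan over the models can be sketched like this; the dict-based model representation and the two caller-supplied similarity functions are illustrative assumptions, not the patent's prescribed interfaces:

```python
def classify_query(query, models, sim_threshold,
                   sim_to_representatives, sim_to_dictionary):
    """Sketch of method 800: scan models O_1..O_N and return the first model
    whose similarity to the query exceeds the threshold, or None.

    `sim_to_representatives` / `sim_to_dictionary` compute the similarity
    for first-type (representative-image) and second-type (visual-dictionary)
    models respectively; their exact form is left open here.
    """
    for model in models:
        if model["type"] == "first":   # represented by representative images
            s = sim_to_representatives(query, model["representatives"])
        else:                          # represented by a visual dictionary
            s = sim_to_dictionary(query, model["dictionary"])
        if s > sim_threshold:
            return model               # step 811: model identified
    return None                        # no model matched the query
```

The step-815 update (folding the query into the identified model) would follow on the returned model.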
In one modification of the exemplary embodiments of the image classification methods described above, if no image model to which the query image belongs is identified, a new image model can be established and identified as the image model to which the query image belongs. The new image model takes the query image as its representative image.
Fig. 9 is a flow chart illustrating an image classification method 900 as a modification of the exemplary embodiment of Fig. 5.
As shown in Fig. 9, method 900 starts from step 901. In step 903, it is determined whether the current image model Ok among the image models O1-ON is of the first type or the second type.
If the current image model Ok is an image model of the first type, the similarity between the query image q and the current image model Ok is calculated in step 905 based on the similarities between the query image q and the representative images of the current image model Ok.
If the current image model Ok is an image model of the second type, the similarity between the query image q and the current image model Ok is calculated in step 907 based on the similarities between the features of the query image q and the vision words of the visual dictionary of the current image model Ok.
After the similarities are calculated, it is determined in step 909 whether there is a next unprocessed image model among the image models O1-ON. If there is a next unprocessed image model, it is set as the current image model and method 900 returns to step 903. If there is no next unprocessed image model, it is determined in step 911 whether there exist at least two image models whose similarities are higher than the others and whose closeness satisfies a predetermined condition: for example, whether among the similarities higher than the similarity threshold there exist at least two similarities that are close to each other to a predetermined degree and higher than the remaining similarities. If no such at least two similarities exist, method 900 proceeds to step 915. If such at least two similarities exist, the image models corresponding to them are merged into one image model in step 913, and the similarity between the query image and the merged image model is calculated.
In step 915, the image model with the highest similarity above the threshold is identified as the image model to which the query image belongs. In step 917, the identified image model is updated by merging the query image into it. Method 900 then ends at step 919.
As a variant of the processing of step 913, the merged image model may also be identified directly, immediately after the merging, as the image model to which the query image belongs, in which case the method proceeds to step 917.
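Steps 909 through 915, including the merge decision of step 913, might look like the following sketch. The ratio-based closeness test is borrowed from the Fig. 6 example, and all names are hypothetical:

```python
def pick_model_with_merge(sims, sim_threshold, ratio_threshold):
    """Sketch of steps 909-915. `sims` maps a model id to its query similarity.

    If the two best similarities both clear sim_threshold and their ratio says
    they are 'close' (best/second <= ratio_threshold), the two models are
    flagged for merging (step 913). Otherwise the single best model above the
    threshold is identified (step 915), or None if nothing qualifies.
    """
    ranked = sorted(sims.items(), key=lambda kv: kv[1], reverse=True)
    above = [(m, s) for m, s in ranked if s > sim_threshold]
    if len(above) >= 2 and above[0][1] / above[1][1] <= ratio_threshold:
        return ("merge", [above[0][0], above[1][0]])
    if above:
        return ("identify", above[0][0])
    return ("none", None)
```

In the "merge" case the caller would rebuild the merged model (method 700) and re-score the query against it, as step 913 describes.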
Fig. 10 is a flow chart illustrating an image model update method 1000 according to an exemplary embodiment.
As shown in Fig. 10, method 1000 starts from step 1001. In step 1003, it is determined whether the image model to be updated is an image model of the second type (represented by a visual dictionary). If the image model to be updated is not of the second type, method 1000 proceeds to step 1009. In step 1005, if the image model to be updated is of the second type, features of feature points are extracted from the query image, and the extracted features together with the vision words of the visual dictionary of the image model to be updated are clustered to learn a visual dictionary representing the updated image model. If the image model to be updated is of the first type (represented by representative images), features of feature points are extracted from the query image and the representative images, and the extracted features are clustered to learn a visual dictionary representing the updated image model. Then in step 1007 the updated image model is represented by the learned visual dictionary. Method 1000 then ends at step 1013.
In step 1009, it is determined whether the number of representative images of the image model to be updated plus one exceeds a threshold. If the number of representative images of the image model to be updated plus one exceeds the threshold, method 1000 proceeds to step 1005. If the number of representative images of the image model to be updated plus one does not exceed the threshold, the updated image model is represented in step 1011 by the query image and the representative images of the image model to be updated. Method 1000 then ends at step 1013.
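Method 1000's update logic can be sketched as follows; the dict-based model representation and the injected `learn_dictionary` routine are assumptions for illustration:

```python
def update_model(model, query_features, rep_threshold, learn_dictionary):
    """Sketch of method 1000: fold a newly classified query image into its model.

    For a second-type (dictionary) model, or when adding the query would push
    the representative count past rep_threshold, a new visual dictionary is
    learned over the pooled material (`learn_dictionary` stands in for the
    clustering of step 1005). Otherwise the query simply joins the
    representatives (step 1011).
    """
    if model["type"] == "second":
        words = learn_dictionary([model["dictionary"], query_features])
        return {"type": "second", "dictionary": words}
    if len(model["representatives"]) + 1 > rep_threshold:
        pooled = model["representatives"] + [query_features]
        return {"type": "second", "dictionary": learn_dictionary(pooled)}
    return {"type": "first",
            "representatives": model["representatives"] + [query_features]}
```

Note how a first-type model silently graduates to a second-type model once its representative count would exceed the threshold, mirroring the step 1009 to step 1005 transition.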
Fig. 11 is a block diagram illustrating an exemplary system for implementing various aspects of the exemplary embodiments disclosed herein.
In Fig. 11, a central processing unit (CPU) 1101 executes various processing according to a program stored in a read-only memory (ROM) 1102 or a program loaded from a storage section 1108 into a random access memory (RAM) 1103. Data required when the CPU 1101 executes the various processing are also stored in the RAM 1103 as needed.
The CPU 1101, the ROM 1102 and the RAM 1103 are connected to each other via a bus 1104. An input/output interface 1105 is also connected to the bus 1104.
The following components are connected to the input/output interface 1105: an input section 1106 including a keyboard, a mouse and the like; an output section 1107 including a display such as a cathode-ray tube (CRT) or a liquid crystal display (LCD), a loudspeaker and the like; a storage section 1108 including a hard disk and the like; and a communication section 1109 including a network interface card such as a LAN card, a modem and the like. The communication section 1109 performs communication processing via a network such as the Internet.
A drive 1110 is also connected to the input/output interface 1105 as needed. A removable medium 1111 such as a magnetic disk, an optical disc, a magneto-optical disc or a semiconductor memory is mounted on the drive 1110 as needed, so that a computer program read therefrom is installed into the storage section 1108 as needed.
In the case where the above steps and processing are implemented by software, the program constituting the software is installed from a network such as the Internet or from a storage medium such as the removable medium 1111.
The terms used herein are for the purpose of describing particular embodiments only and are not intended to limit the invention. The singular forms "a" and "the" used herein are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the word "comprise", when used in this specification, indicates the presence of the stated features, integers, steps, operations, units and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, units and/or components, and/or combinations thereof.
The corresponding structures, materials, operations and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material or operation for performing the function in combination with other claimed elements as specifically claimed. The foregoing description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or to limit the invention to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, and to enable others of ordinary skill in the art to understand that the invention can have various embodiments with various modifications suited to the particular use contemplated.
The following exemplary embodiments (each denoted as a "Note") are also described herein.
Note 1. A method of processing an image, comprising:
generating at least one image model by clustering a plurality of images, wherein each image model is represented by images in the plurality of images that are similar to each other; and
if the number of images representing an image model exceeds a threshold, learning a visual dictionary according to the images representing the image model, and representing the image model with the visual dictionary instead of the images of the image model.
Note 2. The method according to Note 1, wherein the learning is based on features extracted from the images representing the image model, and the features include scale-invariant feature transform features and/or color name features.
Note 3. The method according to Note 2, wherein in the case where the features include color name features, the color name feature of a local block is calculated as the mean of the color name features of all pixels in the local block.
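The local-block pooling of Note 3 can be sketched as follows, assuming the per-pixel color name descriptors (an (h, w, k) array of assignments over k color names) have already been computed by some upstream step:

```python
import numpy as np

def block_color_name_feature(cn_per_pixel):
    """Sketch of Note 3: the color name feature of a local block is the mean
    of the per-pixel color name descriptors over all pixels in the block."""
    cn = np.asarray(cn_per_pixel, dtype=float)
    # flatten the spatial dimensions, keep the k color-name channels
    return cn.reshape(-1, cn.shape[-1]).mean(axis=0)
```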
Note 4. A method of processing an image, comprising:
calculating similarities between the image and at least one image model; and
identifying the image model corresponding to a higher similarity above a similarity threshold as the image model to which the image belongs,
wherein if an image model is a first-type image model represented by at least one representative image, the similarity between the image and the image model is calculated based on the similarities between the image and the representative images, and
if an image model is a second-type image model represented by a visual dictionary, the similarity between the image and the image model is calculated based on the similarities between the features of the image and the vision words of the visual dictionary.
Note 5. The method according to Note 4, wherein the features on which the visual dictionary is based include scale-invariant feature transform features and/or color name features.
Note 6. The method according to Note 4, wherein the calculating of the similarity comprises:
if an image model is a first-type image model,
for each feature point of the image, selecting a feature point from a representative image among the representative images of the image model, wherein the closeness between the feature of the feature point of the image and the feature of the selected feature point of the representative image satisfies a predetermined requirement; and
calculating the similarity between the image and the image model based on the similarities between the features of the feature points of the image and the features of the correspondingly selected feature points.
Note 7. The method according to Note 4, wherein the calculating of the similarity comprises:
if an image model is a second-type image model,
for each feature point of the image, selecting a vision word from the visual dictionary of the image model, wherein the closeness between the feature of the feature point of the image and the selected vision word satisfies a predetermined requirement; and
calculating the similarity between the image and the image model based on the similarities between the features of the feature points of the image and the correspondingly selected vision words.
Note 8. The method according to Note 6 or 7, wherein the similarity is calculated as a weighted sum of the similarities of the features of the feature points.
Note 9. The method according to Note 6 or 7, wherein the features on which the calculating of the similarity is based include scale-invariant feature transform features and/or color name features.
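The combination step of Notes 6 through 9 can be sketched as follows; the uniform default weights are an assumption, since the notes leave the weighting open:

```python
def image_model_similarity(point_sims, weights=None):
    """Sketch of Notes 6-8: combine the per-feature-point similarities (each
    query feature point matched against its closest representative-image
    feature point or vision word) into one image-model similarity as a
    weighted sum. Uniform weights are used when none are supplied."""
    if weights is None:
        weights = [1.0 / len(point_sims)] * len(point_sims)
    return sum(w * s for w, s in zip(weights, point_sims))
```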
Note 10. The method according to Note 4, wherein the identifying comprises:
if there are at least two image models that have higher similarities whose closeness satisfies a predetermined condition, merging the at least two image models into one image model; and
identifying the merged image model as the image model to which the image belongs,
wherein if one of the at least two image models is represented by a visual dictionary, the merging comprises:
learning a visual dictionary for representing the merged image model according to the representations of the at least two image models.
Note 11. The method according to Note 10, wherein if the at least two image models are represented by representative images and the number of the representative images exceeds a threshold, the merging comprises:
learning a visual dictionary for representing the merged image model according to the representative images representing the at least two image models.
Note 12. The method according to Note 10, wherein if the at least two image models are represented by representative images and the number of the representative images does not exceed the threshold, the merging comprises:
representing the merged image model with the representative images representing the at least two image models.
Note 13. The method according to Note 4, 10, 11 or 12, further comprising:
if the identified image model is represented by a visual dictionary, learning a visual dictionary for representing the identified image model according to the visual dictionary representing the identified image model and the image.
Note 14. The method according to Note 4, 10, 11 or 12, further comprising:
if the identified image model is represented by representative images and the total number of the representative images and the image exceeds a threshold, learning a visual dictionary according to the representative images and the image to replace the representative images of the identified image model.
Note 15. The method according to Note 4, 10, 11 or 12, further comprising:
if the identified image model is represented by representative images and the total number of the representative images and the image does not exceed the threshold, representing the identified image model with the representative images and the image.
Note 16. An apparatus for processing an image, comprising:
at least one processor configured to:
generate at least one image model by clustering a plurality of images, wherein each image model is represented by images in the plurality of images that are similar to each other; and
if the number of images representing an image model exceeds a threshold, learn a visual dictionary according to the images representing the image model, and represent the image model with the visual dictionary instead of the images of the image model.
Note 17. The apparatus according to Note 16, wherein the learning is based on features extracted from the images representing the image model, and the features include scale-invariant feature transform features and/or color name features.
Note 18. The apparatus according to Note 17, wherein in the case where the features include color name features, the color name feature of a local block is calculated as the mean of the color name features of all pixels in the local block.
Note 19. An apparatus for processing an image, comprising:
at least one processor configured to:
calculate similarities between the image and at least one image model; and
identify the image model corresponding to a higher similarity above a similarity threshold as the image model to which the image belongs,
wherein if an image model is a first-type image model represented by at least one representative image, the similarity between the image and the image model is calculated based on the similarities between the image and the representative images, and
if an image model is a second-type image model represented by a visual dictionary, the similarity between the image and the image model is calculated based on the similarities between the features of the image and the vision words of the visual dictionary.
Note 20. The apparatus according to Note 19, wherein the features on which the visual dictionary is based include scale-invariant feature transform features and/or color name features.
Note 21. The apparatus according to Note 19, wherein the calculating of the similarity comprises:
if an image model is a first-type image model,
for each feature point of the image, selecting a feature point from a representative image among the representative images of the image model, wherein the closeness between the feature of the feature point of the image and the feature of the selected feature point of the representative image satisfies a predetermined requirement; and
calculating the similarity between the image and the image model based on the similarities between the features of the feature points of the image and the features of the correspondingly selected feature points.
Note 22. The apparatus according to Note 19, wherein the calculating of the similarity comprises:
if an image model is a second-type image model,
for each feature point of the image, selecting a vision word from the visual dictionary of the image model, wherein the closeness between the feature of the feature point of the image and the selected vision word satisfies a predetermined requirement; and
calculating the similarity between the image and the image model based on the similarities between the features of the feature points of the image and the correspondingly selected vision words.
Note 23. The apparatus according to Note 21 or 22, wherein the similarity is calculated as a weighted sum of the similarities of the features of the feature points.
Note 24. The apparatus according to Note 21 or 22, wherein the features on which the calculating of the similarity is based include scale-invariant feature transform features and/or color name features.
Note 25. The apparatus according to Note 19, wherein the identifying comprises:
if there are at least two image models that have higher similarities whose closeness satisfies a predetermined condition, merging the at least two image models into one image model; and
identifying the merged image model as the image model to which the image belongs,
wherein if one of the at least two image models is represented by a visual dictionary, the merging comprises:
learning a visual dictionary for representing the merged image model according to the representations of the at least two image models.
Note 26. The apparatus according to Note 25, wherein if the at least two image models are represented by representative images and the number of the representative images exceeds a threshold, the merging comprises:
learning a visual dictionary for representing the merged image model according to the representative images representing the at least two image models.
Note 27. The apparatus according to Note 25, wherein if the at least two image models are represented by representative images and the number of the representative images does not exceed the threshold, the merging comprises:
representing the merged image model with the representative images representing the at least two image models.
Note 28. The apparatus according to Note 19, 25, 26 or 27, wherein the processor is further configured to:
if the identified image model is represented by a visual dictionary, learn a visual dictionary for representing the identified image model according to the visual dictionary representing the identified image model and the image.
Note 29. The apparatus according to Note 19, 25, 26 or 27, wherein the processor is further configured to:
if the identified image model is represented by representative images and the total number of the representative images and the image exceeds a threshold, learn a visual dictionary according to the representative images and the image to replace the representative images of the identified image model.
Note 30. The apparatus according to Note 19, 25, 26 or 27, wherein the processor is further configured to:
if the identified image model is represented by representative images and the total number of the representative images and the image does not exceed the threshold, represent the identified image model with the representative images and the image.

Claims (10)

1. A method of processing an image, comprising:
calculating similarities between the image and at least one image model; and
identifying the image model corresponding to a higher similarity above a similarity threshold as the image model to which the image belongs,
wherein if an image model is a first-type image model represented by at least one representative image, the similarity between the image and the image model is calculated based on the similarities between the image and the representative images, and
if an image model is a second-type image model represented by a visual dictionary, the similarity between the image and the image model is calculated based on the similarities between the features of the image and the vision words of the visual dictionary.
2. The method according to claim 1, wherein the calculating of the similarity comprises:
if an image model is a first-type image model,
for each feature point of the image, selecting a feature point from a representative image among the representative images of the image model, wherein the closeness between the feature of the feature point of the image and the feature of the selected feature point of the representative image satisfies a predetermined requirement; and
calculating the similarity between the image and the image model based on the similarities between the features of the feature points of the image and the features of the correspondingly selected feature points.
3. The method according to claim 1, wherein the calculating of the similarity comprises:
if an image model is a second-type image model,
for each feature point of the image, selecting a vision word from the visual dictionary of the image model, wherein the closeness between the feature of the feature point of the image and the selected vision word satisfies a predetermined requirement; and
calculating the similarity between the image and the image model based on the similarities between the features of the feature points of the image and the correspondingly selected vision words.
4. The method according to claim 1, wherein the identifying comprises:
if there are at least two image models that have higher similarities whose closeness satisfies a predetermined condition, merging the at least two image models into one image model; and
identifying the merged image model as the image model to which the image belongs,
wherein if one of the at least two image models is represented by a visual dictionary, the merging comprises:
learning a visual dictionary for representing the merged image model according to the representations of the at least two image models.
5. The method according to claim 4, wherein if the at least two image models are represented by representative images and the number of the representative images exceeds a threshold, the merging comprises:
learning a visual dictionary for representing the merged image model according to the representative images representing the at least two image models.
6. The method according to claim 1, 4 or 5, further comprising:
if the identified image model is represented by a visual dictionary, learning a visual dictionary for representing the identified image model according to the visual dictionary representing the identified image model and the image.
7. The method according to claim 1, 4 or 5, further comprising:
if the identified image model is represented by representative images and the total number of the representative images and the image exceeds a threshold, learning a visual dictionary according to the representative images and the image to replace the representative images of the identified image model.
8. The method according to claim 1, 4 or 5, further comprising:
if the identified image model is represented by representative images and the total number of the representative images and the image does not exceed the threshold, representing the identified image model with the representative images and the image.
9. An apparatus for processing an image, comprising:
at least one processor configured to perform the method according to any one of claims 1 to 8.
10. A method of processing an image, comprising:
generating at least one image model by clustering a plurality of images, wherein each image model is represented by images in the plurality of images that are similar to each other; and
if the number of images representing an image model exceeds a threshold, learning a visual dictionary according to the images representing the image model, and representing the image model with the visual dictionary instead of the images of the image model.
CN201710295810.8A 2017-04-28 2017-04-28 Method of processing image and apparatus for processing image Active CN108805148B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710295810.8A CN108805148B (en) 2017-04-28 2017-04-28 Method of processing image and apparatus for processing image


Publications (2)

Publication Number Publication Date
CN108805148A true CN108805148A (en) 2018-11-13
CN108805148B CN108805148B (en) 2022-01-11

Family

ID=64069278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710295810.8A Active CN108805148B (en) 2017-04-28 2017-04-28 Method of processing image and apparatus for processing image

Country Status (1)

Country Link
CN (1) CN108805148B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101030230A (en) * 2007-04-18 2007-09-05 北京北大方正电子有限公司 Image searching method and system
US7532756B2 (en) * 2005-01-11 2009-05-12 Fujitsu Limited Grayscale character dictionary generation apparatus
CN102402621A (en) * 2011-12-27 2012-04-04 浙江大学 Image retrieval method based on image classification
CN102855492A (en) * 2012-07-27 2013-01-02 中南大学 Classification method based on mineral flotation foam image
CN103207870A (en) * 2012-01-17 2013-07-17 华为技术有限公司 Method, server, device and system for photo sort management
CN103778146A (en) * 2012-10-23 2014-05-07 富士通株式会社 Image clustering device and method
US8787692B1 (en) * 2011-04-08 2014-07-22 Google Inc. Image compression using exemplar dictionary based on hierarchical clustering
CN104462382A (en) * 2014-12-11 2015-03-25 北京中细软移动互联科技有限公司 Trademark image inquiry method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IMTIAZ MASUD ZIKO ET AL: "Supervised spectral subspace clustering for visual dictionary creation in the context of image classification", 《2015 3RD IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION (ACPR)》 *
张祯伟等: "改进视觉词袋模型的快速图像检索方法", 《计算机系统应用》 *

Also Published As

Publication number Publication date
CN108805148B (en) 2022-01-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant