CN106780450A - A kind of image significance detection method based on low-rank Multiscale Fusion - Google Patents

A kind of image significance detection method based on low-rank Multiscale Fusion

Info

Publication number
CN106780450A
CN106780450A
Authority
CN
China
Prior art keywords
image
significance
saliency
fusion
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611110790.4A
Other languages
Chinese (zh)
Inventor
冯伟
孙济洲
黄睿
刘烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201611110790.4A priority Critical patent/CN106780450A/en
Publication of CN106780450A publication Critical patent/CN106780450A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an image saliency detection method based on low-rank multi-scale fusion. Its technical features include: performing single-scale saliency detection on an input image; performing multi-scale saliency fusion on the single-scale detection results to obtain a fused saliency map; and applying saliency refinement to the fused saliency map to obtain the final co-saliency image. The invention applies a saliency detection method based on low-rank matrix recovery together with multi-scale saliency fusion, and, by means of a GMM-based co-saliency prior, extends multi-scale low-rank saliency detection to co-saliency detection across multiple images, so as to detect identical or similar regions appearing in several images. This resolves the difficulty of scale selection, yields more reliable saliency detection results, and helps further improve the processing capability of saliency detection.

Description

Image saliency detection method based on low-rank multi-scale fusion
Technical Field
The invention belongs to the technical field of computer vision detection, and particularly relates to an image saliency detection method based on low-rank multi-scale fusion.
Background
In the field of computer vision, salient-object detection methods fall into two broad categories: bottom-up, scene-driven models and top-down, expectation-driven models. Bottom-up approaches rely mainly on the information in the picture itself, while top-down approaches are driven by knowledge, expectations and goals. Many saliency detection methods have been proposed, such as RC and CA. Most of them perform saliency detection on single-scale pictures and have achieved good results. However, these methods share a common problem: when the object appears at a small scale in a natural scene with strong contrast, the salient object in the picture generally cannot be detected well. For this situation there are two general remedies: one is to keep searching for better salient-object detection models; the other is to use additional pictures that contain the same salient objects to assist the detection, which is called co-saliency detection.
The saliency detection method based on low-rank matrix recovery rests on the following prior assumption: salient objects are sparse within the whole image, so an image can be regarded as a background plus a few salient objects sparsely distributed over it. Since the image background has a low-rank character, a natural image can be decomposed into a low-rank matrix and a sparse matrix, which converts saliency detection into a low-rank matrix recovery problem.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an image saliency detection method based on low-rank multi-scale fusion, solving the difficulty of scale selection and the unreliability of existing detection methods.
The technical problem to be solved by the invention is realized by adopting the following technical scheme:
an image significance detection method based on low-rank multi-scale fusion comprises the following steps:
step 1, carrying out single-scale saliency detection on an input image;
step 2, carrying out multi-scale saliency fusion processing on the image subjected to single-scale saliency detection to obtain a fused saliency map;
and step 3, performing saliency refinement on the fused saliency map to obtain the final co-saliency image.
Further, the specific processing method in step 1 includes the following steps:
the method includes the steps of firstly, over-dividing an image into multi-scale division graphs and extracting features;
carrying out significance prior treatment by adopting a background prior method;
and thirdly, performing significance calculation.
Further, the method of step ① comprises: for an input image, the SLIC method is used to segment the input image into superpixels, and 122-dimensional position, color and texture features are extracted.
Further, the saliency computation of step ③ adopts the following saliency model:

$$SP(i)=\frac{\|\hat{S}(:,i)\|_2}{\sum_i\|\hat{S}(:,i)\|_2}=\frac{\sqrt{\sum_j\big(\hat{S}(j,i)\big)^2}}{\sum_i\sqrt{\sum_j\big(\hat{S}(j,i)\big)^2}}$$

where SP(i) is the saliency value of the ith superpixel, Ŝ(j,i) is the saliency value of the jth feature of the ith superpixel, and Ŝ(:,i) is the vector of saliency values of all features of the ith superpixel.
Further, the specific method of step 2 is as follows: firstly, segmenting the image at different scales; then, calculating a saliency map at each scale; finally, computing the fused saliency map by multiplying the saliency values at all scales by the corresponding adaptive weights.
Further, the adaptive weight is expressed as:

$$\omega_i=\frac{\exp(-E_i)}{Z}$$

where Z is a partition function;

the fused saliency map is calculated with the following formula:

$$S_{map}^{fuse}=\sum_i\omega_i\cdot S_{map}^{i}$$

where ω_i is the adaptive weight of the saliency map at the ith scale, S_map^i is the saliency map at the ith scale, and S_map^fuse is the saliency map after multi-scale fusion.
Further, the processing method of step 3 includes:
smoothing the current image so that it is spatially smooth;
and performing co-saliency detection on the image.
Further, the method for performing smoothness processing on the current image comprises the following steps: the method is realized by adopting the following energy functions:
wherein S isIRepresenting the saliency value of each super-pixel i,the probability of representing the background is shown,a probability of representing the foreground is represented,nei (i): represents the ith superNeighborhood of pixels, weight ωij: is defined as:
wherein,the L2 distance representing the color mean in the CIE-LAB color space,
further, the step of detecting the cooperative significance of the image comprises the following steps:
① Single salient Point detection for a given series of images Iset={I1,I2,...,InCalculating a single saliency map of each image by SiA single saliency map representing the ith image;
② binary segmentation using an adaptive threshold Ti: partitioning a single saliency map into binary masks Mi,TiIs defined as:
Ti=α·mean(Si)
wherein α is 2;
③ synergistic significance prior estimation GMM algorithm uses 5 Gaussian models to construct color model G for foreground pixels in the ith pictureiThen using the modulus M in the estimated j picturejThe foreground probability of (2); obtaining estimated values of n foreground probabilities for each picture, and then calculating a cooperative significance prior for each picture to obtain an average value of the estimated values;
fourthly, calculating the cooperative significance: and combining the cooperative significance priors into a single significance detection model to obtain a final cooperative significance image.
The invention has the advantages and positive effects that:
the saliency detection method based on low-rank matrix recovery and the multi-scale saliency fusion method are applied to saliency detection, and the multi-scale low-rank saliency detection is popularized to the multi-image cooperative saliency detection by applying the GMM-based cooperative saliency prior to detect the same or similar regions appearing in multiple images.
Drawings
FIG. 1 is a flow chart of an image saliency detection method based on low-rank multi-scale fusion according to the present invention;
FIG. 2 is a schematic diagram of the dimensionality and description of salient features extracted by the present invention;
FIG. 3 is a graph of the comparative effect of the present invention on the performance of the MSRA data set;
FIG. 4 is a graph of the comparative performance of the invention on the ECSSD dataset;
FIG. 5 is a graph of the comparative performance of the co-saliency method of the present invention on an image-pair dataset.
Detailed Description
The embodiments of the invention will be described in further detail below with reference to the accompanying drawings:
an image saliency detection method based on low-rank multi-scale fusion, as shown in fig. 1, includes the following steps:
step 1, single-scale saliency detection is carried out on an input image. The specific method comprises the following steps:
the method includes the steps of over-dividing an image into multi-scale division graphs and extracting features
For the input image, we use SLIC to segment it into superpixels and extract 122-dimensional features including location, color, texture, as shown in fig. 2. The specific method comprises the following steps: color features of 40 dimensions are extracted, 12 linear pyramids features in 4 directions under 3 dimensions and 36 Gabor features in 12 directions under 3 dimensions, and features of 31 dimensions are extracted by using HOG.
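For illustration, the following minimal Python sketch shows the superpixel and feature-extraction step, assuming scikit-image provides the SLIC implementation. Only the position and mean-color parts of the 122-dimensional feature vector are made concrete here; the pyramid, Gabor and HOG parts would be appended analogously. The function name is ours, not the patent's.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2lab

def superpixel_color_features(image, n_segments=200):
    """Segment `image` (H x W x 3, RGB in [0, 1]) into superpixels and
    return the label map plus per-superpixel position and mean LAB color."""
    labels = slic(image, n_segments=n_segments, compactness=10)
    lab = rgb2lab(image)
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = []
    for sp in np.unique(labels):
        mask = labels == sp
        pos = [ys[mask].mean() / h, xs[mask].mean() / w]  # normalized position
        color = lab[mask].mean(axis=0).tolist()           # mean L, a, b
        feats.append(pos + color)
    return labels, np.asarray(feats)  # one row per superpixel
```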
(2) Saliency prior processing

Several top-down cues have been used to further improve the performance of saliency detection. Representative among them are various saliency priors, such as the center prior, the object prior and the background prior, all of which constrain where salient objects are likely to appear in an image. In this method, the background prior is adopted for saliency prior processing.
(3) Saliency computation
Since low-rank analysis is helpful for saliency detection, we can divide an image into a redundant part and a salient part. The redundant part exhibits high regularity, while the salient part carries novelty. This decomposition can be expressed as a low-rank matrix recovery problem:

$$\min_{B,S}\ \operatorname{rank}(B)+\lambda\|S\|_0\quad\text{s.t.}\quad F=B+S$$

where F = [f_1, f_2, ..., f_N] is the feature matrix composed of N feature vectors, B is a low-rank matrix modeling the background, and S is a sparse matrix modeling saliency.

Since the above problem is NP-hard, we turn to the convex relaxation:

$$\min_{B,S}\ \|B\|_*+\lambda\|S\|_1\quad\text{s.t.}\quad F=B+S$$

However, decomposing F in the initial feature space usually yields poor salient-object detection. To obtain a good result, we first learn a transformation matrix T and left-multiply the feature matrix F by T to obtain the transformed feature matrix TF. In the transformed space, the features of the image background lie in a low-dimensional subspace and can therefore be represented by a low-rank matrix. The prior P is incorporated by right-multiplying TF by P, so the final saliency model is:

$$\min_{B,S}\ \|B\|_*+\lambda\|S\|_1\quad\text{s.t.}\quad TFP=B+S$$

Let Ŝ be the optimal solution of this problem; the saliency value SP(i) of the ith superpixel is then:

$$SP(i)=\frac{\|\hat{S}(:,i)\|_2}{\sum_i\|\hat{S}(:,i)\|_2}$$
and 2, carrying out multi-scale saliency fusion processing on the image subjected to the single-scale saliency detection to obtain a fusion saliency map.
Since saliency detection on a single-scale image may not be ideal, this patent adopts the following multi-scale fusion method to obtain a reliable detection result: first, the image is segmented at different scales; then, a saliency map is computed at each scale with the method above; finally, the fused saliency map is computed by multiplying the saliency values at all scales by the corresponding adaptive weights.
The saliency value of a superpixel is the average of all saliency values within its region. We represent the saliency values of all superpixels at a given scale as a row vector, so the saliency values of every superpixel across all scales form a saliency indication matrix SI. Ideally the detection result is consistent across all scales, in which case the rank of the indication matrix should be 1. We can therefore cast this as a low-rank matrix recovery problem:

$$\min_{L,E}\ \|L\|_*+\lambda\|E\|_1\quad\text{s.t.}\quad SI=L+E$$

where the optimal solution E captures the disagreement among the multi-scale saliency detection results. Summing the absolute values of the elements of each row of E gives a vector [E_1, ..., E_n], where n is the number of scales. The larger E_i is, the more the ith saliency map disagrees with the others, so the corresponding saliency map should receive a small weight. The adaptive weight is expressed as:

$$\omega_i=\frac{\exp(-E_i)}{Z}$$

where Z is a partition function. Finally, the fused saliency map can be calculated with the following formula:

$$S_{map}^{fuse}=\sum_i\omega_i\cdot S_{map}^{i}$$
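A compact sketch of this fusion step, reusing the rpca() routine sketched earlier. The softmax-style weight exp(−E_i)/Z is an assumption consistent with Z being a partition function and with larger E_i yielding smaller weight.

```python
import numpy as np

def fuse_multiscale(SI):
    """SI: n_scales x n_superpixels saliency indication matrix."""
    _, E = rpca(SI)              # E captures cross-scale disagreement
    Ei = np.abs(E).sum(axis=1)   # row sums: inconsistency of each scale
    w = np.exp(-Ei)
    w /= w.sum()                 # normalization by Z, the partition function
    return w @ SI                # S_map_fuse = sum_i w_i * S_map_i
```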
and 3, performing significance thinning processing on the fusion significance map subjected to the multi-scale significance fusion processing to obtain a final significance image. The method specifically comprises the following steps:
(1) Smoothing the current image so that it is spatially smooth

After the fusion is complete, we consider the smoothness between neighboring superpixels and use an energy function to optimize the fused saliency map:

$$E=\sum_i\omega_i^{bg}s_i^2+\sum_i\omega_i^{fg}(s_i-1)^2+\sum_{i,\,j\in Nei(i)}\omega_{ij}(s_i-s_j)^2$$

where s_i is the saliency value of superpixel i, ω_i^{bg} is its background probability, ω_i^{fg} is its foreground probability, and Nei(i) is the neighborhood of the ith superpixel. The weight ω_ij is defined as:

$$\omega_{ij}=\exp\!\left(-\frac{\|c_i-c_j\|_2^2}{2\sigma^2}\right)$$

where ‖c_i − c_j‖_2 is the L2 distance between the mean colors of superpixels i and j in the CIE-LAB color space and σ = 10.
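Because this energy is quadratic in s, its minimizer has a closed form obtained by solving a linear system. A minimal sketch under the assumption of a symmetric neighbor-weight matrix W; the σ = 10 default mirrors our reading of the constant in the text.

```python
import numpy as np

def refine_saliency(w_bg, w_fg, W):
    """w_bg, w_fg: length-n background/foreground probabilities;
    W: symmetric n x n matrix of weights w_ij (zero for non-neighbors).
    Setting dE/ds = 0 gives (diag(w_bg + w_fg) + 2(D - W)) s = w_fg."""
    D = np.diag(W.sum(axis=1))
    A = np.diag(w_bg + w_fg) + 2.0 * (D - W)  # D - W is the graph Laplacian
    return np.clip(np.linalg.solve(A, w_fg), 0.0, 1.0)

def neighbor_weight(c_i, c_j, sigma=10.0):
    # w_ij = exp(-||c_i - c_j||^2 / (2 sigma^2)) over mean CIE-LAB colors
    d2 = np.sum((np.asarray(c_i, float) - np.asarray(c_j, float)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))
```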
the method comprises the following steps of detecting a single salient point, segmenting two values, estimating a collaborative saliency priori and calculating and processing the collaborative saliency, and is described as follows:
detection of single salient point
For a given series of images Iset={I1,I2,...,InCalculating a single saliency map for each image using the method mentioned above, using SiA single saliency map representing the ith image.
② Binary segmentation

We use an adaptive threshold T_i to partition each single saliency map into a binary mask M_i, where T_i is defined as:

T_i = α·mean(S_i)

with α = 2 in our experiments. Pixels or superpixels whose saliency value exceeds this adaptive threshold are foreground; the others are background.
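A one-line sketch of this thresholding step (the function name is ours):

```python
import numpy as np

def binary_mask(S_i, alpha=2.0):
    # M_i: foreground where saliency exceeds T_i = alpha * mean(S_i)
    return S_i > alpha * S_i.mean()
```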
③ Co-saliency prior estimation

We use a GMM to obtain the co-saliency prior as follows: the GMM algorithm uses 5 Gaussian components to build a color model G_i from the foreground pixels of the ith picture; G_i is then used to estimate the foreground probability of the jth picture under its mask M_j. Each picture thereby obtains n estimates of its foreground probability, and its co-saliency prior is computed as the average of these estimates.
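A hedged sketch of this prior estimation, using scikit-learn's GaussianMixture as a stand-in for the 5-component GMM. Normalizing the color-model density to [0, 1] as a proxy for foreground probability, and scoring all pixels of each image rather than only its mask, are our simplifying assumptions; the text does not specify these details.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def co_saliency_prior(images, masks):
    """images: list of n (H, W, 3) float arrays; masks: binary masks M_i.
    Fits one 5-component GMM G_i per image on its foreground pixels, then
    averages the n foreground-likelihood maps for each image."""
    gmms = [GaussianMixture(n_components=5).fit(img[m])  # img[m]: (k, 3)
            for img, m in zip(images, masks)]
    priors = []
    for img in images:
        pix = img.reshape(-1, 3)
        maps = []
        for g in gmms:
            d = np.exp(g.score_samples(pix))             # GMM density
            maps.append(d / (d.max() + 1e-12))           # normalize to [0, 1]
        priors.append(np.mean(maps, axis=0).reshape(img.shape[:2]))
    return priors
```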
④ Co-saliency computation

Finally, we combine the co-saliency priors with the single-image saliency detection model to obtain the final co-saliency image.
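The text does not spell out the combination rule. One plausible sketch, under the assumption that the prior multiplicatively reweights each single-image saliency map (it could equally enter the low-rank model as the prior matrix P):

```python
def co_saliency(single_maps, priors):
    # assumed combination: the co-saliency prior reweights each single map
    return [s * p for s, p in zip(single_maps, priors)]
```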
Through the above steps, salient objects can be detected by low-rank analysis of multi-scale superpixels.
FIG. 3 compares performance on the MSRA dataset: relative to the prior art, the PR curve and the ROC curve of our method are both the best, the MAE is the smallest, and the AUC is the highest on this dataset. FIG. 4 compares performance on the ECSSD dataset, where again our method attains the best PR and ROC curves, the smallest MAE, and the highest AUC. FIG. 5 compares the co-saliency method on an image-pair dataset, where the F-measure, precision and recall of our method are the highest and the MAE is the smallest. Compared with the prior art, therefore, the detection performance of the method is clearly improved across different datasets.
It should be emphasized that the embodiments described herein are illustrative rather than restrictive, and thus the present invention is not limited to the embodiments described in the detailed description, but also includes other embodiments that can be derived from the technical solutions of the present invention by those skilled in the art.

Claims (9)

1. An image significance detection method based on low-rank multi-scale fusion is characterized by comprising the following steps:
step 1, carrying out single-scale saliency detection on an input image;
step 2, carrying out multi-scale saliency fusion processing on the image subjected to single-scale saliency detection to obtain a fused saliency map;
and step 3, performing saliency refinement on the fused saliency map to obtain a final co-saliency image.
2. The method for detecting the image significance based on the low-rank multi-scale fusion as claimed in claim 1, wherein: the specific processing method of the step 1 comprises the following steps:
the method includes the steps of firstly, over-dividing an image into multi-scale division graphs and extracting features;
carrying out significance prior treatment by adopting a background prior method;
and thirdly, performing significance calculation.
3. The method for detecting the image significance based on the low-rank multi-scale fusion as claimed in claim 2, wherein: the method of step ① comprises: for an input image, the SLIC method is used to segment the input image into superpixels, and 122-dimensional position, color and texture features are extracted.
4. The method for detecting the image significance based on the low-rank multi-scale fusion as claimed in claim 2, wherein: the saliency computation of step ③ adopts the following saliency model:

$$SP(i)=\frac{\|\hat{S}(:,i)\|_2}{\sum_i\|\hat{S}(:,i)\|_2}=\frac{\sqrt{\sum_j\big(\hat{S}(j,i)\big)^2}}{\sum_i\sqrt{\sum_j\big(\hat{S}(j,i)\big)^2}}$$

where SP(i) is the saliency value of the ith superpixel, Ŝ(j,i) is the saliency value of the jth feature of the ith superpixel, and Ŝ(:,i) is the vector of saliency values of all features of the ith superpixel.
5. The method for detecting the image significance based on the low-rank multi-scale fusion as claimed in claim 1, wherein: the specific method of step 2 comprises: firstly, segmenting the image at different scales; then, calculating a saliency map at each scale; finally, computing the fused saliency map by multiplying the saliency values at all scales by the corresponding adaptive weights.
6. The method for detecting the saliency of images based on low-rank multi-scale fusion as claimed in claim 5, wherein: the adaptive weights are expressed as follows:

$$\omega_i=\frac{\exp(-E_i)}{Z}$$

where Z is a partition function;

the fused saliency map is calculated with the following formula:

$$S_{map}^{fuse}=\sum_i\omega_i\cdot S_{map}^{i}$$

where ω_i is the adaptive weight of the saliency map at the ith scale, S_map^i is the saliency map at the ith scale, and S_map^fuse is the saliency map after multi-scale fusion.
7. The method for detecting the image significance based on the low-rank multi-scale fusion as claimed in claim 1, wherein: the processing method of the step 3 comprises the following steps:
smoothing the current image so that it is spatially smooth;
and performing co-saliency detection on the image.
8. The method for detecting the saliency of images based on low-rank multi-scale fusion as claimed in claim 7, wherein: the smoothing of the current image is realized with the following energy function:

$$E=\sum_i\omega_i^{bg}s_i^2+\sum_i\omega_i^{fg}(s_i-1)^2+\sum_{i,\,j\in Nei(i)}\omega_{ij}(s_i-s_j)^2$$

where s_i is the saliency value of superpixel i, ω_i^{bg} is its background probability, ω_i^{fg} is its foreground probability, and Nei(i) is the neighborhood of the ith superpixel; the weight ω_ij is defined as:

$$\omega_{ij}=\exp\!\left(-\frac{\|c_i-c_j\|_2^2}{2\sigma^2}\right)$$

where ‖c_i − c_j‖_2 is the L2 distance between the mean colors in the CIE-LAB color space and σ = 10.
9. The method for detecting the saliency of images based on low-rank multi-scale fusion as claimed in claim 7, wherein: the method for detecting the collaborative saliency of the image comprises the following steps:
① single-image saliency detection: for a given series of images I_set = {I_1, I_2, ..., I_n}, compute a single saliency map for each image, with S_i denoting the single saliency map of the ith image;

② binary segmentation: use an adaptive threshold T_i to partition each single saliency map into a binary mask M_i, where T_i is defined as:

T_i = α·mean(S_i)

with α = 2;

③ co-saliency prior estimation: the GMM algorithm uses 5 Gaussian components to build a color model G_i from the foreground pixels of the ith picture; G_i is then used to estimate the foreground probability of the jth picture under its mask M_j; each picture thereby obtains n estimates of its foreground probability, and its co-saliency prior is computed as the average of these estimates;

④ co-saliency computation: the co-saliency priors are combined into the single-image saliency detection model to obtain the final co-saliency image.
CN201611110790.4A 2016-12-06 2016-12-06 A kind of image significance detection method based on low-rank Multiscale Fusion Pending CN106780450A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611110790.4A CN106780450A (en) 2016-12-06 2016-12-06 A kind of image significance detection method based on low-rank Multiscale Fusion


Publications (1)

Publication Number Publication Date
CN106780450A 2017-05-31

Family

ID=58874396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611110790.4A Pending CN106780450A (en) 2016-12-06 2016-12-06 A kind of image significance detection method based on low-rank Multiscale Fusion

Country Status (1)

Country Link
CN (1) CN106780450A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390279A (en) * 2013-07-25 2013-11-13 中国科学院自动化研究所 Target prospect collaborative segmentation method combining significant detection and discriminant study
CN103700091A (en) * 2013-12-01 2014-04-02 北京航空航天大学 Image significance object detection method based on multiscale low-rank decomposition and with sensitive structural information
CN103914834A (en) * 2014-03-17 2014-07-09 上海交通大学 Significant object detection method based on foreground priori and background priori
CN104392463A (en) * 2014-12-16 2015-03-04 西安电子科技大学 Image salient region detection method based on joint sparse multi-scale fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RUI HUANG et al.: "SALIENCY AND CO-SALIENCY DETECTION BY LOW-RANK MULTISCALE FUSION", 2015 IEEE International Conference on Multimedia and Expo *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107527348B (en) * 2017-07-11 2020-10-30 湖州师范学院 Significance detection method based on multi-scale segmentation
CN107527348A (en) * 2017-07-11 2017-12-29 湖州师范学院 Conspicuousness detection method based on multi-scale division
CN107909078A (en) * 2017-10-11 2018-04-13 天津大学 Conspicuousness detection method between a kind of figure
CN107909078B (en) * 2017-10-11 2021-04-16 天津大学 Inter-graph significance detection method
CN108437933A (en) * 2018-02-10 2018-08-24 深圳智达机械技术有限公司 A kind of vehicle startup system
CN108437933B (en) * 2018-02-10 2021-06-08 聊城市敏锐信息科技有限公司 Automobile starting system
CN108549891A (en) * 2018-03-23 2018-09-18 河海大学 Multi-scale diffusion well-marked target detection method based on background Yu target priori
CN108549891B (en) * 2018-03-23 2019-10-01 河海大学 Multi-scale diffusion well-marked target detection method based on background Yu target priori
CN108961196A (en) * 2018-06-21 2018-12-07 华中科技大学 A kind of 3D based on figure watches the conspicuousness fusion method of point prediction attentively
CN108961196B (en) * 2018-06-21 2021-08-20 华中科技大学 Significance fusion method for 3D fixation point prediction based on graph
CN109325507B (en) * 2018-10-11 2020-10-16 湖北工业大学 Image classification method and system combining super-pixel saliency features and HOG features
CN109325507A (en) * 2018-10-11 2019-02-12 湖北工业大学 A kind of image classification algorithms and system of combination super-pixel significant characteristics and HOG feature
CN109978819A (en) * 2019-01-22 2019-07-05 安徽海浪智能技术有限公司 A method of segmentation retinal vessel is detected based on low scale blood vessel
CN116994006A (en) * 2023-09-27 2023-11-03 江苏源驶科技有限公司 Collaborative saliency detection method and system for fusing image saliency information
CN116994006B (en) * 2023-09-27 2023-12-08 江苏源驶科技有限公司 Collaborative saliency detection method and system for fusing image saliency information

Similar Documents

Publication Publication Date Title
CN106780450A (en) A kind of image significance detection method based on low-rank Multiscale Fusion
CN108256562B (en) Salient target detection method and system based on weak supervision time-space cascade neural network
CN105243670B (en) A kind of sparse and accurate extracting method of video foreground object of low-rank Combined expression
CN111340824B (en) Image feature segmentation method based on data mining
CN110503613B (en) Single image-oriented rain removing method based on cascade cavity convolution neural network
CN112347861B (en) Human body posture estimation method based on motion feature constraint
CN103745468B (en) Significant object detecting method based on graph structure and boundary apriority
CN101477690B (en) Method and device for object contour tracking in video frame sequence
JP5766620B2 (en) Object region detection apparatus, method, and program
CN110458192B (en) Hyperspectral remote sensing image classification method and system based on visual saliency
CN111046868B (en) Target significance detection method based on matrix low-rank sparse decomposition
CN104966286A (en) 3D video saliency detection method
CN104537686B (en) Tracking and device based on target space-time consistency and local rarefaction representation
CN107506792B (en) Semi-supervised salient object detection method
US20160078634A1 (en) Methods and systems for image matting and foreground estimation based on hierarchical graphs
CN103093470A (en) Rapid multi-modal image synergy segmentation method with unrelated scale feature
CN105654475A (en) Image saliency detection method and image saliency detection device based on distinguishable boundaries and weight contrast
CN104657951A (en) Multiplicative noise removal method for image
CN108830320B (en) Hyperspectral image classification method based on identification and robust multi-feature extraction
CN111539434B (en) Infrared weak and small target detection method based on similarity
CN113421210A (en) Surface point cloud reconstruction method based on binocular stereo vision
CN109101978B (en) Saliency target detection method and system based on weighted low-rank matrix recovery model
Takahashi et al. Rank minimization approach to image inpainting using null space based alternating optimization
CN110136164B (en) Method for removing dynamic background based on online transmission transformation and low-rank sparse matrix decomposition
CN108765384B (en) Significance detection method for joint manifold sequencing and improved convex hull

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170531