CN109035296A - A kind of improved moving objects in video detection method - Google Patents
A kind of improved moving objects in video detection method
- Publication number
- CN109035296A (publication) · CN201810686019.4A / CN201810686019A (application)
- Authority
- CN
- China
- Prior art keywords
- pixel
- background
- sample
- threshold value
- template
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis; G06T7/20—Analysis of motion; G06T7/215—Motion-based segmentation
- G06T5/00—Image enhancement or restoration; G06T5/80—Geometric correction
- G06T7/00—Image analysis; G06T7/10—Segmentation; Edge detection; G06T7/136—involving thresholding
- G06T7/00—Image analysis; G06T7/10—Segmentation; Edge detection; G06T7/194—involving foreground-background segmentation
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10016—Video; Image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an improved video moving object detection method. The steps are: 1) background template initialization: the number of times each neighborhood pixel of pixel x is repeatedly sampled into the background sample set is counted; once the count of a pixel exceeds 3, if that pixel is drawn again, the pixel at position x itself is stored in the template samples instead, and that pixel is no longer saved to the background template; 2) pixel classification with a dynamic classification threshold R: a difference image D(x, y) is obtained by the frame difference method, the segmentation threshold t of the difference image is computed with the maximum between-class variance (Otsu) method, and t is used as the dynamic classification threshold R; 3) background model update; 4) a foreground pixel lifespan is added. With the method of the invention, the detected moving object contours are more complete and clear, and the "ghost" phenomenon can be effectively suppressed.
Description
Technical field
The invention belongs to the field of video analysis technology and relates to an improved video moving object detection method.
Background technique
Computer vision has developed rapidly in modern science and technology, and moving object detection has become a hot research direction in computer vision. Moving object detection is widely used, with applications in aerospace, intelligent transportation, security monitoring, and other fields. In intelligent surveillance, for example, the monitoring system can analyze the captured video and automatically detect, classify, or raise alarms about people in the video, which reduces the consumption of manpower and material resources and improves the quality of monitoring; studying detection methods with good robustness is therefore of great significance.
Common moving object detection algorithms include the frame difference method, the optical flow method, and background subtraction. The frame difference method is simple in principle, but it has an obvious drawback: it can only extract the boundary of a moving object and cannot obtain the complete region of the object. The optical flow method needs to solve the optical flow equations in practical applications, must satisfy certain assumptions, and involves a large amount of computation, so it is less widely applied. Background subtraction establishes a background image and obtains the moving region by matching the current frame against the background model; it can detect moving objects accurately, but it places high demands on background modeling, since the background model must be constructed accurately and updated automatically over time.
Summary of the invention
The purpose of the present invention is to propose an improved video moving object detection method, solving the problems of the prior art that only the boundary of a moving object can be extracted and its complete region cannot be obtained, that the amount of computation is large, or that the requirements on background modeling are high.
The technical scheme adopted by the invention is an improved video moving object detection method, implemented according to the following steps:
Step 1: background template initialization
A pixel to be detected is judged to be a background point if it matches the background template; otherwise it is a motion point.
First, n empty background sample frames are established, the same size as the original image, and randomly sampled pixels are stored in the background sample set corresponding to each pixel.
v(x) denotes the value of pixel x and v_i denotes the i-th background sample at that position, so the n background samples of position x form the set M(x) = {v_1, v_2, ..., v_n}.
During background model initialization, the number of times each neighborhood pixel of pixel x is repeatedly sampled into the background sample set is counted; once the count of a pixel exceeds 3, if that pixel is drawn again, the pixel at position x itself is stored directly in the template samples, and that pixel is no longer saved to the background template.
Step 2: pixel classification
A dynamic classification threshold R is used: a difference image D(x, y) is obtained by the frame difference method, the segmentation threshold t of the difference image is computed with the maximum between-class variance (Otsu) method, and t is used as the dynamic classification threshold R.
Step 3: background model update
When pixel v(x) is judged to be a background pixel, one sample value in M(x) is randomly selected and replaced with v(x); to preserve spatial consistency within the pixel neighborhood, when the background samples of v(x) are updated, the background samples of the neighborhood pixels of v(x) are updated in the same way.
Step 4: adding a foreground pixel lifespan
Once a pixel has been determined to be a foreground pixel, its temporal information is saved; if the temporal information of the pixel exceeds a set time limit, the pixel is considered to have been misjudged as foreground and is added into the random update of the background.
The beneficial effects of the invention are:
1) The present invention is based on the idea of the ViBe (visual background extractor) algorithm and is an improvement of the ViBe algorithm; the model is established from the first frame of the image sequence, so moving objects can be detected quickly. With three improvements applied to the original algorithm, the improved ViBe algorithm detects moving object contours that are more complete and clear.
2) The ViBe algorithm treats object detection as a classification problem: for each pixel in the current image, whether the pixel is a foreground pixel or a background pixel is judged mainly from the number of intersections between the pixel and its sample set. At the same time, because a pixel lifespan mechanism is used, the "ghost" phenomenon can be effectively suppressed.
3) The method of the present invention post-processes the detected targets with median filtering and a morphological closing operation, so that the detected object contours are complete and background noise points are reduced.
Detailed description of the invention
Fig. 1 shows the 8-neighborhood sampling range of the original ViBe algorithm;
Fig. 2 shows the 20-neighborhood sampling range used by the improved ViBe algorithm of the present invention;
Fig. 3 illustrates the pixel classification principle of the original ViBe algorithm;
Fig. 4 illustrates the pixel classification principle of the improved ViBe algorithm;
Fig. 5 compares the repetition-count statistics on frame 942 of PETS2006, where Fig. 5a is the original image, Fig. 5b is the result of the original ViBe algorithm, and Fig. 5c is the result of the improved ViBe algorithm of the present invention;
Fig. 6 shows the detection result on frame 420 of Highway, where Fig. 6a is the original image, Fig. 6b is the result of the original ViBe algorithm, and Fig. 6c is the result of the improved ViBe algorithm of the present invention;
Fig. 7 shows the detection result on frame 106 of Intelligentroom, where Fig. 7a is the original image, Fig. 7b is the result of the original ViBe algorithm, and Fig. 7c is the result of the improved ViBe algorithm of the present invention;
Fig. 8 shows the detection result on frame 110 of Laboratory, where Fig. 8a is the original image, Fig. 8b is the result of the original ViBe algorithm, and Fig. 8c is the result of the improved ViBe algorithm of the present invention;
Fig. 9 shows the detection result on frame 655 of PETS2006, where Fig. 9a is the original image, Fig. 9b is the result of the original ViBe algorithm, and Fig. 9c is the result of the improved ViBe algorithm of the present invention;
Fig. 10 compares "ghost" elimination on frame 171 of HighwayII, where Fig. 10a is the original image, Fig. 10b is the result of the original ViBe algorithm, and Fig. 10c is the result of the improved ViBe algorithm of the present invention.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and embodiments.
The video images in all drawings of the present invention come from standard moving object detection datasets (public video databases).
The original ViBe algorithm consists of three main parts:
1) Background model initialization. The ViBe algorithm initializes the background model from the first frame of the video. When establishing the background model, n empty background sample frames are first created, the same size as the original video image. Fig. 1 is a schematic of the eight-neighborhood model of a pixel.
From the eight-neighborhood of each pixel of the first frame, pixels are randomly selected and stored in the empty background sample set, where v(x) denotes the gray value of pixel x in the image, v_i denotes the i-th sample, and the n background samples at position x are expressed as M(x) = {v_1, v_2, ..., v_n}.
2) Pixel classification. Fig. 3 is a schematic of the pixel classification of the original ViBe algorithm. Define S_R(v(x)) as the sphere of radius R centered on v(x). The Euclidean distances between v(x) and the n samples in M(x) are computed; if the number of intersections of v(x) and M(x) satisfies the given threshold, v(x) is judged to be a background pixel, otherwise v(x) is judged to be a foreground pixel.
3) Background model update. When the background model is updated and v(x) is judged to be a background pixel, a randomly selected sample value in M(x) is replaced with v(x); to guarantee spatial consistency within the pixel neighborhood, when the background samples of v(x) are updated, the background samples of the neighborhood pixels of v(x) are updated in the same way.
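For reference, the following Python/NumPy sketch summarizes the original ViBe initialization and classification just described, using the commonly cited defaults of n = 20 samples and a fixed radius R = 20; the classification follows the criterion stated later in this description, i.e. v(x) is background when the number of matching samples is greater than 2 (the published ViBe paper itself requires at least 2 matches). It is an illustrative reading of the published ViBe scheme, not code from the patent; the update step is sketched separately under step 3 below.

```python
import numpy as np

N_SAMPLES, R_FIXED, MIN_MATCHES = 20, 20, 2
OFFSETS_8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def init_vibe(first_frame):
    """Fill M(x) with N_SAMPLES random picks from the 8-neighbourhood of each pixel x."""
    h, w = first_frame.shape
    padded = np.pad(first_frame, 1, mode='edge')
    samples = np.zeros((h, w, N_SAMPLES), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            for i in range(N_SAMPLES):
                dy, dx = OFFSETS_8[np.random.randint(8)]
                samples[y, x, i] = padded[y + 1 + dy, x + 1 + dx]
    return samples

def classify_vibe(frame, samples):
    """v(x) is background when more than MIN_MATCHES samples of M(x) lie within the
    fixed radius R_FIXED of v(x); otherwise it is foreground (mask value 255)."""
    dist = np.abs(samples.astype(np.int16) - frame[..., None].astype(np.int16))
    matches = (dist < R_FIXED).sum(axis=2)
    return np.where(matches > MIN_MATCHES, 0, 255).astype(np.uint8)
```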
The method of the present invention makes improvements in three aspects to the original ViBe algorithm: improvement 1, improving the background template initialization; improvement 2, enlarging the sampling range; improvement 3, replacing the fixed threshold with a dynamic threshold.
The details of improvement 1 are as follows:
In the original ViBe algorithm, the background model is established from the first frame of the video: 20 random repeated samples are drawn from the eight-neighborhood of position x and stored in the sample set of the background model. Because the background sample set of the original algorithm is randomly sampled from the eight neighborhood pixels of position x, some pixels are likely to be sampled repeatedly while other pixels are not sampled at all, so the initialized background model contains repeated samples and cannot accurately represent the background at position x. Due to the randomness, a pixel may be saved several times into the background sample space of position x, or several times into the background sample space of a neighboring position such as x+1, which causes pixels to be selected repeatedly in the initialized template.
The present invention adds sample-collection counting to the background model initialization of the original ViBe algorithm, mainly counting how many times each pixel in the neighborhood of x is repeatedly saved into the background sample set. When the count of a pixel exceeds 3, if sampling continues and that pixel is collected again, the pixel at position x itself is saved directly into the template samples, and that pixel is no longer saved into the background sample set.
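A minimal sketch of this counting rule during initialization is given below (Python/NumPy). Only the rule itself (once a neighborhood pixel has been drawn more than 3 times, store the value of position x instead) is taken from the text above; the generic `offsets` parameter and the per-pixel counter dictionary are implementation assumptions.

```python
import numpy as np

def init_with_repeat_counting(first_frame, offsets, n_samples=20, max_repeats=3):
    """Initialize M(x) from `offsets` around each pixel x, but once a neighbour has
    already been drawn more than `max_repeats` times, store the value at x itself."""
    h, w = first_frame.shape
    pad = max(max(abs(dy), abs(dx)) for dy, dx in offsets)
    padded = np.pad(first_frame, pad, mode='edge')
    samples = np.zeros((h, w, n_samples), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            counts = {}                                   # how often each neighbour was drawn
            for i in range(n_samples):
                dy, dx = offsets[np.random.randint(len(offsets))]
                counts[(dy, dx)] = counts.get((dy, dx), 0) + 1
                if counts[(dy, dx)] > max_repeats:
                    samples[y, x, i] = first_frame[y, x]  # fall back to the pixel x itself
                else:
                    samples[y, x, i] = padded[y + pad + dy, x + pad + dx]
    return samples
```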
The details of improvement 2 are as follows:
In the original ViBe algorithm, the background model is initialized by random sampling from the eight-neighborhood of position x, i.e. 20 repeated samples drawn from 9 pixels; this inevitably causes samples to be chosen repeatedly, and the probability of misclassification rises during pixel classification.
After the improvement of the present invention, sampling is performed from the 20-neighborhood of v(x); the 20-neighborhood model is shown in Fig. 2. Drawing 20 random samples from 21 pixels effectively avoids repeated acquisition of the same pixel and reduces memory and computation. When samples are repeatedly selected into the background model, the influence can be eliminated by continuously updating the model over time, but it seriously affects the accuracy of detection in the initial frames. Enlarging the neighborhood is therefore conducive to improving the accuracy of moving object detection.
The details of improvement 3 are as follows:
When the original ViBe algorithm classifies a pixel v(x), the decision is made from the number of intersections between the sphere S_R(v(x)) and the background sample set M(x). When counting the intersections, the radius R of S_R(v(x)) is a fixed value, e.g. R = 20 in the original ViBe algorithm; that is, the threshold for judging the intersection of v(x) and M(x) is a constant, as shown in Fig. 3. In real life, however, the background in video is complex and changeable, and a fixed threshold R set from experience cannot satisfy all situations; it causes misclassification of v(x) and makes moving object detection inaccurate. The fixed threshold of the original ViBe algorithm therefore needs to be improved so that the algorithm adapts to multiple motion scenes.
After the improvement of the present invention, a dynamic threshold R is used: a difference image D(x, y) is obtained with the frame difference method, the segmentation threshold t of the difference image is computed with the maximum between-class variance (Otsu) method, and t is used as the dynamic classification threshold R. Let F(x, y) denote the current frame image and f(x, y) the previous frame image; in the method of the present invention, D(x, y) is obtained as the difference of F(x, y) and f(x, y).
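A minimal sketch of this dynamic threshold computation with OpenCV is shown below, assuming 8-bit grayscale frames: D(x, y) is formed as the absolute frame difference, and Otsu's maximum between-class variance method returns the segmentation threshold t, which is then used as R.

```python
import cv2

def dynamic_threshold(curr_gray, prev_gray):
    """Return R = t, the Otsu threshold of the frame-difference image D(x, y)."""
    diff = cv2.absdiff(curr_gray, prev_gray)              # D(x, y) = |F(x, y) - f(x, y)|
    t, _ = cv2.threshold(diff, 0, 255,
                         cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # maximum between-class variance
    return t
```

The returned value can then be passed as the radius R when classifying the pixels of the current frame in step 2.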
Based on the above analysis of the principles, the improved video moving object detection method of the present invention is implemented according to the following steps:
Step 1: background template initialization
This step of the present invention is the process of initializing the background template: a pixel to be detected is judged to be a background point if it matches the background template, otherwise it is a motion point.
First, n empty background sample frames are established, the same size as the original image; in this step n is set to 20. As shown in Fig. 2, random samples are drawn from the 21-pixel neighborhood of each pixel of the first frame and stored in the background sample set corresponding to that pixel.
v(x) denotes the value of pixel x and v_i denotes the i-th background sample at that position, so the n background samples of position x form the set M(x) = {v_1, v_2, ..., v_n}.
During background model initialization, the number of times each neighborhood pixel of pixel x is repeatedly sampled into the background sample set is counted; once the count of a pixel exceeds 3, if that pixel is drawn again, the pixel at position x itself is stored directly in the template samples, and that pixel is no longer saved to the background template.
Fig. 5a shows the original image, Fig. 5b the detection result without repetition-count statistics, and Fig. 5c the detection result of the present invention with repetition-count statistics added. Detection starts from frame 932 of the PETS2006 dataset as the first frame. Without the sample-collection count, the moving figure marked in gray is not detected in frame 942, as shown in Fig. 5b. With the improved sample-collection counting of the present invention, a background sample set closer to the real background can be established, and the rough contour of the gray-marked moving figure is already detected in frame 942, as shown in Fig. 5c. The comparison of the experimental results shows that when sample repetition counting is added, the background model can be established quickly and realistically, the detection result is accurate, and moving targets far from the camera can be detected accurately and quickly.
Step 2: pixel classification
Fig. 3 shows the pixel classification principle of the original ViBe algorithm: define S_R(v(x)) as the sphere centered on v(x) with the threshold R as its radius. The number of intersections of v(x) with the n samples in M(x) is computed; if the number of intersections of v(x) and M(x) is greater than 2, v(x) is judged to be a background pixel, otherwise v(x) is judged to be a foreground pixel.
This step of the present invention replaces the fixed classification threshold R of the original ViBe algorithm with a dynamic classification threshold R: a difference image D(x, y) is obtained with the frame difference method, the segmentation threshold t of the difference image is computed with the maximum between-class variance (Otsu) method, and t is used as the dynamic classification threshold R.
Fig. 4 shows the pixel classification principle in this step of the present invention; the solid circle indicates the computed dynamic classification threshold R. As can be seen from Fig. 4, with the dynamic classification threshold the number of intersections of v(x) and M(x) is 3, whereas with the original ViBe algorithm the number of intersections of v(x) and M(x) is 2.
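As a sketch of this step, the classification can be written as below (Python/NumPy), reusing the sample array layout from the earlier sketches and the criterion from the text that v(x) is background when more than 2 samples of M(x) fall within the dynamic radius R.

```python
import numpy as np

def classify_pixels(frame, samples, r, min_matches=2):
    """v(x) is background (0) when more than `min_matches` samples of M(x) lie within
    the dynamic radius r (the Otsu threshold t); otherwise it is foreground (255)."""
    dist = np.abs(samples.astype(np.int16) - frame[..., None].astype(np.int16))
    matches = (dist < r).sum(axis=2)
    return np.where(matches > min_matches, 0, 255).astype(np.uint8)
```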
Fig. 6, Fig. 7, Fig. 8, and Fig. 9 show four groups of experimental results, where Fig. 6a, Fig. 7a, Fig. 8a, and Fig. 9a are the original images, Fig. 6b, Fig. 7b, Fig. 8b, and Fig. 9b are the detection results of the original ViBe algorithm for the corresponding images, and Fig. 6c, Fig. 7c, Fig. 8c, and Fig. 9c are the detection results of the improved ViBe algorithm of the present invention. According to the experimental results, when the original ViBe algorithm detects moving targets, the fixed classification threshold R may cause some pixels to be misclassified; a large number of white pixels are clearly visible in the background of Fig. 6b, Fig. 7b, Fig. 8b, and Fig. 9b, and these are misclassified foreground pixels. This step of the present invention obtains the dynamic threshold with the frame difference method, which better represents the segmentation threshold between foreground and background of the current frame, so the effect is better, as in Fig. 6c, Fig. 7c, Fig. 8c, and Fig. 9c: the detected contours are clearer and the background of the detection result is clean. In addition, from the two groups of experiments on Laboratory and PETS2006 it is also obvious that the object contours detected after the improvement are more accurate and the background noise is lower.
Step 3: background model update
When the original ViBe algorithm updates the background model and pixel v(x) is judged to be a background pixel, a randomly selected sample value in M(x) is replaced with v(x); to preserve spatial consistency within the pixel neighborhood, when the background samples of v(x) are updated, the background samples of the neighborhood pixels of v(x) are updated in the same way.
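A minimal sketch of this update rule is given below. The text specifies only the random sample replacement and the neighbor update; which neighborhood the neighbor is drawn from, and whether a random time-subsampling factor (as in the published ViBe) is applied, are not stated, so the sketch picks an 8-neighbor and updates every background pixel.

```python
import numpy as np

OFFSETS_8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def update_background(frame, foreground, samples):
    """For each pixel judged background (mask value 0), overwrite one randomly chosen
    sample of M(x) with v(x), and do the same for one randomly chosen neighbour of x."""
    h, w, n = samples.shape
    ys, xs = np.where(foreground == 0)
    for y, x in zip(ys, xs):
        samples[y, x, np.random.randint(n)] = frame[y, x]
        dy, dx = OFFSETS_8[np.random.randint(8)]          # neighbour choice is an assumption
        ny, nx = min(max(y + dy, 0), h - 1), min(max(x + dx, 0), w - 1)
        samples[ny, nx, np.random.randint(n)] = frame[y, x]
    return samples
```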
Step 4: adding a foreground pixel lifespan
A "foreground pixel lifespan" is added to the original ViBe algorithm: once a pixel has been determined to be a foreground pixel, its temporal information is saved; if the temporal information of the pixel exceeds a set time limit, the pixel is considered to have been misjudged as foreground and is added into the random update of the background.
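A minimal sketch of this lifespan mechanism is given below: an age map counts how many consecutive frames each pixel has stayed foreground, and once the age exceeds the set limit the pixel is treated as misjudged and absorbed into the background samples by a random replacement. The limit of 50 frames, and the immediate clearing of the mask, are assumed choices, not values given by the patent.

```python
import numpy as np

def apply_lifespan(foreground, age, samples, frame, lifespan_limit=50):
    """Track how long each pixel has stayed foreground; pixels that remain foreground
    longer than `lifespan_limit` frames (e.g. "ghosts") are handed to the background
    model's random update."""
    if age is None:
        age = np.zeros(foreground.shape, dtype=np.int32)
    age = np.where(foreground == 255, age + 1, 0)          # reset age for background pixels
    n = samples.shape[2]
    ys, xs = np.where(age > lifespan_limit)
    for y, x in zip(ys, xs):
        samples[y, x, np.random.randint(n)] = frame[y, x]  # absorb value into background samples
        foreground[y, x] = 0                               # assumed: treat as background from now on
        age[y, x] = 0
    return foreground, age, samples
```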
Fig. 10 shows the experimental comparison of "ghost" elimination. Fig. 10a is the original image of frame 171 of the HighwayII video. When the original ViBe algorithm detects moving objects, the "ghost" generated by the first frame is still clearly present; in the frame 171 processing result shown in Fig. 10b, the gray rectangle marks the "ghost" that the original ViBe algorithm has not eliminated. Because the improved ViBe algorithm of the present invention applies the foreground pixel lifespan, the ghost generated by the first frame is essentially eliminated completely by frame 171; as shown in the frame 171 processing result in Fig. 10c, the "ghost" phenomenon no longer appears in the region of the gray box shown in Fig. 10b.
Claims (2)
1. An improved video moving object detection method, characterized in that it is implemented according to the following steps:
Step 1: background template initialization
A pixel to be detected is judged to be a background point if it matches the background template; otherwise it is a motion point.
First, n empty background sample frames are established, the same size as the original image, and randomly sampled pixels are stored in the background sample set corresponding to each pixel.
v(x) denotes the value of pixel x and v_i denotes the i-th background sample at that position, so the n background samples of position x form the set M(x) = {v_1, v_2, ..., v_n}.
During background model initialization, the number of times each neighborhood pixel of pixel x is repeatedly sampled into the background sample set is counted; once the count of a pixel exceeds 3, if that pixel is drawn again, the pixel at position x itself is stored directly in the template samples, and that pixel is no longer saved to the background template.
Step 2: pixel classification
A dynamic classification threshold R is used: a difference image D(x, y) is obtained with the frame difference method, the segmentation threshold t of the difference image is computed with the maximum between-class variance method, and t is used as the dynamic classification threshold R.
Step 3: background model update
When pixel v(x) is judged to be a background pixel, one sample value in M(x) is randomly selected and replaced with v(x); to preserve spatial consistency within the pixel neighborhood, when the background samples of v(x) are updated, the background samples of the neighborhood pixels of v(x) are updated in the same way.
Step 4: adding a foreground pixel lifespan
Once a pixel has been determined to be a foreground pixel, its temporal information is saved; if the temporal information of the pixel exceeds a set time limit, the pixel is considered to have been misjudged as foreground and is added into the random update of the background.
2. The improved video moving object detection method according to claim 1, characterized in that in step 1, n is set to 20, i.e. random sampling is performed from the 21-pixel neighborhood of each pixel of the first frame image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810686019.4A CN109035296A (en) | 2018-06-28 | 2018-06-28 | A kind of improved moving objects in video detection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810686019.4A CN109035296A (en) | 2018-06-28 | 2018-06-28 | A kind of improved moving objects in video detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109035296A true CN109035296A (en) | 2018-12-18 |
Family
ID=65520630
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810686019.4A Pending CN109035296A (en) | 2018-06-28 | 2018-06-28 | A kind of improved moving objects in video detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109035296A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109859236A (en) * | 2019-01-02 | 2019-06-07 | 广州大学 | Mobile object detection method, calculates equipment and storage medium at system |
CN110060278A (en) * | 2019-04-22 | 2019-07-26 | 新疆大学 | The detection method and device of moving target based on background subtraction |
CN110084129A (en) * | 2019-04-01 | 2019-08-02 | 昆明理工大学 | A kind of river drifting substances real-time detection method based on machine vision |
CN110111361A (en) * | 2019-04-22 | 2019-08-09 | 湖北工业大学 | A kind of moving target detecting method based on multi-threshold self-optimizing background modeling |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2015252A1 (en) * | 2007-07-08 | 2009-01-14 | Université de Liège | Visual background extractor |
CN108198205A (en) * | 2017-12-22 | 2018-06-22 | 湖南源信光电科技股份有限公司 | A kind of method for tracking target based on Vibe and Camshift algorithms |
- 2018-06-28: CN application CN201810686019.4A filed; publication CN109035296A (en), status Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2015252A1 (en) * | 2007-07-08 | 2009-01-14 | Université de Liège | Visual background extractor |
CN108198205A (en) * | 2017-12-22 | 2018-06-22 | 湖南源信光电科技股份有限公司 | A kind of method for tracking target based on Vibe and Camshift algorithms |
Non-Patent Citations (3)
Title |
---|
YU Ye et al.: "EVibe: An Improved Vibe Moving Object Detection Algorithm", Chinese Journal of Scientific Instrument *
KAN Weidong et al.: "Improved ViBe Algorithm and Its Application in Traffic Video Processing", Optics and Precision Engineering *
MA Weiyuan et al.: "Moving Target Detection and Tracking Based on an Improved OTSU Method", Journal of Electronic Measurement and Instrumentation *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109859236A (en) * | 2019-01-02 | 2019-06-07 | 广州大学 | Mobile object detection method, calculates equipment and storage medium at system |
CN110084129A (en) * | 2019-04-01 | 2019-08-02 | 昆明理工大学 | A kind of river drifting substances real-time detection method based on machine vision |
CN110060278A (en) * | 2019-04-22 | 2019-07-26 | 新疆大学 | The detection method and device of moving target based on background subtraction |
CN110111361A (en) * | 2019-04-22 | 2019-08-09 | 湖北工业大学 | A kind of moving target detecting method based on multi-threshold self-optimizing background modeling |
CN110060278B (en) * | 2019-04-22 | 2023-05-12 | 新疆大学 | Method and device for detecting moving target based on background subtraction |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181218 |