CN108198208B - Movement detection method based on target tracking - Google Patents
Movement detection method based on target tracking
- Publication number
- CN108198208B CN108198208B CN201711444190.6A CN201711444190A CN108198208B CN 108198208 B CN108198208 B CN 108198208B CN 201711444190 A CN201711444190 A CN 201711444190A CN 108198208 B CN108198208 B CN 108198208B
- Authority
- CN
- China
- Prior art keywords
- moving
- image
- range
- area
- height
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20036—Morphological image processing
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention is applicable to the technical fields of camera monitoring and video image processing, and provides a movement detection method based on target tracking, which comprises the following steps: acquiring two frames of images separated by a time interval t and extracting their grayscale images; reducing the two grayscale images by a factor of m and performing a difference operation to obtain a difference map; performing dual-threshold binarization on the difference map to obtain two binary images B1n and B2n; processing B1n and extracting the foreground; screening and association-tracking the extracted foreground; and analyzing the association-tracking information to detect the final moving target. The beneficial effects of the invention are: low computational complexity while detection accuracy is guaranteed; false alarms caused in conventional methods by irrelevant interference factors such as sudden light changes, image noise and wind-blown grass are effectively eliminated; the requirements on hardware resources such as memory and CPU are low; and the method is highly practicable.
Description
Technical Field
The invention belongs to the technical fields of camera monitoring and video image processing, and particularly relates to a movement detection method based on target tracking.
Background
Movement detection, also known as motion detection, is a relatively new monitoring technique for cameras. The technique is highly intelligent: it compares image changes in a monitored area in real time and automatically raises an alarm once an alarm threshold is triggered.
Motion detection methods can be divided into three main types: the optical flow method, background subtraction, and temporal differencing. The optical flow method extracts the optical flow vectors formed by the relative motion between an object and the image background, and can therefore recover complete moving-object information with high accuracy; however, its computational cost is high, it is difficult to run in real time, it is very sensitive to noise, and it is only practical on some high-end cameras. Background subtraction first builds a background model from historical images and then detects motion regions from the difference between the current image and the background image; its computational cost is much lower, but it cannot be applied to low-performance cameras and is sensitive to lighting. The temporal difference method, also called the frame difference method, subtracts two or three adjacent frames of a continuous image sequence and thresholds the result to extract motion regions. It is easy to implement, computationally cheap and highly real-time, and is widely used in small, portable home smart cameras; however, it tends to leave holes inside moving objects, making it difficult to extract complete moving-object information, and it is also sensitive to lighting.
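To make the frame-difference idea above concrete, the following short Python/OpenCV sketch thresholds the absolute difference of two consecutive grayscale frames. It illustrates only the generic technique described in this paragraph, not the method of the invention; the threshold value 25 and the function name are arbitrary examples.

```python
import cv2

def frame_difference_mask(prev_gray, curr_gray, thresh=25):
    """Generic two-frame temporal differencing: mark pixels whose gray level changed by more than thresh."""
    diff = cv2.absdiff(curr_gray, prev_gray)                   # per-pixel absolute difference
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask                                                # 255 = changed (candidate motion), 0 = static
```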
In summary, current motion detection suffers from the following problems: false alarms caused by large lighting changes, for example the sudden change when an indoor light is switched on or off; false alarms caused by image noise, for example under low illumination at dawn or dusk; false alarms caused by irrelevant factors, such as meaningless small changes like grass moving in the wind; an imbalance between algorithm accuracy and real-time performance; and the fact that existing methods do not track the detected moving targets.
Disclosure of Invention
The invention aims to provide a movement detection method based on target tracking, so as to solve the technical problem that existing methods do not track the detected moving target.
The invention is realized in such a way that a movement detection method based on target tracking comprises the following steps:
S01: acquiring a frame image Fn-1, performing grayscale extraction on Fn-1 to obtain a grayscale image Gn-1, and reducing Gn-1 by a factor of m to obtain a reduced grayscale image G'n-1;
S02: after a time t from Fn-1, acquiring the next frame image Fn, performing grayscale extraction on Fn to obtain a grayscale image Gn, and reducing Gn by a factor of m to obtain a reduced grayscale image G'n;
S03: performing a difference operation on G'n and G'n-1 to obtain a difference map Sn;
S04: performing a binarization operation on Sn based on THRESHOLD1 and THRESHOLD2 to obtain binary images B1n and B2n;
S05: performing a median filtering operation on B1n to obtain a binary image B1n(m);
S06: performing Gaussian pyramid downsampling on B1n(m) to obtain a binary image B1n(md);
S07: performing one erosion operation on B1n(md) to obtain a binary image B1n(mdd);
S08: performing Gaussian pyramid upsampling on B1n(mdd) to obtain a binary image B1n(mddu);
S09: traversing the binary image B1n(mddu), counting the non-zero pixels, and extracting a moving area set An(a1, a2, …);
S10: performing a screening operation on the moving area set An(a1, a2, …); if the width w and height h of a moving area An(i) satisfy the set conditions, An(i) is kept and the flow goes to the next step; otherwise An(i) is deleted;
S11: for each moving area An(i) satisfying step S10, computing its average brightness over the binary image B2n; if the average brightness > THRESHOLD3, An(i) is kept, otherwise An(i) is deleted;
S12: for the moving area set A'n(a1, a2, …) obtained after the screening of steps S10 and S11, computing, on the reduced grayscale image G'n, the histogram h of each moving area to form a histogram set Hn(h1, h2, …);
S13: computing the overlap ratio OLRij between the moving area set A'n(a1, a2, …) of the current cycle and the moving area set A'n-1(a1, a2, …) of the previous cycle; if OLRij > r, A'n(i) and A'n-1(j) are preliminarily determined to be associated;
S14: for every pair A'n(i) and A'n-1(j) of A'n(a1, a2, …) and A'n-1(a1, a2, …) satisfying step S13, comparing their histograms to obtain cpij, finally forming a set CP(cp11, cp12, …);
S15: finding the maximum value cpij_max in the set CP(cp11, cp12, …); the corresponding A'n(i) and A'n-1(j) are considered to be the same moving target, and the moving area information is updated;
S16: traversing the historical movement trace information OBJn(i) of each moving target A'n(i) in the moving target set A'n(a1, a2, …); when the maximum distance moved by the target > THRESHOLD4 and the number num of historical moving area records contained in OBJn(i) > THRESHOLD5, the target is regarded as a detected moving target, and the rectangular frame and velocity direction of the moving area are output;
S17: judging whether to end detection; if yes, detection ends; otherwise the current image Fn is taken as Fn-1, the reduced grayscale image G'n is taken as G'n-1, and the flow returns to step S02.
The further technical scheme of the invention is as follows: the value range of the time threshold t in the step S02 is [70,200] milliseconds.
The further technical scheme of the invention is as follows: in the binarization operation of step S04, B1n is obtained by binarizing Sn with THRESHOLD1 and B2n is obtained by binarizing Sn with THRESHOLD2, i.e. a pixel of B1n (respectively B2n) is set to foreground when the corresponding pixel of Sn exceeds THRESHOLD1 (respectively THRESHOLD2), and to background otherwise.
THRESHOLD1 is set to a value in the range [20, 30], THRESHOLD2 is set to a value in the range [30, 40], and THRESHOLD2 > THRESHOLD1.
The further technical scheme of the invention is as follows: the value range of the median filtered kernel size in step S05 is [3, 5 ].
The further technical scheme of the invention is as follows: the width w and the height h of the moving area An(i) in step S10 need to satisfy the following conditions:
minw<An(i).w<maxw
minh<An(i).h<maxh。
The further technical scheme of the invention is as follows: the value ranges of minw and minh are both [6, 30], the range of maxw is [0.8 × B1n.width, B1n.width], and the range of maxh is [0.8 × B1n.height, B1n.height].
The further technical scheme of the invention is as follows: the average brightness in step S11 is calculated as brightness = ( Σ B2n(x, y) ) / (A.w × A.h), summed over A.minx < x < A.maxx and A.miny < y < A.maxy.
THRESHOLD3 has a value in the range [0, 10]. Here A is the moving area, A.minx is the minimum horizontal coordinate of the moving area, A.miny is the minimum vertical coordinate, A.maxx is the maximum horizontal coordinate, A.maxy is the maximum vertical coordinate, A.w is the width of the moving area, and A.h is the height of the moving area.
The further technical scheme of the invention is as follows: the histogram statistic in step S12 is computed as follows: for each gray level g, h(g) is the number of pixels (x, y) with A.minx < x < A.maxx and A.miny < y < A.maxy whose value G'n(x, y) equals g, where G'n(x, y) is the pixel value of the reduced grayscale image G'n within the moving area.
The further technical scheme of the invention is as follows: the overlap ratio calculation formula of the two movement regions in step S13 is as follows:
OLRij=mx*my*2/(Ai.area+Aj.area)
mx=min(Ai.maxx,Aj.maxx)-max(Ai.minx,Aj.minx)
my=min(Ai.maxy,Aj.maxy)-max(Ai.miny,Aj.miny)
Ai.area=Ai.width*Ai.height
Aj.area=Aj.width*Aj.height
the threshold r is in the range of [0,0.5 ].
The further technical scheme of the invention is as follows: the two histogram comparison formulas in step S14 are as follows:
the further technical scheme of the invention is as follows: the updated moving area information in step S15 includes the rectangular frame and the velocity of the moving object:
A‘n-1(j)=UPDATE_RATE*A‘n(i)+(1-UPDATE_RATE)*A‘n-1(j)
A‘n(i).spx=(A‘n(i).x+A‘n(i).width/2)–(A‘n-1(j).x+A‘n-1(j).width/2)
A‘n(i).spy=(A‘n(i).y+A‘n(i).height/2)–(A‘n-1(j).y+A‘n-1(j).height/2)
A‘n-1(i).spx=UPDATE_RATE*A‘n(i).spx+(1-UPDATE_RATE)*A‘n-1(i).spx
A‘n-1(i).spy=UPDATE_RATE*A‘n(i).spy+(1-UPDATE_RATE)*A‘n-1(i).spy
wherein, the value range of the smoothing factor UPDATE _ RATE is [0.2,0.8 ].
The further technical scheme of the invention is as follows: the maximum distance that the moving object moves in step S16 is calculated by the following formula:
distance=max(mdx,mdy)
mdx=max(OBJn(i).x)–min(OBJn(i).x)
mdy=max(OBJn(i).y)–min(OBJn(i).y)
the THRESHOLD4 has a value range of [24,80], the unit is a pixel, and the THRESHOLD5 has a value range of [2,10 ].
The beneficial effects of the invention are as follows: by adopting a dual-threshold frame difference method, the invention effectively solves the problem that existing methods cannot easily extract complete moving-object information; by adding a tracking algorithm and building historical trace information for each moving target, it effectively suppresses the false alarms caused in existing methods by sudden light changes and wind-blown grass; by adopting median filtering and an erosion operation, it effectively suppresses false alarms caused by image noise; furthermore, the invention has low computational complexity, low requirements on hardware resources such as memory and CPU, and strong practicability.
Drawings
Fig. 1 is a flowchart of a target tracking-based motion detection method according to an embodiment of the present invention.
Detailed Description
As shown in fig. 1, the movement detection method based on target tracking provided by the present invention is detailed as follows:
step S01, acquiring a frame image Fn-1To Fn-1Performing gray level extraction to obtain a gray level image Gn-1To Gn-1Performing m times reduction to obtain a reduced gray scale imageIn the embodiment of the present invention, the reduction factor m is set to 4. The reduction factor m may also be set according to the actual image resolution. Experiments prove that the resolution of the reduced image is more than 100 x 100, the detection complexity can be reduced, and the detection accuracy is ensured. After completion of step S01, the flow proceeds to S02 to wait for the next frame of image processing.
Step S02: after a time t from Fn-1, acquiring the next frame image Fn, performing grayscale extraction on Fn to obtain a grayscale image Gn, and reducing Gn by a factor of m to obtain a reduced grayscale image G'n. If the time elapsed since step S01 reaches or exceeds t, the next frame image Fn is acquired, grayscale extraction is performed to obtain Gn, and Gn is reduced by a factor of m to obtain G'n; otherwise the method keeps waiting. In the present invention, the time threshold t is in the range [70, 200] milliseconds. The time threshold t may also be set according to actual use cases or experiments. Experiments show that setting t to 125 milliseconds, as in the embodiment of the invention, reduces the detection complexity while preserving detection accuracy. After step S02 is completed, the flow proceeds to S03, where the difference operation is performed on the two images.
Step S03: performing a difference operation on G'n and G'n-1 to obtain a difference map Sn. In the embodiment of the present invention, the difference operation is a per-pixel difference of the two reduced grayscale images.
After the difference operation is completed, the flow proceeds to S04 to binarize the difference map.
Step S04: performing a binarization operation on Sn based on THRESHOLD1 and THRESHOLD2 to obtain binary images B1n and B2n. In the embodiment of the present invention, the binarization applies the two thresholds to the difference map Sn: B1n is obtained with THRESHOLD1 and B2n with THRESHOLD2.
THRESHOLD1 is set in the range [20, 30], THRESHOLD2 is set in the range [30, 40], and THRESHOLD2 > THRESHOLD1. The lower THRESHOLD1 extracts more complete moving-object information and prevents holes inside the object, while the higher THRESHOLD2 filters out regions whose gray level is close to the background, such as shadows and reflections. After the binarization is completed, the flow proceeds to steps S05-S08, where B1n is processed; B2n will be used for the average brightness calculation in step S11.
Steps S05-S08: performing a median filtering operation on B1n to obtain a binary image B1n(m); performing Gaussian pyramid downsampling on B1n(m) to obtain a binary image B1n(md); performing one erosion operation on B1n(md) to obtain a binary image B1n(mdd); performing Gaussian pyramid upsampling on B1n(mdd) to obtain the processed binary image B1n(mddu). Concretely, the binary image B1n is median filtered with a kernel of size 'size', downsampled with a Gaussian pyramid using a GAUSSIAN_5x5 kernel, eroded once, and then upsampled with a Gaussian pyramid using a GAUSSIAN_5x5 kernel, yielding the processed binary image B1n(mddu). Any existing median filtering algorithm may be used; in the embodiment of the invention the value range of 'size' is [3, 5], which may be set according to actual use conditions or experiments. Any existing Gaussian pyramid downsampling algorithm may be used, with a Gaussian kernel of size 5x5; any existing erosion algorithm may be used; and any existing Gaussian pyramid upsampling algorithm may be used, likewise with a Gaussian kernel of size 5x5. After this processing is completed, the flow proceeds to step S09, where foreground extraction is performed on B1n(mddu).
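The chain of steps S05-S08 maps directly onto standard OpenCV primitives, as sketched below; the 3x3 erosion kernel is an assumption (the patent does not specify the structuring element), and cv2.pyrDown/cv2.pyrUp use the 5x5 Gaussian kernel mentioned in the text.

```python
import cv2
import numpy as np

def clean_mask(B1_n, ksize=3):
    """Steps S05-S08 sketch: median filter, Gaussian pyramid down, one erosion, Gaussian pyramid up."""
    m   = cv2.medianBlur(B1_n, ksize)                               # S05: kernel size from the range [3, 5]
    md  = cv2.pyrDown(m)                                            # S06: 5x5 Gaussian blur + 2x downsample
    mdd = cv2.erode(md, np.ones((3, 3), np.uint8), iterations=1)    # S07: one erosion pass (3x3 kernel assumed)
    return cv2.pyrUp(mdd)                                           # S08: 5x5 Gaussian blur + 2x upsample
```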
Step S09: traversing the binary image B1n(mddu), counting the non-zero pixels, and extracting a moving area set An(a1, a2, …). In the embodiment of the invention, the moving area information may be extracted with any existing foreground extraction algorithm. After the foreground is extracted, the flow proceeds to step S10 for preliminary screening of the moving area set.
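One possible realization of the foreground extraction in step S09 uses connected-component analysis as the "existing foreground extraction algorithm"; the dictionary field names (minx, miny, maxx, maxy, w, h) mirror the notation used in this document but are otherwise illustrative.

```python
import cv2

def extract_regions(B1_mddu):
    """Step S09 sketch: collect bounding boxes of the non-zero (foreground) regions."""
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(B1_mddu)
    regions = []
    for i in range(1, num):                                    # label 0 is the background
        x = stats[i, cv2.CC_STAT_LEFT]
        y = stats[i, cv2.CC_STAT_TOP]
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        regions.append({'minx': x, 'miny': y, 'maxx': x + w, 'maxy': y + h, 'w': w, 'h': h})
    return regions
```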
Step S10: performing a screening operation on the moving area set An(a1, a2, …); if the width w and height h of a moving area An(i) satisfy the set conditions, An(i) is kept and the next judgment is made; otherwise An(i) is deleted. In the embodiment of the invention, the value ranges of minw and minh are both [6, 30], the range of maxw is [0.8 × B1n.width, B1n.width], and the range of maxh is [0.8 × B1n.height, B1n.height]; they may be set according to actual use requirements. After the preliminary screening is completed, the flow proceeds to step S11 for further screening.
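A sketch of the size screening of step S10; minw = minh = 6 and the 0.8 factor are example values from the stated ranges.

```python
def size_filter(regions, img_w, img_h, minw=6, minh=6, ratio=0.8):
    """Step S10 sketch: keep regions whose width/height lie between the minimum sizes and a fraction of the image size."""
    maxw, maxh = ratio * img_w, ratio * img_h
    return [a for a in regions if minw < a['w'] < maxw and minh < a['h'] < maxh]
```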
Step S11: for each moving area An(i) satisfying step S10, computing its average brightness over the binary image B2n; if the average brightness > THRESHOLD3, An(i) is kept, otherwise An(i) is deleted. That is, the average brightness of An(i) over B2n is calculated and judged: if it is greater than THRESHOLD3, An(i) is considered clearly different from the background and is kept as a moving target; otherwise An(i) is deleted. In the embodiment of the invention, THRESHOLD3 has a value range of [0, 10]; the higher the value of THRESHOLD3, the better it filters out shadows and spurious reflections, but it may also cause some missed detections. For the moving areas satisfying the condition, the flow proceeds to step S12 to start association tracking.
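A sketch of the brightness screening of step S11, reusing the region dictionaries from the earlier sketches; it assumes foreground pixels in B2n are stored as 255 (with 0/1 masks the scale of THRESHOLD3 would differ), and THRESHOLD3 = 5 is an example from the range [0, 10].

```python
def brightness_filter(regions, B2_n, threshold3=5):
    """Step S11 sketch: keep a region only if its mean value over B2n exceeds THRESHOLD3."""
    kept = []
    for a in regions:
        patch = B2_n[a['miny']:a['maxy'], a['minx']:a['maxx']]
        if patch.size > 0 and patch.mean() > threshold3:       # average brightness over the region
            kept.append(a)
    return kept
```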
Step S12: for the moving area set A'n(a1, a2, …) obtained after the screening of steps S10 and S11, computing, on the reduced grayscale image G'n, the histogram h of each moving area to form a histogram set Hn(h1, h2, …). A histogram describes the features of an image region to some extent, so it is used in step S14 for the similarity comparison between two moving areas. After the histogram statistics are completed, the flow proceeds to step S13 for the preliminary judgment of association tracking.
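A sketch of the per-region histogram statistics of step S12 on the reduced grayscale image; the choice of 256 bins and L1 normalization is an assumption, since the patent only states that a histogram h is computed for each moving area.

```python
import cv2

def region_histograms(regions, G_n_reduced):
    """Step S12 sketch: one grayscale histogram per moving area, computed on the reduced grayscale image."""
    hists = []
    for a in regions:
        patch = G_n_reduced[a['miny']:a['maxy'], a['minx']:a['maxx']]
        h = cv2.calcHist([patch], [0], None, [256], [0, 256])  # 256-bin grayscale histogram
        hists.append(cv2.normalize(h, h, norm_type=cv2.NORM_L1).flatten())
    return hists
```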
Step S13: computing the overlap ratio OLRij between the moving area set A'n(a1, a2, …) of the current cycle and the moving area set A'n-1(a1, a2, …) of the previous cycle; if OLRij > r, A'n(i) and A'n-1(j) are preliminarily determined to be associated, i.e. they may be the same moving target, and the flow proceeds to step S14 for the similarity calculation. If A'n(i) is not associated with any moving area in A'n-1(a1, a2, …), it is considered a new target and the flow proceeds to step S15. In the embodiment of the invention, the value range of r is [0, 0.5], which may be set according to actual use conditions or experiments; the larger the value of r, the lower the success rate of tracking targets that move very fast.
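The overlap ratio of step S13 follows the formula given in the disclosure (OLRij = 2·mx·my / (Ai.area + Aj.area)); the sketch below adds an explicit guard for the non-overlapping case, where mx or my becomes non-positive.

```python
def overlap_ratio(a, b):
    """Step S13 sketch: overlap ratio of two moving areas, as defined in the disclosure."""
    mx = min(a['maxx'], b['maxx']) - max(a['minx'], b['minx'])
    my = min(a['maxy'], b['maxy']) - max(a['miny'], b['miny'])
    if mx <= 0 or my <= 0:                                     # the two rectangles do not overlap
        return 0.0
    return 2.0 * mx * my / (a['w'] * a['h'] + b['w'] * b['h'])
```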
Step S14: for every pair A'n(i) and A'n-1(j) of A'n(a1, a2, …) and A'n-1(a1, a2, …) satisfying step S13, comparing their histograms to obtain cpij, finally forming a set CP(cp11, cp12, …). That is, the similarity is calculated for all associated combinations: for each A'n(i) and A'n-1(j) whose overlap ratio is greater than r, the histograms are compared to obtain cpij, forming a set CP(cpi1, cpi2, …); finally the maximum value cpik_max is found, A'n(i) and A'n-1(k) are considered to be the same moving target, and the flow proceeds to step S15. The histogram comparison improves the accuracy of target tracking.
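A sketch of the association of steps S13-S14, reusing the overlap_ratio helper and the histograms from the previous sketches. The patent's histogram comparison formulas are not reproduced in this text, so histogram correlation (cv2.HISTCMP_CORREL) is assumed as the similarity measure; r = 0.3 is an example from the range [0, 0.5].

```python
import cv2

def associate(curr_regions, curr_hists, prev_regions, prev_hists, r=0.3):
    """Steps S13-S14 sketch: for each current region, keep the best-matching overlapping region of the previous cycle."""
    matches = {}
    for i, (a, ha) in enumerate(zip(curr_regions, curr_hists)):
        best_j, best_cp = None, -1.0
        for j, (b, hb) in enumerate(zip(prev_regions, prev_hists)):
            if overlap_ratio(a, b) > r:                        # preliminary association (S13)
                cp = cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL)   # similarity cp_ij (metric assumed)
                if cp > best_cp:
                    best_j, best_cp = j, cp
        if best_j is not None:
            matches[i] = best_j                                # same moving target across the two cycles
    return matches
```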
Step S15: finding the maximum value cpij_max in the set CP(cp11, cp12, …); the corresponding A'n(i) and A'n-1(j) are considered to be the same moving target, and the moving area information is updated. Updating the historical information of the moving targets means: a moving target with new movement information is updated by smoothing; a moving target without new movement information is counted, and if it still has no new movement information after a certain time t1, the target is considered to have become static or to have disappeared, and all of its information is deleted; for a moving target that appears for the first time, its historical information is created. In the embodiment of the invention, the length of the stored historical information of a moving target is in the range [3, 6] seconds, and the value range of the smoothing factor UPDATE_RATE is [0.2, 0.8]. The smaller the value of UPDATE_RATE, the more stable the historical trace of the moving target, which can to some extent weaken the adverse effect of erroneous tracking; the larger the value of UPDATE_RATE, the more promptly the moving target information is updated, which can to some extent improve the sensitivity of movement detection. It may be set according to actual use conditions or experiments. After step S15 is completed, the flow proceeds to S16 to analyze the historical information of the moving targets.
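A sketch of the smoothing update of step S15, following the update formulas given in the disclosure; the region is represented as a dictionary with the fields x, y, width, height, spx, spy used in those formulas, and UPDATE_RATE = 0.5 is an example from the range [0.2, 0.8].

```python
def update_track(prev, curr, update_rate=0.5):
    """Step S15 sketch: exponential smoothing of the matched region's rectangle and velocity."""
    spx = (curr['x'] + curr['width'] / 2.0) - (prev['x'] + prev['width'] / 2.0)    # center displacement in x
    spy = (curr['y'] + curr['height'] / 2.0) - (prev['y'] + prev['height'] / 2.0)  # center displacement in y
    for k in ('x', 'y', 'width', 'height'):
        prev[k] = update_rate * curr[k] + (1.0 - update_rate) * prev[k]
    prev['spx'] = update_rate * spx + (1.0 - update_rate) * prev.get('spx', 0.0)
    prev['spy'] = update_rate * spy + (1.0 - update_rate) * prev.get('spy', 0.0)
    return prev
```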
Step S16: traversing the historical movement trace information OBJn(i) of each moving target A'n(i) in the moving target set A'n(a1, a2, …); when the maximum distance moved by the target > THRESHOLD4 and the number num of historical moving area records contained in OBJn(i) > THRESHOLD5, the target is regarded as a detected moving target, and the rectangular frame and velocity direction of the moving area are output. That is, the accumulated history of each moving target is analyzed: when its maximum moving distance within the statistical time period is greater than THRESHOLD4 and the number of historical moving area records in OBJn(i) is greater than THRESHOLD5, i.e. the target has been moving for some time, it is reported as a detected moving target, and the rectangular frame and velocity direction of the moving area are output. In the embodiment of the invention, THRESHOLD4 has a value range of [24, 80] pixels and THRESHOLD5 has a value range of [2, 10]. The larger the values of THRESHOLD4 and THRESHOLD5, the larger the distance and the longer the duration a target must move before being reported, which lowers the detection sensitivity but effectively eliminates false alarms caused by sudden light changes and transient movements from unknown factors; conversely, smaller values improve the detection sensitivity. They may be set according to actual use conditions or experiments. After step S16 is completed, the flow proceeds to step S17 to judge whether detection is finished.
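A sketch of the decision rule of step S16, following the distance formula of the disclosure; the history is assumed to be a list of past region dictionaries with x/y fields, and THRESHOLD4 = 30 pixels and THRESHOLD5 = 3 are example values from the stated ranges [24, 80] and [2, 10].

```python
def is_detected_target(history, threshold4=30, threshold5=3):
    """Step S16 sketch: report a target whose track is long enough in both distance and number of records."""
    if len(history) <= threshold5:                             # num > THRESHOLD5 required
        return False
    xs = [rec['x'] for rec in history]
    ys = [rec['y'] for rec in history]
    distance = max(max(xs) - min(xs), max(ys) - min(ys))       # distance = max(mdx, mdy)
    return distance > threshold4                               # distance > THRESHOLD4 required
```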
Step S17: judging whether to end detection; if yes, detection ends; otherwise the current image Fn is taken as Fn-1, the reduced grayscale image G'n is taken as G'n-1, and the flow returns to step S02.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (10)
1. A motion detection method based on target tracking is characterized by comprising the following steps:
S01: acquiring a frame image Fn-1, performing grayscale extraction on Fn-1 to obtain a grayscale image Gn-1, and reducing Gn-1 by a factor of m to obtain a reduced grayscale image G'n-1;
S02: after a time t from Fn-1, acquiring the next frame image Fn, performing grayscale extraction on Fn to obtain a grayscale image Gn, and reducing Gn by a factor of m to obtain a reduced grayscale image G'n;
S03: performing a difference operation on G'n and G'n-1 to obtain a difference map Sn;
S04: performing a binarization operation on Sn based on THRESHOLD1 and THRESHOLD2 to obtain binary images B1n and B2n;
S05: performing a median filtering operation on B1n to obtain a binary image B1n(m);
S06: performing Gaussian pyramid downsampling on B1n(m) to obtain a binary image B1n(md);
S07: performing one erosion operation on B1n(md) to obtain a binary image B1n(mdd);
S08: performing Gaussian pyramid upsampling on B1n(mdd) to obtain a binary image B1n(mddu);
S09: traversing the binary image B1n(mddu), counting the non-zero pixels, and extracting a moving area set An(a1, a2, …);
S10: performing a screening operation on the moving area set An(a1, a2, …); if the width w and height h of a moving area An(i) satisfy the set conditions, An(i) is kept and the flow goes to the next step; otherwise An(i) is deleted;
S11: for each moving area An(i) satisfying step S10, computing its average brightness over the binary image B2n; if the average brightness > THRESHOLD3, An(i) is kept, otherwise An(i) is deleted;
S12: for the moving area set A'n(a1, a2, …) obtained after the screening of steps S10 and S11, computing, on the reduced grayscale image G'n, the histogram h of each moving area to form a histogram set Hn(h1, h2, …);
S13: computing the overlap ratio OLRij between the moving area set A'n(a1, a2, …) of the current cycle and the moving area set A'n-1(a1, a2, …) of the previous cycle; if OLRij > r, A'n(i) and A'n-1(j) are preliminarily determined to be associated;
S14: for every pair A'n(i) and A'n-1(j) of A'n(a1, a2, …) and A'n-1(a1, a2, …) satisfying step S13, comparing their histograms to obtain cpij, finally forming a set CP(cp11, cp12, …);
S15: finding the maximum value cpij_max in the set CP(cp11, cp12, …); the corresponding A'n(i) and A'n-1(j) are considered to be the same moving target, and the moving area information is updated;
S16: traversing the historical movement trace information OBJn(i) of each moving target A'n(i) in the moving target set A'n(a1, a2, …); when the maximum distance moved by the target > THRESHOLD4 and the number num of historical moving area records contained in OBJn(i) > THRESHOLD5, the target is regarded as a detected moving target, and the rectangular frame and velocity direction of the moving area are output;
2. The method for motion detection based on object tracking as claimed in claim 1, wherein the time threshold t in step S02 is in a range of [70,200] ms.
4. The object tracking-based motion detection method according to claim 1, wherein the median filtered kernel size in step S05 is in a range of [3, 5 ].
5. The method for detecting movement based on target tracking of claim 1, wherein the width w and the height h of the moving area An(i) in step S10 need to satisfy the following conditions:
minw<An(i).w<maxw
minh<An(i).h<maxh。
6. The method of claim 5, wherein the value ranges of minw and minh are both [6, 30], the range of maxw is [0.8 × B1n.width, B1n.width], and the range of maxh is [0.8 × B1n.height, B1n.height].
8. The method for motion detection based on object tracking as claimed in claim 1, wherein the overlap ratio of the two motion areas in step S13 is calculated as follows:
OLRij=mx*my*2/(Ai.area+Aj.area)
mx=min(Ai.maxx,Aj.maxx)-max(Ai.minx,Aj.minx)
my=min(Ai.maxy,Aj.maxy)-max(Ai.miny,Aj.miny)
Ai.area=Ai.width*Ai.height
Aj.area=Aj.width*Aj.height
the threshold r is in the range of [0,0.5 ].
9. The method for motion detection based on object tracking as claimed in claim 1, wherein the updated motion region information in step S15 includes the rectangular frame and velocity of the moving object:
A‘n-1(j)=UPDATE_RATE*A‘n(i)+(1-UPDATE_RATE)*A‘n-1(j)
A‘n(i).spx=(A‘n(i).x+A‘n(i).width/2)–(A‘n-1(j).x+A‘n-1(j).width/2)
A‘n(i).spy=(A‘n(i).y+A‘n(i).height/2)–(A‘n-1(j).y+A‘n-1(j).height/2)
A‘n-1(i).spx=UPDATE_RATE*A‘n(i).spx+(1-UPDATE_RATE)*A‘n-1(i).spx
A‘n-1(i).spy=UPDATE_RATE*A‘n(i).spy+(1-UPDATE_RATE)*A‘n-1(i).spy
wherein, the value range of the smoothing factor UPDATE _ RATE is [0.2,0.8 ].
10. The method for motion detection based on object tracking as claimed in claim 1, wherein the maximum distance that the moving object moves in step S16 is calculated according to the following formula:
distance=max(mdx,mdy)
mdx=max(OBJn(i).x)–min(OBJn(i).x)
mdy=max(OBJn(i).y)–min(OBJn(i).y)
the THRESHOLD4 has a value range of [24,80], the unit is a pixel, and the THRESHOLD5 has a value range of [2,10 ].
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711444190.6A CN108198208B (en) | 2017-12-27 | 2017-12-27 | Movement detection method based on target tracking |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711444190.6A CN108198208B (en) | 2017-12-27 | 2017-12-27 | Movement detection method based on target tracking |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108198208A CN108198208A (en) | 2018-06-22 |
CN108198208B true CN108198208B (en) | 2021-08-24 |
Family
ID=62584556
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711444190.6A Active CN108198208B (en) | 2017-12-27 | 2017-12-27 | Movement detection method based on target tracking |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108198208B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109165552B (en) * | 2018-07-14 | 2021-02-26 | 深圳神目信息技术有限公司 | Gesture recognition method and system based on human body key points and memory |
CN109461116B (en) * | 2018-10-16 | 2023-04-28 | 浩云科技股份有限公司 | 720 panorama unfolding monitoring method based on opengl |
CN109263557B (en) * | 2018-11-19 | 2020-10-09 | 威盛电子股份有限公司 | Vehicle blind area detection method |
CN109995964B (en) * | 2019-02-21 | 2021-08-17 | 西安万像电子科技有限公司 | Image data processing method and device |
CN110414443A (en) * | 2019-07-31 | 2019-11-05 | 苏州市科远软件技术开发有限公司 | A kind of method for tracking target, device and rifle ball link tracking |
US11188756B2 (en) * | 2019-10-16 | 2021-11-30 | Realtek Singapore Private Limited | Object localization and classification system and method thereof |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101751679A (en) * | 2009-12-24 | 2010-06-23 | 北京中星微电子有限公司 | Sorting method, detecting method and device of moving object |
CN106296725A (en) * | 2015-06-12 | 2017-01-04 | 富泰华工业(深圳)有限公司 | Moving target detects and tracking and object detecting device in real time |
CN107368786A (en) * | 2017-06-16 | 2017-11-21 | 华南理工大学 | A kind of passenger based on machine vision crosses handrail detection algorithm |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7239719B2 (en) * | 2003-08-22 | 2007-07-03 | Bbn Technologies Corp. | Automatic target detection and motion analysis from image data |
CN105096321B (en) * | 2015-07-24 | 2018-05-18 | 上海小蚁科技有限公司 | A kind of low complex degree Motion detection method based on image border |
CN106713920A (en) * | 2017-02-22 | 2017-05-24 | 珠海全志科技股份有限公司 | Mobile detection method and device based on video encoder |
CN106991418B (en) * | 2017-03-09 | 2020-08-04 | 上海小蚁科技有限公司 | Winged insect detection method and device and terminal |
Non-Patent Citations (4)
Title |
---|
A moving object detection method based on improved ViBe; Wu Jianwu et al.; Computer and Modernization; 2015-12-31 (No. 7); pp. 50-53 *
Research on context-based object detection; Li Tao; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2017-02-15 (No. 02); pp. 77-87 *
Research on detection and tracking algorithms for video moving objects against complex backgrounds; Liu Dingtong; China Master's Theses Full-text Database, Information Science and Technology; 2016-03-15 (No. 03); full text *
Video object segmentation technology and its applications; Hou Wei; China Master's Theses Full-text Database, Information Science and Technology; 2008-11-15 (No. 11); pp. 41-56 *
Also Published As
Publication number | Publication date |
---|---|
CN108198208A (en) | 2018-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108198208B (en) | Movement detection method based on target tracking | |
WO2021012757A1 (en) | Real-time target detection and tracking method based on panoramic multichannel 4k video images | |
WO2022027931A1 (en) | Video image-based foreground detection method for vehicle in motion | |
WO2021208275A1 (en) | Traffic video background modelling method and system | |
CN108198206A (en) | The multi-object tracking method combined based on multiple features combining and Camshift algorithms | |
CN110599523A (en) | ViBe ghost suppression method fused with interframe difference method | |
WO2022099598A1 (en) | Video dynamic target detection method based on relative statistical features of image pixels | |
CN102222214A (en) | Fast object recognition algorithm | |
CN111723644A (en) | Method and system for detecting occlusion of surveillance video | |
WO2023273010A1 (en) | High-rise littering detection method, apparatus, and device, and computer storage medium | |
CN104835145B (en) | Foreground detection method based on adaptive Codebook background models | |
CN104063885A (en) | Improved movement target detecting and tracking method | |
CN101739550A (en) | Method and system for detecting moving objects | |
CN111444854A (en) | Abnormal event detection method, related device and readable storage medium | |
CN104952256A (en) | Video information based method for detecting vehicles at intersection | |
WO2012174804A1 (en) | Method and apparatus for detecting violent motion in video | |
CN110309765B (en) | High-efficiency detection method for video moving target | |
CN111741186A (en) | Video jitter detection method, device and system | |
CN104766079A (en) | Remote infrared weak object detecting method | |
CN106934819A (en) | A kind of method of moving object segmentation precision in raising image | |
CN114022468B (en) | Method for detecting article left-over and lost in security monitoring | |
CN112084957B (en) | Mobile target retention detection method and system | |
Lian et al. | A novel method on moving-objects detection based on background subtraction and three frames differencing | |
Gao et al. | Moving object detection for video surveillance based on improved ViBe | |
CN113936242B (en) | Video image interference detection method, system, device and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||