CN111832492A - Method and device for distinguishing static traffic abnormality, computer equipment and storage medium - Google Patents
- Publication number
- Publication number: CN111832492A (application number CN202010687229.2A)
- Authority
- CN
- China
- Prior art keywords
- video image
- current frame
- image
- frame video
- target detection
- Prior art date
- Legal status (assumed, not a legal conclusion): Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The invention discloses a method, a device, computer equipment and a storage medium for determining a static traffic anomaly. The method comprises: when a current frame video image in a monitoring video is received, acquiring the target detection frame of the current frame video image and comparing it with the target detection frame of the previous frame video image to determine whether the traffic state of the current frame video image is a suspicious state; if the traffic state of the current frame video image is a suspicious state, acquiring the initial time and the end time of the suspicious state; and finally, judging whether the duration between the initial time and the end time exceeds a preset time threshold, so as to determine whether the suspicious state is an abnormal state. The invention is based on image processing technology and belongs to the technical field of intelligent traffic. It can be applied to intelligent traffic scenes to promote the construction of smart cities. By analysing the position of the target detection frame in the video images, the method judges the traffic in the monitoring video and addresses the problem of inaccurate detection of abnormal conditions in road traffic anomaly detection.
Description
Technical Field
The invention belongs to the technical field of intelligent traffic, and particularly relates to a method and a device for determining a static traffic anomaly, computer equipment and a storage medium.
Background
Road traffic anomaly detection is a challenging research area in computer vision. Owing to the scarcity of data on abnormal road traffic conditions, the unknown nature of abnormal behaviours, and the variability of traffic scenes (weather, road conditions, shooting angles and other factors), road traffic anomaly detection has long faced problems such as an imbalance between normal and abnormal data, a shortage of corpora with high-quality annotations, and the difficulty of defining abnormal scenes. At the same time, road traffic anomaly detection is important for intelligent traffic systems, and is an indispensable link in their planning, monitoring and management.
In the prior art, road traffic anomaly detection is usually realised by semi-supervised learning with a deep auto-encoder. The auto-encoder is trained with a sufficient number of normal samples; once the model has learned the distribution of normal samples, it can reconstruct them, but when faced with an abnormal sample it produces a larger reconstruction error. Because this error is easily influenced by noise, road traffic anomaly detection based on it suffers from large errors. In addition, the prior art can analyse the abnormal trajectory of a target vehicle in a video image to find the time and place where an anomaly occurs, but other vehicles in the video image easily interfere with the trajectory of the target vehicle, which easily leads to misjudgement of road traffic.
Disclosure of Invention
The embodiments of the invention provide a method, a device, computer equipment and a storage medium for determining a static traffic anomaly, aiming to solve the problems in the prior art that road traffic anomaly detection is inaccurate and prone to misjudgement.
In a first aspect, an embodiment of the present invention provides a method for determining a static traffic anomaly, including:
if a current frame video image in a preset monitoring video is received, acquiring a target detection frame of the current frame video image;
comparing the target detection frame of the current frame video image with the target detection frame of the previous frame video image to determine whether the traffic state of the current frame video image is a suspicious state;
if the traffic state of the current frame video image in the monitoring video is a suspicious state, acquiring the initial time of the suspicious state according to a preset backtracking rule;
acquiring the end time of the suspicious state according to the video image behind the current frame video image;
and judging whether the duration between the initial time and the end time exceeds a preset time threshold, so as to determine whether the traffic in the current frame video image is in an abnormal state.
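The timing logic of the last three steps can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the class name, the default threshold value, and the per-frame `(timestamp, suspicious)` interface are all assumptions.

```python
from typing import Optional

class SuspicionTracker:
    """Minimal sketch: track how long a suspicious state persists and flag
    it as abnormal once the preset time threshold is exceeded."""

    def __init__(self, time_threshold: float = 10.0):
        self.time_threshold = time_threshold   # preset time threshold (seconds); assumed value
        self.start_time: Optional[float] = None  # initial time of the suspicious state

    def update(self, timestamp: float, suspicious: bool) -> bool:
        """Feed one frame's timestamp and suspicion flag; return True once the
        suspicious state has lasted at least the preset time threshold."""
        if not suspicious:
            self.start_time = None             # suspicious state ended; reset
            return False
        if self.start_time is None:
            self.start_time = timestamp        # record the initial time
        return timestamp - self.start_time >= self.time_threshold
```

A caller would invoke `update` once per decoded frame; the end time of the suspicious state is simply the timestamp of the first later frame whose flag clears.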
In a second aspect, an embodiment of the present invention provides a device for determining a static traffic abnormality, including:
the target detection frame acquisition unit is used for acquiring a target detection frame of a current frame video image if the current frame video image in a preset monitoring video is received;
the comparison unit is used for comparing the target detection frame of the current frame video image with the target detection frame of the previous frame video image to determine whether the traffic state of the current frame video image is a suspicious state;
the initial time determining unit is used for acquiring the initial time of the suspicious state according to a preset backtracking rule if the traffic state of the current frame video image in the monitoring video is the suspicious state;
an end time determining unit, configured to obtain an end time of the suspicious state according to a video image after the current frame video image;
and the judging unit is used for judging whether the duration between the initial time and the end time exceeds a preset time threshold, so as to determine whether the traffic in the current frame video image is in an abnormal state.
In a third aspect, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the method for determining a static traffic anomaly according to the first aspect.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the method for distinguishing a static traffic abnormality according to the first aspect.
The embodiments of the invention provide a method, a device, computer equipment and a storage medium for determining a static traffic anomaly. By analysing the position of the target detection frame in a video image, it is first determined whether the traffic state in the current frame video image has entered a suspicious state; on the basis that the traffic has entered a suspicious state, it is further determined whether the traffic in the current frame video image is in an abnormal state. This solves the problems that road traffic anomaly detection is inaccurate and prone to misjudgement, and improves the efficiency of road traffic anomaly detection.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a schematic flow chart of a method for determining a static traffic anomaly according to an embodiment of the present invention;
FIG. 2 is a schematic view of a sub-flow of a method for determining static traffic anomalies according to an embodiment of the present invention;
FIG. 3 is a schematic view of another sub-flow chart of the method for determining a static traffic anomaly according to the embodiment of the present invention;
FIG. 4 is a schematic view of another sub-flow chart of the method for determining a static traffic abnormality according to the embodiment of the present invention;
fig. 5 is a schematic block diagram of a static traffic anomaly determination device according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of sub-units of a static traffic anomaly determination device according to an embodiment of the present invention;
FIG. 7 is a schematic block diagram of another sub-unit of the apparatus for determining static traffic abnormality according to the embodiment of the present invention;
FIG. 8 is a schematic block diagram of another sub-unit of the apparatus for determining static traffic abnormality according to the embodiment of the present invention;
FIG. 9 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for determining a static traffic anomaly according to an embodiment of the present invention. In this method, the static state refers to vehicles standing still when road traffic is congested. The monitoring video can be captured by a monitoring device at a traffic intersection, which uploads the captured video to a terminal that executes the method; the terminal may be a computer device used by a traffic management department. After receiving the monitoring video, the computer device decodes it to obtain each frame of video image in the monitoring video, and then identifies, compares and analyses the decoded frames, so as to accurately determine whether the traffic in the monitoring video is abnormal, where an anomaly is a traffic congestion condition in the monitoring video. The method and the device can be applied to smart traffic scenes, thereby promoting the construction of smart cities. For example, when the traffic condition at a specific area at a specific time needs to be obtained in urban road traffic, it suffices to acquire the corresponding monitoring video for analysis and detection: if the detected result is abnormal, the traffic is in a congestion state; otherwise, it is not.
As shown in fig. 1, the method includes steps S110 to S150.
S110, if a current frame video image in a preset monitoring video is received, acquiring a target detection frame of the current frame video image.
If a current frame video image in a preset monitoring video is received, the target detection frame of the current frame video image is acquired. The target detection frame of the current frame video image is a detection frame containing the target vehicle; that is, the target vehicle lies inside the frame. The target vehicle is a vehicle entering a preset position in the current frame video image, and the preset position may be a position in the middle of the road. The target detection frame contains feature information of the target vehicle, including its posture and driving direction, which makes it convenient for the terminal to acquire the corresponding target detection frame of the vehicle in subsequent image frames. Specifically, a decoder in the terminal decodes the surveillance video to restore it to individual frames, and target detection is then performed on each frame; when a frame contains a target vehicle, the target detection frame of that vehicle can be obtained from it.
In an embodiment, as shown in fig. 2, step S110 includes sub-steps S111, S112 and S113.
And S111, acquiring a road image in the current frame video image based on a preset road mask image.
The road image in the current frame video image is acquired based on a preset road mask image. The road mask image is a mask of the road in the monitoring video that contains no target vehicle, whereas the current frame video image does contain the target vehicle. Because the contour of the road in the video images is fixed while vehicles may or may not be present on it, the road region of the current frame video image is covered by the road mask image and the frame is then segmented to obtain the road image. This avoids having to run target detection over the entire area of the current frame video image in the subsequent detection step, and improves detection efficiency.
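Applying the mask can be sketched as follows (an illustrative NumPy sketch; the function name and mask encoding are assumptions, not part of the claims):

```python
import numpy as np

def apply_road_mask(frame: np.ndarray, road_mask: np.ndarray) -> np.ndarray:
    """Zero out everything outside the road so that target detection only
    needs to scan road pixels.

    frame:     H x W x 3 current frame video image
    road_mask: H x W binary mask (1 = road), built from vehicle-free frames
    """
    return frame * road_mask[..., None]   # broadcast the mask over the 3 channels
```

The masked frame (rather than the whole frame) is then passed to the target detection model.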
And S112, acquiring a plurality of candidate frames from the road image in the current frame video image according to a preset target detection model.
A plurality of candidate frames are acquired from the road image in the current frame video image according to a preset target detection model. Specifically, the target detection model extracts rectangular bounding boxes containing feature information of the target vehicle from the current frame video image; these rectangular bounding boxes are the candidate frames. After the current frame video image is input into the target detection model, the model outputs the candidate frames, among which is the target detection frame. All candidate frames relate to the target vehicle in the current frame video image and contain image information of part or all of the target vehicle.
S113, screening out the target detection frame of the current frame video image from the candidate frames according to a preset non-maximum suppression algorithm.
The target detection frame of the current frame video image is screened out from the candidate frames according to a preset non-maximum suppression algorithm. The non-maximum suppression (NMS) algorithm is commonly used in computer vision for edge detection, face detection, target detection and so on; in this embodiment it is used for target detection on the video images of the surveillance video. Because a large number of candidate frames are generated at the same target position during detection, and these candidate frames may overlap with each other, the target detection frame must be found among them by non-maximum suppression. When the target detection model outputs the candidate frames, it simultaneously outputs the confidence of each one, i.e. the probability that the target vehicle is inside that candidate frame, and the NMS algorithm screens the candidate frames according to these confidences to obtain the target detection frame of the current frame video image.
The specific flow of the non-maximum suppression algorithm is as follows. First, the candidate frames are sorted in descending order of confidence, and candidate frames whose confidence is below a preset first threshold are eliminated. The area of each remaining candidate frame is then calculated, and the IoU between the candidate frame with the highest confidence and each of the remaining candidate frames is computed. If a computed IoU exceeds a preset second threshold, the corresponding remaining candidate frame is eliminated; the candidate frames finally retained yield the target detection frame of the current frame video image. IoU (Intersection over Union) is a concept used in target detection that expresses the degree of overlap between two frames, i.e. the ratio of the area of their intersection to the area of their union. In this embodiment, the preset first threshold is set to 0.3 and the preset second threshold is set to 0.5.
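The flow above can be sketched as a greedy NMS routine (an illustrative sketch; box format `(x1, y1, x2, y2)` and default thresholds mirror the embodiment's 0.3 / 0.5 values but are assumptions as code):

```python
import numpy as np

def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, conf_thresh=0.3, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of the kept boxes."""
    # 1. drop candidates below the confidence threshold, sort the rest descending
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= conf_thresh]
    keep = []
    while order:
        best = order.pop(0)
        keep.append(int(best))
        # 2. discard remaining candidates whose IoU with the best box is too high
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep
```

With heavily overlapping duplicates around one vehicle, only the highest-confidence frame survives, which becomes the target detection frame.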
In one embodiment, as shown in fig. 3, step S112 includes sub-steps S1121, S1122, and S1123.
S1121, inputting the road image in the current frame video image into a pre-trained depth residual error network model to obtain a first feature map of the road image in the current frame video image.
The road image in the current frame video image is input into a pre-trained depth residual network model to obtain a first feature map of the road image. The depth residual network introduces residual blocks on top of a convolutional neural network, to address the large error and reduced accuracy of the feature map obtained when features are extracted from the current frame video image with a plain convolutional neural network. In the embodiment of the present invention, the depth residual network model adopts a ResNet50 residual network, and the size of the road image in the current frame video image is 1024 × 1024. The ResNet50 network comprises five convolution layer groups: the first group contains 1 residual block composed of 64 convolution kernels of 7 × 7; the second group contains 3 identical residual blocks, each composed in sequence of 64 1 × 1, 64 3 × 3 and 256 1 × 1 convolution kernels; the third group contains 4 identical residual blocks, each composed in sequence of 128 1 × 1, 128 3 × 3 and 512 1 × 1 convolution kernels; the fourth group contains 6 identical residual blocks, each composed in sequence of 256 1 × 1, 256 3 × 3 and 1024 1 × 1 convolution kernels; and the fifth group contains 3 identical residual blocks, each composed in sequence of 512 1 × 1, 512 3 × 3 and 2048 1 × 1 convolution kernels.
After the road image in the current frame video image is input into the depth residual network model, 5 feature matrices are output; these 5 feature matrices constitute the first feature map. Their dimensions are 512 × 512 × 32, 256 × 256 × 64, 128 × 128 × 256, 64 × 64 × 512 and 32 × 32 × 1024 respectively, where each dimension is given as height, width and number of channels.
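The five spatial sizes above correspond to downsampling the 1024 × 1024 input by strides of 2, 4, 8, 16 and 32. A small sketch makes the relationship explicit (the function name is an assumption; the stride and channel tuples simply restate the embodiment's figures):

```python
def backbone_shapes(input_size=1024, strides=(2, 4, 8, 16, 32),
                    channels=(32, 64, 256, 512, 1024)):
    """(height, width, channels) of the five backbone feature maps, obtained
    by dividing the input size by each stage's cumulative stride."""
    return [(input_size // s, input_size // s, c) for s, c in zip(strides, channels)]
```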
And S1122, inputting the first feature map into a pre-trained feature pyramid network model to obtain a second feature map.
The first feature map is input into a pre-trained feature pyramid network model to obtain a second feature map. Specifically, the feature pyramid network (FPN) model is designed to extract image features following the feature pyramid concept, with the aim of improving the accuracy and speed of feature extraction; it can replace the feature extractor in Faster R-CNN and generate higher-quality feature map pyramids. It consists of a bottom-up part and a top-down part: the bottom-up part is the feature extraction of a conventional convolutional network, in which spatial resolution decreases and spatial information is lost as the convolution deepens, so the FPN adds top-down upsampling on top of the conventional network. The second feature map is thus output after the first feature map is input into the FPN model. In the embodiment of the present invention, the first feature map consists of 5 feature matrices with dimensions 512 × 512 × 32, 256 × 256 × 64, 128 × 128 × 256, 64 × 64 × 512 and 32 × 32 × 1024; after these 5 feature matrices are input into the FPN model, 5 new feature matrices, i.e. the second feature map, are output with dimensions 256 × 256 × 64, 128 × 128 × 64, 64 × 64 × 64, 32 × 32 × 64 and 16 × 16 × 64, where each dimension is given as height, width and number of channels.
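The top-down pathway can be sketched as follows. This is a simplified illustration, not the FPN as actually trained: the 1 × 1 lateral convolutions are assumed to have already projected all maps to a common channel count, and nearest-neighbour upsampling stands in for the learned pathway.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x spatial upsampling of an (H, W, C) map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fpn_topdown(laterals):
    """Top-down pathway: start from the coarsest map and merge downwards.

    laterals: list of (H, W, C) maps, finest first, each already projected
    to a common channel count.
    """
    merged = [laterals[-1]]                       # coarsest level passes through
    for lat in reversed(laterals[:-1]):
        merged.append(lat + upsample2x(merged[-1]))  # add upsampled coarser map
    return merged[::-1]                           # finest first again
```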
S1123, inputting the second feature map into a pre-trained area generation network model to obtain a plurality of candidate frames of the current frame video image.
The second feature map is input into a pre-trained region generation network (region proposal network) model to obtain a plurality of candidate frames of the current frame video image. Specifically, the region generation network model is a model used to extract the plurality of candidate frames from the second feature map: after the second feature map is input, the candidate frames of the current frame video image are generated by size transformation, centred on the anchor points of a sliding window of preset size. In the embodiment of the present invention, the size of the sliding window is 3 × 3.
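Anchor-centred candidate generation can be sketched as follows (an illustrative sketch of the general technique; the scales, aspect ratios and stride values are hypothetical, since the patent does not specify them):

```python
def generate_anchors(feat_h, feat_w, stride, scales=(64, 128), ratios=(0.5, 1.0, 2.0)):
    """Candidate boxes (x1, y1, x2, y2) centred on each sliding-window
    position of a feature map, one per scale/ratio combination."""
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            # anchor point in input-image coordinates
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w, h = s * (r ** 0.5), s / (r ** 0.5)  # keep area ~ s*s
                    anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors
```

The network then scores and refines these boxes; scoring and regression are omitted here.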
In an embodiment, before step S110, the method further includes a step S110a: acquiring the road mask image according to the monitoring video, wherein the road mask image is a mask image of the road in the monitoring video.
The road mask image is acquired according to the monitoring video; it is a mask image of the road in the monitoring video. The road in the monitoring video serves as the foreground, and the area outside the road as the background. To obtain the road mask image, the monitoring video is decoded into an image set of frame-by-frame video images; video images containing no target vehicle are then selected from the image set and input into a preset background model, after which the road mask image can be obtained.
In one embodiment, as shown in FIG. 4, step S110a includes sub-steps S110a1, S110a2, and S110a3.
S110a1, obtaining the video image which does not contain the target vehicle in the monitoring video as the target video image.
The video images in the monitoring video that do not contain the target vehicle are acquired as the target video image. Specifically, the surveillance video is decoded by a decoder to obtain an image set of its video images, and the video images containing the target vehicle are then removed from the image set. The target video image is thereby composed of the video images in the image set that do not contain the target vehicle.
S110a2, inputting the target video image into a preset background model to obtain a background image of the target video image.
The target video image is input into a preset background model to obtain a background image of the target video image. Specifically, the background image of the target video image is the background image of the surveillance video with the road removed, and the background model is a model used to obtain this background image from the target video image. The background model may be obtained through the ViBe algorithm, through Gaussian mixture modelling, or by background modelling of the surveillance video images with a convolutional neural network. In addition, since the target video image is composed of a plurality of video images not containing the target vehicle, inputting the target video image into the background model means inputting each of these video images in turn; the background model outputs one background image for each of them, and the output background images are then averaged to obtain the mean background image of the target video image. The background model adopted in the embodiment of the invention is obtained by training a convolutional neural network on the video images of the monitoring video.
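The averaging step can be sketched as follows (a minimal NumPy sketch; the function name is an assumption, and the per-frame background model itself is omitted):

```python
import numpy as np

def mean_background(frames):
    """Average the per-frame background images (one per vehicle-free frame)
    into a single mean background image."""
    return np.mean(np.stack(frames, axis=0), axis=0)
```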
S110a3, acquiring the road mask image according to the background image of the target video image.
And acquiring the road mask image according to the background image of the target video image. Specifically, since the background image of the target video image does not include the road in the surveillance video, the road mask image is obtained by applying a frame difference method to the background image of the target video image and the current frame video image.
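The frame difference method above can be sketched as a per-pixel absolute difference followed by thresholding. The threshold value of 30 is an assumption for illustration; the patent does not specify one.

```python
import numpy as np

def road_mask(background, current_frame, threshold=30):
    """Frame-difference method: mark pixels whose absolute difference
    from the (road-free) background image exceeds the threshold.
    Returns a binary mask (0 or 255) the same height/width as the input."""
    diff = np.abs(background.astype(np.int16) - current_frame.astype(np.int16))
    if diff.ndim == 3:  # colour input: take the largest channel difference
        diff = diff.max(axis=2)
    return (diff > threshold).astype(np.uint8) * 255
```
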
S120, comparing the target detection frame of the current frame video image with the target detection frame of the previous frame video image to obtain whether the traffic state of the current frame video image is a suspicious state.
And comparing the target detection frame of the current frame video image with the target detection frame of the previous frame video image to obtain whether the traffic state of the current frame video image is a suspicious state. Specifically, the terminal obtains the data information of the target detection frame of the current frame video image, which includes its position information, and compares it with the position information of the target detection frame of the previous frame video image to determine whether the position of the current frame's target detection frame has shifted relative to that of the previous frame. If so, the terminal further determines the degree of the shift, defined as the ratio of the area of the previous frame's target detection frame that does not overlap the current frame's target detection frame to the total area of the previous frame's target detection frame. By judging whether this shift degree is smaller than a preset shift degree, the terminal determines whether the traffic of the current frame video image is in a suspicious state. A suspicious state means that the current frame video image in the surveillance video is suspected of entering an abnormal state, but it has not yet been determined that it actually has.
The target detection frame of the previous frame video image is obtained in the same way as the target detection frame of the current frame video image. It should be noted that the method for acquiring the target detection frame of any video image other than the current frame is the same as that for the current frame. In the embodiment of the present invention, the preset shift degree is 0.5: if the shift degree of the position of the target detection frame of the current frame video image is less than 0.5, it is determined that the traffic in the current frame video image has entered a suspicious state.
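The shift-degree comparison can be sketched as below. This is an illustrative reading of the definition in the text, with an assumed `(x1, y1, x2, y2)` box format and the 0.5 threshold of this embodiment; the function names are hypothetical.

```python
def offset_degree(prev_box, curr_box):
    """Ratio of the previous frame's box area that is NOT covered by the
    current frame's box. Boxes are (x1, y1, x2, y2) tuples."""
    px1, py1, px2, py2 = prev_box
    cx1, cy1, cx2, cy2 = curr_box
    # Overlap rectangle (clamped to zero when the boxes are disjoint).
    iw = max(0.0, min(px2, cx2) - max(px1, cx1))
    ih = max(0.0, min(py2, cy2) - max(py1, cy1))
    inter = iw * ih
    prev_area = (px2 - px1) * (py2 - py1)
    return 1.0 - inter / prev_area

def is_suspicious(prev_box, curr_box, max_offset=0.5):
    """Traffic is suspicious (near-stationary) when the box barely moved."""
    return offset_degree(prev_box, curr_box) < max_offset
```

A stationary vehicle yields an offset degree of 0 (fully overlapping boxes), while a vehicle that has moved completely out of its previous box yields 1.
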
S130, if the traffic state of the current frame video image in the monitoring video is a suspicious state, acquiring the initial time of the suspicious state according to a preset backtracking rule.
If the traffic state of the current frame video image in the monitoring video is a suspicious state, acquiring the initial time of the suspicious state according to a preset backtracking rule. Specifically, the backtracking rule is the rule by which the initial time of the suspicious state is obtained by comparing the target detection frames of consecutive pairs of video images preceding the current frame video image. The comparison requires that both of the two consecutive video images have a target detection frame; if either has none, no comparison is needed. In addition, the terminal acquires not only the position information of each target detection frame but also the time information of the video image to which it belongs. Since the current frame video image is already in the suspicious state but is not necessarily the frame at which the suspicious state began, the initial time can be found by comparing the target detection frames of consecutive pairs of video images before the current frame according to the backtracking rule; each comparison may be performed as in step S120.
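The backtracking rule can be sketched as a backward walk over the frame history. The `history` structure and the `still` predicate are hypothetical names introduced for illustration: `history[i]` pairs a timestamp with that frame's detection box (or `None`), and `still` would be the shift-degree test of step S120.

```python
def backtrack_initial_time(history, current_idx, still):
    """Walk backward from the current frame; the suspicious state starts
    at the earliest consecutive frame still satisfying the `still`
    (near-stationary) predicate. history[i] = (timestamp, box_or_None)."""
    start = current_idx
    for i in range(current_idx, 0, -1):
        _, box_prev = history[i - 1]
        _, box_curr = history[i]
        if box_prev is None or box_curr is None:
            break  # no detection box in one frame: nothing to compare
        if not still(box_prev, box_curr):
            break  # motion detected here: the suspicious state began later
        start = i - 1
    return history[start][0]
```
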
S140, acquiring the end time of the suspicious state according to the video image after the current frame video image.
And acquiring the end time of the suspicious state according to the video image after the current frame video image. Specifically, the terminal compares the target detection frames of consecutive pairs of video images after the current frame video image to obtain the end time of the suspicious state of the traffic of the current frame video image in the monitoring video. The comparison requires that both of the two consecutive video images have a target detection frame; if either has none, no comparison is needed. Each comparison may be performed as in step S120.
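The forward scan for the end time mirrors the backward backtracking sketch. Again, `history` and the `still` predicate are hypothetical names: `history[i]` pairs a timestamp with that frame's detection box (or `None`), and `still` would be the shift-degree test of step S120.

```python
def forward_end_time(history, current_idx, still):
    """Walk forward from the current frame; the suspicious state ends at
    the last consecutive frame still satisfying the `still`
    (near-stationary) predicate. history[i] = (timestamp, box_or_None)."""
    end = current_idx
    for i in range(current_idx, len(history) - 1):
        _, box_curr = history[i]
        _, box_next = history[i + 1]
        if box_curr is None or box_next is None:
            break  # no detection box in one frame: nothing to compare
        if not still(box_curr, box_next):
            break  # motion resumed: the suspicious state ended here
        end = i + 1
    return history[end][0]
```
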
S150, judging whether the duration of the initial time and the ending time exceeds a preset time threshold value so as to obtain whether the traffic in the current frame video image is in an abnormal state.
And judging whether the duration from the initial time to the ending time exceeds a preset time threshold so as to obtain whether the traffic in the current frame video image is in an abnormal state. Specifically, the abnormal state is a state in which the traffic of the current frame video image in the surveillance video is congested. From the initial time and the end time, the terminal obtains the duration for which the traffic of the current frame video image has been in the suspicious state, and then judges whether this duration exceeds the preset time threshold: if so, the traffic in the video image is in the congested state; if not, it is not. In the embodiment of the invention, the preset time threshold is 10 s; that is, if the traffic of the current frame video image in the monitoring video has been in the suspicious state for more than 10 s, it is judged to be in the congested state, and if for less than 10 s, it is judged not to be in the congested state.
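The final decision reduces to a duration comparison, sketched below with the 10 s threshold of this embodiment (timestamps assumed to be in seconds; the function name is hypothetical).

```python
def is_congested(initial_time, end_time, time_threshold=10.0):
    """Traffic is judged congested (abnormal) when the suspicious state
    lasted longer than the preset time threshold (10 s here)."""
    return (end_time - initial_time) > time_threshold
```
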
The embodiment of the present invention further provides a device 100 for determining a static traffic anomaly, which is used for implementing any embodiment of the method for determining a static traffic anomaly. Specifically, referring to fig. 5, fig. 5 is a schematic block diagram of a static traffic anomaly determination device 100 according to an embodiment of the present invention. The apparatus may be configured in a terminal.
As shown in fig. 5, the static traffic abnormality determination apparatus 100 includes a target detection frame acquisition unit 110, a comparison unit 120, an initial time determination unit 130, an end time determination unit 140, and a determination unit 150.
The target detection frame acquiring unit 110 is configured to acquire a target detection frame of a current frame video image if the current frame video image in a preset surveillance video is received.
In one embodiment, as shown in fig. 6, the object detection frame acquiring unit 110 includes a first acquiring unit 111, a second acquiring unit 112, and a screening unit 113.
A first obtaining unit 111, configured to obtain a road image in the current frame video image based on a preset road mask image.
A second obtaining unit 112, configured to obtain a plurality of candidate frames from the road image in the current frame video image according to a preset target detection model.
And a screening unit 113, configured to screen out the target detection frame of the current frame video image from the multiple candidate frames according to a preset non-maximum suppression algorithm.
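The non-maximum suppression applied by the screening unit 113 can be illustrated with a minimal greedy-NMS sketch. This is a generic illustration, not the patent's implementation: an `(x1, y1, x2, y2)` box format and an IoU threshold of 0.5 are assumed.

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring candidate box and
    drop remaining candidates whose IoU with it exceeds the threshold.
    boxes: (N, 4) array of (x1, y1, x2, y2); returns kept indices."""
    order = np.argsort(scores)[::-1]  # candidates by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the kept box with every remaining candidate.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_threshold]
    return keep
```
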
In an embodiment, as shown in fig. 7, the second obtaining unit 112 includes a first feature map generating unit 1121, a second feature map generating unit 1122, and a third obtaining unit 1123.
A first feature map generating unit 1121, configured to input the road image in the current frame video image into a pre-trained depth residual error network model, so as to obtain a first feature map of the road image in the current frame video image.
The second feature map generating unit 1122 is configured to input the first feature map into a pre-trained feature pyramid network model to obtain a second feature map.
A third obtaining unit 1123, configured to input the second feature map into a pre-trained area-generating network model to obtain multiple candidate frames of the current frame video image.
In an embodiment, the apparatus 100 for determining static traffic anomaly further includes a road mask map obtaining unit 110 a.
A road mask map obtaining unit 110a, configured to obtain the road mask map according to the monitoring video, where the road mask map is a mask map of a road in the monitoring video.
In one embodiment, as shown in fig. 8, the road mask map acquisition unit 110a includes a video image acquisition unit 110a1, a background image acquisition unit 110a2, and a fourth acquisition unit 110a 3.
The video image acquiring unit 110a1 is configured to acquire, as a target video image, a video image of a target vehicle that is not included in the surveillance video.
A background image obtaining unit 110a2, configured to input the target video image into a preset background model to obtain a background image of the target video image.
A fourth obtaining unit 110a3, configured to obtain the road mask map according to a background image of the target video image.
The comparing unit 120 is configured to compare the target detection frame of the current frame video image with the target detection frame of the previous frame video image to obtain whether the traffic state of the current frame video image is a suspicious state.
The initial time determining unit 130 is configured to, if the traffic state of the current frame video image in the surveillance video is a suspicious state, obtain an initial time of the suspicious state according to a preset backtracking rule.
An end time determining unit 140, configured to obtain an end time of the suspicious state according to a video image after the current frame video image.
The determining unit 150 is configured to determine whether durations of the initial time and the end time exceed a preset time threshold, so as to determine whether traffic in the current frame video image is in an abnormal state.
The device 100 for judging static traffic abnormality provided by the embodiment of the invention is used for executing the method for judging static traffic abnormality, and the method comprises the steps of acquiring a target detection frame of a current frame video image if the current frame video image in a preset monitoring video is received; comparing the target detection frame of the current frame video image with the target detection frame of the previous frame video image to obtain whether the traffic state of the current frame video image is a suspicious state; if the traffic state of the current frame video image in the monitoring video is a suspicious state, acquiring the initial time of the suspicious state according to a preset backtracking rule; acquiring the end time of the suspicious state according to the video image behind the current frame video image; and judging whether the duration of the initial time and the ending time exceeds a preset time threshold value or not so as to obtain whether the traffic in the current frame video image is in an abnormal state or not.
Referring to fig. 9, fig. 9 is a schematic block diagram of a computer device according to an embodiment of the present invention.
Referring to fig. 9, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, causes the processor 502 to perform a method for determining static traffic anomalies.
The processor 502 is used to provide computing and control capabilities that support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 in the non-volatile storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 may be caused to execute a method for determining a static traffic abnormality.
The network interface 505 is used for network communication, such as providing transmission of data information. Those skilled in the art will appreciate that the configuration shown in fig. 9 is a block diagram of only a portion of the configuration associated with aspects of the present invention and does not limit the computer device 500 to which aspects of the present invention may be applied; a particular computer device 500 may include more or fewer components than those shown, may combine certain components, or may have a different arrangement of components.
Wherein the processor 502 is configured to run the computer program 5032 stored in the memory to implement the following functions: if a current frame video image in a preset monitoring video is received, acquiring a target detection frame of the current frame video image; comparing the target detection frame of the current frame video image with the target detection frame of the previous frame video image to obtain whether the traffic state of the current frame video image is a suspicious state; if the traffic state of the current frame video image in the monitoring video is a suspicious state, acquiring the initial time of the suspicious state according to a preset backtracking rule; acquiring the end time of the suspicious state according to the video image behind the current frame video image; and judging whether the duration of the initial time and the ending time exceeds a preset time threshold value or not so as to obtain whether the traffic in the current frame video image is in an abnormal state or not.
Those skilled in the art will appreciate that the embodiment of computer device 500 illustrated in FIG. 9 does not constitute a limitation on the particular configuration of computer device 500, and that in other embodiments, computer device 500 may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. For example, in some embodiments, the computer device 500 may only include the memory and the processor 502, and in such embodiments, the structure and function of the memory and the processor 502 are the same as those of the embodiment shown in fig. 9, and are not described herein again.
It should be understood that in the present embodiment, the processor 502 may be a Central Processing Unit (CPU), and it may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In another embodiment of the present invention, a storage medium is provided. The storage medium may be a non-volatile computer-readable storage medium. The storage medium stores a computer program 5032, wherein the computer program 5032 when executed by the processor 502 performs the steps of: if a current frame video image in a preset monitoring video is received, acquiring a target detection frame of the current frame video image; comparing the target detection frame of the current frame video image with the target detection frame of the previous frame video image to obtain whether the traffic state of the current frame video image is a suspicious state; if the traffic state of the current frame video image in the monitoring video is a suspicious state, acquiring the initial time of the suspicious state according to a preset backtracking rule; acquiring the end time of the suspicious state according to the video image behind the current frame video image; and judging whether the duration of the initial time and the ending time exceeds a preset time threshold value or not so as to obtain whether the traffic in the current frame video image is in an abnormal state or not.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, devices and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both, and the components and steps of the examples have been described above generally in terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only a logical division, and there may be other divisions when the actual implementation is performed, or units having the same function may be grouped into one unit, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present invention essentially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device 500 (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A method for distinguishing static traffic abnormality is characterized by comprising the following steps:
if a current frame video image in a preset monitoring video is received, acquiring a target detection frame of the current frame video image;
comparing the target detection frame of the current frame video image with the target detection frame of the previous frame video image to obtain whether the traffic state of the current frame video image is a suspicious state;
if the traffic state of the current frame video image in the monitoring video is a suspicious state, acquiring the initial time of the suspicious state according to a preset backtracking rule;
acquiring the end time of the suspicious state according to the video image behind the current frame video image;
and judging whether the duration of the initial time and the ending time exceeds a preset time threshold value or not so as to obtain whether the traffic in the current frame video image is in an abnormal state or not.
2. The method for determining static traffic abnormality according to claim 1, wherein the obtaining of the target detection frame of the current frame video image includes:
acquiring a road image in the current frame video image based on a preset road mask image;
acquiring a plurality of candidate frames from the road image in the current frame video image according to a preset target detection model;
and screening out the target detection frame of the current frame video image from the candidate frames according to a preset non-maximum value suppression algorithm.
3. The method for determining static traffic abnormality according to claim 2, wherein the obtaining a plurality of candidate frames from the road image in the current frame video image according to a preset target detection model includes:
inputting the road image in the current frame video image into a pre-trained depth residual error network model to obtain a first characteristic diagram of the road image in the current frame video image;
inputting the first feature map into a pre-trained feature pyramid network model to obtain a second feature map;
and inputting the second feature map into a pre-trained area generation network model to obtain a plurality of candidate frames of the current frame video image.
4. The method for determining static traffic abnormality according to claim 2, wherein before the obtaining of the target detection frame of the current frame video image, the method further comprises:
and acquiring the road mask image according to the monitoring video, wherein the road mask image is a mask image of a road in the monitoring video.
5. The method for determining the static traffic abnormality according to claim 4, wherein the obtaining the road mask map according to the surveillance video includes:
acquiring a video image which does not contain a target vehicle in the monitoring video as a target video image;
inputting the target video image into a preset background model to obtain a background image of the target video image;
and acquiring the road mask image according to the background image of the target video image.
6. A device for discriminating a static traffic abnormality, comprising:
the target detection frame acquisition unit is used for acquiring a target detection frame of a current frame video image if the current frame video image in a preset monitoring video is received;
the comparison unit is used for comparing the target detection frame of the current frame video image with the target detection frame of the previous frame video image to obtain whether the traffic state of the current frame video image is a suspicious state;
the initial time determining unit is used for acquiring the initial time of the suspicious state according to a preset backtracking rule if the traffic state of the current frame video image in the monitoring video is the suspicious state;
an end time determining unit, configured to obtain an end time of the suspicious state according to a video image after the current frame video image;
and the judging unit is used for judging whether the duration of the initial time and the ending time exceeds a preset time threshold value so as to obtain whether the traffic in the current frame video image is in an abnormal state.
7. The apparatus for discriminating a static traffic abnormality according to claim 6, wherein the target detection frame acquisition unit includes:
the first acquisition unit is used for acquiring a road image in the current frame video image based on a preset road mask image;
the second acquisition unit is used for acquiring a plurality of candidate frames from the road image in the current frame video image according to a preset target detection model;
and the screening unit is used for screening out the target detection frame of the current frame video image from the candidate frames according to a preset non-maximum suppression algorithm.
8. The apparatus for discriminating a static traffic abnormality according to claim 7, wherein the second acquiring unit includes:
a first feature map generation unit, configured to input the road image in the current frame video image into a pre-trained depth residual error network model to obtain a first feature map of the road image in the current frame video image;
the second characteristic diagram generating unit is used for inputting the first characteristic diagram into a pre-trained characteristic pyramid network model to obtain a second characteristic diagram;
and the third acquisition unit is used for inputting the second feature map into a pre-trained area generation network model to acquire a plurality of candidate frames of the current frame video image.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method for discriminating between static traffic anomalies according to any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored which, when executed by a processor, causes the processor to execute the method of discriminating a static traffic abnormality according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010687229.2A CN111832492B (en) | 2020-07-16 | 2020-07-16 | Static traffic abnormality judging method and device, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010687229.2A CN111832492B (en) | 2020-07-16 | 2020-07-16 | Static traffic abnormality judging method and device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111832492A true CN111832492A (en) | 2020-10-27 |
CN111832492B CN111832492B (en) | 2024-06-04 |
Family
ID=72923030
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010687229.2A Active CN111832492B (en) | 2020-07-16 | 2020-07-16 | Static traffic abnormality judging method and device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111832492B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112270298A (en) * | 2020-11-16 | 2021-01-26 | 北京深睿博联科技有限责任公司 | Method and device for identifying road abnormity, equipment and computer readable storage medium |
CN113052047A (en) * | 2021-03-18 | 2021-06-29 | 北京百度网讯科技有限公司 | Traffic incident detection method, road side equipment, cloud control platform and system |
CN113409587A (en) * | 2021-06-16 | 2021-09-17 | 北京字跳网络技术有限公司 | Abnormal vehicle detection method, device, equipment and storage medium |
CN114004886A (en) * | 2021-10-29 | 2022-02-01 | 中远海运科技股份有限公司 | Camera displacement judging method and system for analyzing high-frequency stable points of image |
CN114170301A (en) * | 2022-02-09 | 2022-03-11 | 城云科技(中国)有限公司 | Abnormal municipal facility positioning method and device and application thereof |
CN114821421A (en) * | 2022-04-28 | 2022-07-29 | 南京理工大学 | Traffic abnormal behavior detection method and system |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180137647A1 (en) * | 2016-11-15 | 2018-05-17 | Samsung Electronics Co., Ltd. | Object detection method and apparatus based on dynamic vision sensor |
WO2018130016A1 (en) * | 2017-01-10 | 2018-07-19 | 哈尔滨工业大学深圳研究生院 | Parking detection method and device based on monitoring video |
CN109191369A (en) * | 2018-08-06 | 2019-01-11 | 三星电子(中国)研发中心 | 2D pictures turn method, storage medium and the device of 3D model |
CN110263634A (en) * | 2019-05-13 | 2019-09-20 | 平安科技(深圳)有限公司 | Monitoring method, device, computer equipment and the storage medium of monitoring objective |
CN110390262A (en) * | 2019-06-14 | 2019-10-29 | 平安科技(深圳)有限公司 | Video analysis method, apparatus, server and storage medium |
CN110414313A (en) * | 2019-06-06 | 2019-11-05 | 平安科技(深圳)有限公司 | Abnormal behaviour alarm method, device, server and storage medium |
CN110427800A (en) * | 2019-06-17 | 2019-11-08 | 平安科技(深圳)有限公司 | Video object acceleration detection method, apparatus, server and storage medium |
CN110738686A (en) * | 2019-10-12 | 2020-01-31 | 四川航天神坤科技有限公司 | Static and dynamic combined video man-vehicle detection method and system |
CN110969640A (en) * | 2018-09-29 | 2020-04-07 | Tcl集团股份有限公司 | Video image segmentation method, terminal device and computer-readable storage medium |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180137647A1 (en) * | 2016-11-15 | 2018-05-17 | Samsung Electronics Co., Ltd. | Object detection method and apparatus based on dynamic vision sensor |
WO2018130016A1 (en) * | 2017-01-10 | 2018-07-19 | 哈尔滨工业大学深圳研究生院 | Parking detection method and device based on monitoring video |
CN109191369A (en) * | 2018-08-06 | 2019-01-11 | 三星电子(中国)研发中心 | Method, storage medium and device for converting 2D pictures into 3D models |
CN110969640A (en) * | 2018-09-29 | 2020-04-07 | Tcl集团股份有限公司 | Video image segmentation method, terminal device and computer-readable storage medium |
CN110263634A (en) * | 2019-05-13 | 2019-09-20 | 平安科技(深圳)有限公司 | Method and device for monitoring a target, computer equipment and storage medium |
CN110414313A (en) * | 2019-06-06 | 2019-11-05 | 平安科技(深圳)有限公司 | Abnormal behaviour alarm method, device, server and storage medium |
CN110390262A (en) * | 2019-06-14 | 2019-10-29 | 平安科技(深圳)有限公司 | Video analysis method, apparatus, server and storage medium |
CN110427800A (en) * | 2019-06-17 | 2019-11-08 | 平安科技(深圳)有限公司 | Video object acceleration detection method, apparatus, server and storage medium |
CN110738686A (en) * | 2019-10-12 | 2020-01-31 | 四川航天神坤科技有限公司 | Static and dynamic combined video man-vehicle detection method and system |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112270298A (en) * | 2020-11-16 | 2021-01-26 | 北京深睿博联科技有限责任公司 | Method and device for identifying road abnormity, equipment and computer readable storage medium |
CN113052047A (en) * | 2021-03-18 | 2021-06-29 | 北京百度网讯科技有限公司 | Traffic incident detection method, road side equipment, cloud control platform and system |
CN113052047B (en) * | 2021-03-18 | 2023-12-29 | 阿波罗智联(北京)科技有限公司 | Traffic event detection method, road side equipment, cloud control platform and system |
CN113409587A (en) * | 2021-06-16 | 2021-09-17 | 北京字跳网络技术有限公司 | Abnormal vehicle detection method, device, equipment and storage medium |
CN113409587B (en) * | 2021-06-16 | 2022-11-22 | 北京字跳网络技术有限公司 | Abnormal vehicle detection method, device, equipment and storage medium |
WO2022262471A1 (en) * | 2021-06-16 | 2022-12-22 | 北京字跳网络技术有限公司 | Anomalous vehicle detection method and apparatus, device, and storage medium |
CN114004886A (en) * | 2021-10-29 | 2022-02-01 | 中远海运科技股份有限公司 | Camera displacement judging method and system for analyzing high-frequency stable points of image |
CN114004886B (en) * | 2021-10-29 | 2024-04-09 | 中远海运科技股份有限公司 | Camera shift discrimination method and system for analyzing high-frequency stable points of image |
CN114170301A (en) * | 2022-02-09 | 2022-03-11 | 城云科技(中国)有限公司 | Abnormal municipal facility positioning method and device and application thereof |
CN114170301B (en) * | 2022-02-09 | 2022-05-17 | 城云科技(中国)有限公司 | Abnormal municipal facility positioning method and device and application thereof |
CN114821421A (en) * | 2022-04-28 | 2022-07-29 | 南京理工大学 | Traffic abnormal behavior detection method and system |
Also Published As
Publication number | Publication date |
---|---|
CN111832492B (en) | 2024-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111832492B (en) | Static traffic abnormality judging method and device, computer equipment and storage medium | |
JP7272533B2 (en) | Systems and methods for evaluating perceptual systems | |
KR101708547B1 (en) | Event detection apparatus and event detection method | |
US10853949B2 (en) | Image processing device | |
CN113361354B (en) | Track component inspection method and device, computer equipment and storage medium | |
CN110458126B (en) | Pantograph state monitoring method and device | |
CN104486618A (en) | Video image noise detection method and device | |
US20190073538A1 (en) | Method and system for classifying objects from a stream of images | |
CN109472811A (en) | Masking method for non-interested objects | |
CN114758271A (en) | Video processing method, device, computer equipment and storage medium | |
CN104573680A (en) | Image detection method, image detection device and traffic violation detection system | |
CN114120127A (en) | Target detection method, device and related equipment | |
CN113569756A (en) | Abnormal behavior detection and positioning method, system, terminal equipment and readable storage medium | |
CN117351395A (en) | Night scene target detection method, system and storage medium | |
CN111814773A (en) | Line-marked parking space identification method and system | |
CN114663347A (en) | Unsupervised object instance detection method and device | |
KR20230066953A (en) | Method and apparatus for analyzing traffic congestion propagation pattern with multiple cctv videos | |
CN117745709A (en) | Railway foreign matter intrusion detection method, system, equipment and medium | |
CN113052019A (en) | Target tracking method and device, intelligent equipment and computer storage medium | |
CN117333795A (en) | River surface flow velocity measurement method and system based on screening post-treatment | |
CN111914830B (en) | Text line positioning method, device, equipment and system in image | |
CN111639597A (en) | Detection method of flag-raising touring event | |
CN115719362B (en) | Method, system, device and storage medium for detecting objects thrown from height | |
CN112990350B (en) | Target detection network training method and target detection network-based coal and gangue identification method | |
CN111553408B (en) | Automatic test method for video recognition software |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||