CN116935272B - Video content detection method and device, electronic equipment and storage medium - Google Patents
- Publication number: CN116935272B (application CN202310857526.0A)
- Authority
- CN
- China
- Prior art keywords: video, curve, distance, similarity, point
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
Abstract
The invention discloses a video content detection method, a device, electronic equipment and a storage medium, wherein the method comprises the following steps: acquiring a video to be detected, wherein the video to be detected comprises a first video and a second video; performing video key frame extraction processing on the video to be detected to obtain video key frames; performing motion characteristic extraction processing on the video key frames to obtain a motion characteristic curve; performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result; and when the video similarity calculation result meets a preset condition, determining that the first video and the second video are repeated content videos. The embodiment of the invention can improve the efficiency of video content detection by carrying out similarity calculation processing on the motion characteristic curve, and can be widely applied to the technical field of video detection.
Description
Technical Field
The present invention relates to the field of video detection technologies, and in particular, to a method and apparatus for detecting video content, an electronic device, and a storage medium.
Background
With the rapid development of networks, video types and the ways of obtaining video resources have become diverse. For copyright-related video types such as film-and-television and user-made content, videos whose titles and covers differ under the same or different search conditions but whose content is largely repeated are widespread on the network. Existing video detection technology generally relies on manual inspection to detect whether video content is repeated, or directly compares the digest algorithm values of video files for repetition judgment; as a result, video detection efficiency is low, and repeated content that has undergone processing such as changes of video format, resolution, color or brightness is difficult to detect.
In view of the foregoing, there is a need for solving the technical problems in the related art.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, an electronic device, and a storage medium for detecting video content, so as to improve the accuracy of video detection.
In one aspect, the present invention provides a video content detection method, including:
acquiring a video to be detected, wherein the video to be detected comprises a first video and a second video;
performing video key frame extraction processing on the video to be detected to obtain video key frames;
performing motion characteristic extraction processing on the video key frames to obtain a motion characteristic curve;
Performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result;
And when the video similarity calculation result meets a preset condition, determining that the first video and the second video are repeated content videos.
Optionally, the performing video key frame extraction processing on the video to be detected to obtain a video key frame includes:
carrying out inter-frame difference processing on two adjacent frames of images which are continuous in time in the video to be detected to obtain an average inter-frame difference image;
and selecting an image frame with the average inter-frame difference local maximum value from the average inter-frame difference image to be determined as a video key frame.
Optionally, the performing motion feature extraction processing on the video key frame to obtain a motion feature curve includes:
performing image segmentation processing on the video key frame to obtain an object contour coordinate array;
Performing distance contrast processing on the video key frames according to the object contour coordinate array to obtain a target object;
extracting the position and time of the target object to obtain a key frame position array and a key frame time array;
And calculating the key frame position array and the key frame time array according to the smooth cubic polynomial interpolation motion curve expression to obtain a motion characteristic curve.
Optionally, the performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result, including:
Performing object contour comparison processing on the first video and the second video to obtain the same object;
Performing similar distance calculation on the same object according to the motion characteristic curve to obtain a similar distance;
and carrying out video similarity calculation according to the same object and the similar distance to obtain a video similarity calculation result.
Optionally, the performing object contour comparison processing on the first video and the second video to obtain the same object includes:
Object contour information extraction processing is carried out on the first video and the second video respectively, so that first video object contour information and second video object contour information are obtained;
and carrying out normalization processing on the first video object contour information and the second video object contour information, and then carrying out comparison processing on the first video object contour information and the second video object contour information to obtain the same object.
Optionally, the calculating the similar distance to the same object according to the motion characteristic curve to obtain a similar distance includes:
acquiring a motion characteristic curve of the same object in the first video, and determining the motion characteristic curve as a first curve;
acquiring a motion characteristic curve of the same object in the second video, and determining the motion characteristic curve as a second curve;
Filling the first curve and the second curve respectively to obtain a first curve position array and a second curve position array;
and carrying out recursive calculation on the first curve position array and the second curve position array according to a similarity distance calculation formula to obtain a similarity distance.
Optionally, the performing recursive computation on the first curve position array and the second curve position array according to a similarity distance calculation formula to obtain a similarity distance includes:
acquiring a first curve point and a second curve point from the first curve position array, wherein the second curve point is the previous curve point of the first curve point;
Acquiring a third curve point and a fourth curve point from the second curve position array, wherein the fourth curve point is a previous curve point of the third curve point;
performing recursive traversal on the first curve position array and the second curve position array through the first curve point and the third curve point, and performing similarity distance calculation on the first curve point and the third curve point to obtain a similarity distance;
and calculating the similarity distance between the first curve point and the third curve point, wherein the similarity distance calculation comprises the following steps:
Similarity distance calculation is carried out on the second curve point and the fourth curve point, and a first distance is obtained;
similarity distance calculation is carried out on the first curve point and the fourth curve point, and a second distance is obtained;
Performing similarity distance calculation on the second curve point and the third curve point to obtain a third distance;
Comparing the first distance, the second distance and the third distance, and selecting the minimum distance value as a first calculation result;
Carrying out coordinate linear distance calculation on the first curve point and the third curve point to obtain a fourth distance;
And comparing the first calculation result with the fourth distance, and selecting the maximum distance value as a second calculation result.
In another aspect, an embodiment of the present invention further provides a video content detection apparatus, which comprises:
The first module is used for acquiring a video to be detected, wherein the video to be detected comprises a first video and a second video;
The second module is used for extracting and processing video key frames of the video to be detected to obtain video key frames;
The third module is used for extracting the motion characteristics of the video key frames to obtain a motion characteristic curve;
The fourth module is used for carrying out similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result;
and a fifth module, configured to determine that the first video and the second video are duplicate content videos when the video similarity calculation result meets a preset condition.
Optionally, the second module is configured to perform video key frame extraction processing on the video to be detected to obtain a video key frame, and includes:
The first unit is used for carrying out inter-frame difference processing on two adjacent frames of images which are continuous in time in the video to be detected to obtain an average inter-frame difference image;
And a second unit, configured to select an image frame with an average inter-frame difference local maximum value from the average inter-frame difference image, and determine the image frame as a video key frame.
Optionally, the third module is configured to perform motion feature extraction processing on the video keyframe to obtain a motion feature curve, and includes:
The third unit is used for carrying out image segmentation processing on the video key frames to obtain an object contour coordinate array;
A fourth unit, configured to perform distance comparison processing on the video keyframe according to the object profile coordinate array to obtain a target object;
A fifth unit, configured to extract the position and time of the target object to obtain a key frame position array and a key frame time array;
And a sixth unit, configured to calculate the key frame position array and the key frame time array according to a smooth cubic polynomial interpolation motion curve expression, so as to obtain a motion characteristic curve.
Optionally, the fourth module is configured to perform similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result, and includes:
A seventh unit, configured to perform object contour comparison processing on the first video and the second video to obtain the same object;
an eighth unit, configured to perform similar distance calculation on the same object according to the motion characteristic curve, to obtain a similar distance;
and a ninth unit, configured to perform video similarity calculation according to the same object and the similar distance, to obtain a video similarity calculation result.
Optionally, the seventh unit is configured to perform object contour comparison processing on the first video and the second video to obtain the same object, and includes:
the first subunit is used for respectively extracting object contour information of the first video and the second video to obtain first video object contour information and second video object contour information;
And the second subunit is used for carrying out normalization processing on the first video object contour information and the second video object contour information, and then carrying out comparison processing on the first video object contour information and the second video object contour information to obtain the same object.
Optionally, the eighth unit is configured to perform similar distance calculation on the same object according to the motion characteristic curve, to obtain a similar distance, and includes:
a third subunit, configured to acquire a motion characteristic curve of the same object in the first video, and determine the motion characteristic curve as a first curve;
a fourth subunit, configured to acquire a motion characteristic curve of the same object in the second video, and determine the motion characteristic curve as a second curve;
a fifth subunit, configured to fill the first curve and the second curve respectively, to obtain a first curve position array and a second curve position array;
and the sixth subunit is used for carrying out recursive calculation processing on the first curve position array and the second curve position array according to a similarity distance calculation formula to obtain a similarity distance.
The sixth subunit is configured to perform recursive computation on the first curve position array and the second curve position array according to a similarity distance calculation formula, to obtain a similarity distance, and includes:
acquiring a first curve point and a second curve point from the first curve position array, wherein the second curve point is the previous curve point of the first curve point;
Acquiring a third curve point and a fourth curve point from the second curve position array, wherein the fourth curve point is a previous curve point of the third curve point;
performing recursive traversal on the first curve position array and the second curve position array through the first curve point and the third curve point, and performing similarity distance calculation on the first curve point and the third curve point to obtain a similarity distance;
and calculating the similarity distance between the first curve point and the third curve point, wherein the similarity distance calculation comprises the following steps:
Similarity distance calculation is carried out on the second curve point and the fourth curve point, and a first distance is obtained;
similarity distance calculation is carried out on the first curve point and the fourth curve point, and a second distance is obtained;
Performing similarity distance calculation on the second curve point and the third curve point to obtain a third distance;
Comparing the first distance, the second distance and the third distance, and selecting the minimum distance value as a first calculation result;
Carrying out coordinate linear distance calculation on the first curve point and the third curve point to obtain a fourth distance;
And comparing the first calculation result with the fourth distance, and selecting the maximum distance value as a second calculation result.
In another aspect, an embodiment of the present invention further discloses an electronic device, which comprises a processor and a memory;
the memory is used for storing programs;
The processor executes the program to implement the method as described above.
In another aspect, embodiments of the present invention also disclose a computer readable storage medium storing a program for execution by a processor to implement a method as described above.
In another aspect, embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the foregoing method.
Compared with the prior art, the technical scheme provided by the invention has the following technical effects: according to the embodiment of the invention, the motion characteristic curve is obtained by carrying out motion characteristic extraction processing on the video key frame, and then the similarity calculation processing is carried out on the video to be detected according to the motion characteristic curve, so that the video similarity calculation result is obtained; the video similarity can be calculated through the motion characteristic curve, and repeated contents after processing of video formats, resolution, color, brightness changes and the like can be detected.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an implementation environment of a video content detection method according to an embodiment of the present application;
Fig. 2 is a flowchart of a method for detecting video content according to an embodiment of the present application;
fig. 3 is a specific flowchart of a video content detection method according to an embodiment of the present application;
fig. 4 is a flowchart of step S302 in fig. 3;
Fig. 5 is a flowchart of step S303 in fig. 3;
fig. 6 is a flowchart of step S304 in fig. 3;
FIG. 7 is a schematic diagram of similarity distance calculation according to an embodiment of the present application;
FIG. 8 is a timing diagram of one implementation provided by an embodiment of the present application;
fig. 9 is a schematic structural diagram of a video content detection apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In the related art, video repetition detection generally detects whether video content is repeated manually, directly compares the digest algorithm values of video files for repetition judgment, or extracts a frame picture per second of the video and compares the pictures by cosine similarity, hash algorithms, histograms and the like, with the final video repetition degree judged from a similarity obtained by a fixed algorithm. However, manually identifying whether videos are repeated is inefficient; comparing video file digest values cannot detect repetition where the video format, resolution, brightness and the like differ but the content is the same; and the approach of converting to picture similarity only considers the similarity between per-second frame pictures, cannot detect repetition of pictures processed by nonlinear changes of color and the like, and has low video content detection efficiency.
In order to solve the problems in the related art, embodiments of the present invention provide a video content detection method, apparatus, electronic device, and storage medium, where in the video content detection method, a video to be detected is obtained, where the video to be detected includes a first video and a second video; performing video key frame extraction processing on the video to be detected to obtain video key frames; performing motion characteristic extraction processing on the video key frames to obtain a motion characteristic curve; performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result; and when the video similarity calculation result meets a preset condition, determining that the first video and the second video are repeated content videos. According to the embodiment of the invention, the video repeatability is obtained by extracting the object motion characteristics in the video and performing comparison calculation so as to determine whether the video content is repeated or not, so that the accuracy of video content detection is improved.
Fig. 1 is a schematic diagram of an implementation environment of a video content detection method according to an embodiment of the present application. Referring to fig. 1, the software and hardware main body of the implementation environment mainly includes a video object motion feature extractor 101 and a video repeatability calculator 102, where the video object motion feature extractor 101 is used for inputting a video file, extracting video key frames, performing image segmentation, performing the same object estimation extraction in an image, and extracting motion vector features of the same object; the video repetition calculator 102 is configured to perform feature similarity calculation on video segments according to motion vector features of different objects in the video, and perform feature similarity comprehensive calculation on different video segments.
Fig. 2 is a flowchart of a video content detection method provided in the embodiment of the application in the above implementation environment, where a video to be detected and a reference video are input, a video object motion feature extractor performs object extraction and object motion feature curve generation on the video, and a video repeatability calculator performs contour similarity comparison and contour similarity calculation on a plurality of objects in different videos according to the result of the video object motion feature extractor, and finally calculates and outputs a video similarity result to determine whether the video is repeated.
Referring to fig. 3, an embodiment of the present invention provides a video content detection method, including:
s301, acquiring a video to be detected, wherein the video to be detected comprises a first video and a second video;
S302, video key frame extraction processing is carried out on the video to be detected, and video key frames are obtained;
s303, performing motion feature extraction processing on the video key frames to obtain motion feature curves;
S304, performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result;
And S305, when the video similarity calculation result meets a preset condition, determining that the first video and the second video are repeated content videos.
In the embodiment of the invention, a video to be detected is first acquired, wherein the video to be detected comprises a first video and a second video. The first video is a comparison video and the second video is a reference video; alternatively, the first video may serve as the reference video and the second video as the comparison video. Video key frame extraction is performed on the video to be detected to obtain the video key frame pictures in the video to be detected, wherein the video key frames comprise the video key frames of the first video and the video key frames of the second video, and a video key frame represents a video frame in which the video picture changes obviously. Motion characteristic extraction is performed on the video key frames, that is, motion characteristics are respectively extracted for similar objects in the video key frames of the first video and of the second video, to obtain motion characteristic curves, which comprise a first video motion characteristic curve and a second video motion characteristic curve. Similarity calculation is performed on the video to be detected according to the motion characteristic curves, and a video similarity calculation result is obtained by calculating the similarity between the first video motion characteristic curve and the second video motion characteristic curve; when the video similarity calculation result meets a preset condition, it is determined that the first video and the second video are repeated content videos.
Further as an optional embodiment, referring to fig. 4, in step S302, the performing video key frame extraction processing on the video to be detected to obtain a video key frame includes:
s401, carrying out inter-frame difference processing on two adjacent frames of images which are continuous in time in the video to be detected to obtain an average inter-frame difference image;
S402, selecting an image frame with the average inter-frame difference local maximum value from the average inter-frame difference image to be determined as a video key frame.
In the embodiment of the invention, inter-frame difference processing is performed on temporally adjacent frames of the video to be detected. Taking the first video as an example, the absolute differences of the gray values of corresponding pixels in each pair of temporally adjacent frames of the first video are computed in sequence to obtain the average inter-frame difference image. The original frames at the local maxima of the average inter-frame difference are then selected as the key frames of the video, yielding the video key frames.
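As an illustration of this key-frame selection step, the following is a minimal sketch assuming OpenCV is used to read the video and compute gray-value differences; the function name and the strict local-maximum test are illustrative assumptions rather than the patent's prescribed implementation.

```python
import cv2
import numpy as np

def extract_key_frames(video_path):
    """Select frames at local maxima of the average inter-frame difference.

    Sketch only: the mean absolute gray-value difference between temporally
    adjacent frames is computed, and frames whose score is a local maximum
    are kept as key frames.
    """
    cap = cv2.VideoCapture(video_path)
    frames, diffs = [], []
    ok, prev = cap.read()
    while ok:
        ok, cur = cap.read()
        if not ok:
            break
        g_prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        g_cur = cv2.cvtColor(cur, cv2.COLOR_BGR2GRAY)
        # average absolute difference of corresponding pixel gray values
        diffs.append(float(np.mean(cv2.absdiff(g_prev, g_cur))))
        frames.append(cur)
        prev = cur
    cap.release()

    key_frames = []
    for i in range(1, len(diffs) - 1):
        # keep the original frame at each local maximum of the difference sequence
        if diffs[i] > diffs[i - 1] and diffs[i] > diffs[i + 1]:
            key_frames.append((i, frames[i]))
    return key_frames
```

In practice a smoothing window could be applied to the difference sequence before taking local maxima, but that refinement is not described in the text and is left out here.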
Further as an optional embodiment, referring to fig. 5, in step S303, the performing motion feature extraction processing on the video key frame to obtain a motion feature curve includes:
s501, performing image segmentation processing on the video key frame to obtain an object contour coordinate array;
s502, performing distance comparison processing on the video key frames according to the object contour coordinate array to obtain a target object;
s503, extracting the position and time of the target object to obtain a key frame position array and a key frame time array;
S504, calculating the key frame position array and the key frame time array according to the smooth cubic polynomial interpolation motion curve expression to obtain a motion characteristic curve.
In the embodiment of the invention, motion characteristic extraction processing is performed on the video key frames of the video to be detected to obtain a motion characteristic curve; the first video is taken as an example. Image segmentation processing is performed on the video key frames of the first video: a histogram-based image segmentation method is applied to each video key frame of the first video to obtain the contour of each object in each video key frame, and the consecutive contour coordinate values are stored in order to obtain an object contour coordinate array. Distance comparison processing is then performed on the plurality of video key frames of the first video according to the object contour coordinate arrays: by calculating the distances between corresponding coordinates in the object contour edge coordinate arrays of consecutive video key frames, it is determined whether the contours belong to the same object, and the target object is obtained. The position and time of the target object are extracted, and the times at which the target object appears in the video key frames are stored in order as T_i = [t_0, t_1, ..., t_{n-1}, t_n], where n denotes the number of times the target object appears in the first video, t_n denotes the time at which the target object appears in the first video, and T_i denotes the key frame time array. With the lower-left corner of the picture taken as the origin 0, the center positions of the target object in the different key frame pictures are extracted and stored as P_i = [p_0, p_1, ..., p_{n-1}, p_n], where p_n denotes the position of the target object in the first video and P_i denotes the key frame position array. The velocities of the first and last points are set to v_0 = v_n = 0; the velocity of each remaining point is calculated from the distance and the time difference between its current position and its previous position. The key frame position array and the key frame time array are then calculated according to the smooth cubic polynomial interpolation motion curve expression to obtain the motion characteristic curve. Specifically, for two consecutive motion coordinates p_{n-1} and p_n, the smooth cubic polynomial interpolation motion curve expression is:
Q(t) = a_0 + a_1(t - t_0) + a_2(t - t_0)^2 + a_3(t - t_0)^3,  t_0 ≤ t ≤ t_n
where a_0, a_1, a_2, a_3 are the parameters to be determined. Let h = p_n - p_{n-1} and W = t_n - t_{n-1}; the curve between two adjacent points is then obtained by substituting the corresponding parameters into the above formula. Taking the consecutive motion coordinate points p_0 and p_1 as an example, the motion points between p_0 and p_1 are given by the corresponding instance of this expression with its own parameters. The motion points between the object positions of every other pair of consecutive key frame pictures can likewise be calculated by substituting their respective parameters into the smooth cubic polynomial interpolation motion curve expression, so as to obtain the motion characteristic curve.
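The parameter expressions themselves are not reproduced in this text. Under the assumption that each segment is the standard clamped cubic interpolant satisfying Q(t_{n-1}) = p_{n-1}, Q(t_n) = p_n, Q'(t_{n-1}) = v_{n-1} and Q'(t_n) = v_n, with the local time measured from the segment start t_{n-1}, the coefficients would be:
a_0 = p_{n-1}
a_1 = v_{n-1}
a_2 = (3h - (2 v_{n-1} + v_n) W) / W^2
a_3 = ((v_{n-1} + v_n) W - 2h) / W^3
where h = p_n - p_{n-1} and W = t_n - t_{n-1} as defined above; this reconstruction is an assumption offered for readability, not a quotation of the patent's parameters.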
Further as an optional implementation manner, referring to fig. 6, in step S304, the performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result includes:
S601, performing object contour comparison processing on the first video and the second video to obtain the same object;
S602, carrying out similar distance calculation on the same object according to the motion characteristic curve to obtain a similar distance;
And S603, performing video similarity calculation according to the same object and the similar distance to obtain a video similarity calculation result.
In the embodiment of the invention, object contour comparison processing is performed on the first video and the second video to obtain the same object. It should be noted that performing distance comparison processing on the video key frames according to the object contour coordinate array to obtain the target object refers to finding the same object across the video key frames within the first video and within the second video respectively, while comparing the first video with the second video finds the object that is the same in both the first video and the second video. Similar distance calculation is performed on the same object according to the motion characteristic curves: the similar distance is obtained by calculating the distance between the motion characteristic curve of the same object in the first video and the motion characteristic curve of the same object in the second video. Video similarity calculation is then performed according to the same objects and the similar distances to obtain a video similarity calculation result, which involves counting quantities such as the number of all objects in the first video, the number of objects with similar contours, and the similar distances between the motion characteristic curves of the similar objects. Following the principle that the more similar objects there are and the smaller the similar distances of their motion characteristic curves, the higher the similarity of the video content, let the number of objects extracted from the first video be a, the number of objects whose contours are similar to those in the second video be c (c < a), and the motion characteristic curve similar distances of the similar-contour objects be [s_1, s_2, ..., s_c]; the final video similarity calculation result S is then calculated from these quantities.
An initial repetition threshold value of 0.7 is taken, and when the video similarity calculation result S is greater than 0.7 the videos are determined to be repeated; the repetition threshold value can be optimized according to test results.
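The exact expression for S is not reproduced here, so the sketch below is a hypothetical aggregation that merely respects the stated behaviour (S grows with the proportion of similar-contour objects and shrinks as the curve similarity distances grow); the function name, the 1/(1 + mean distance) form and the alpha parameter are illustrative assumptions, not the patent's formula.

```python
from typing import List

def video_similarity(a: int, c: int, s: List[float], alpha: float = 1.0) -> float:
    """Hypothetical aggregation of the counted quantities into a score S in [0, 1].

    a     -- number of objects extracted from the first video
    c     -- number of objects with contours similar to the second video (c <= a)
    s     -- similarity distances [s_1, ..., s_c] of the matched motion curves
    alpha -- assumed scale factor for the distance penalty
    """
    if a == 0 or c == 0:
        return 0.0
    contour_ratio = c / a                # grows with the number of matched objects
    mean_dist = sum(s) / len(s)          # shrinks as the curves get closer
    return contour_ratio / (1.0 + alpha * mean_dist)

# Decision rule stated in the text, with the initial threshold of 0.7:
# is_duplicate = video_similarity(a, c, s) > 0.7
```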
Further optionally, in step S601, the performing object contour comparison processing on the first video and the second video to obtain the same object includes:
Object contour information extraction processing is carried out on the first video and the second video respectively, so that first video object contour information and second video object contour information are obtained;
and carrying out normalization processing on the first video object contour information and the second video object contour information, and then carrying out comparison processing on the first video object contour information and the second video object contour information to obtain the same object.
In the embodiment of the invention, object contour information extraction processing is performed on the first video and the second video respectively to obtain first video object contour information and second video object contour information. The first video object contour information and the second video object contour information are normalized: each object contour is rotated so that its longest side becomes the bottom side and is then uniformly scaled to 32 x 32, after which the object contours in the motion segments of the two videos are compared, and whether they belong to the same object is determined according to a set threshold value. Specifically, the distances between corresponding coordinates in the contour edge coordinate arrays of the objects in the first video and the second video are compared with the set threshold value; when the calculated coordinate distance value is less than or equal to the preset threshold value, the contours are determined to belong to the same object, so that the same object in the first video and the second video is obtained. The preset distance threshold value can be set according to actual conditions.
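A rough sketch of one way to realize this normalization and comparison, assuming each contour is an N x 2 coordinate array; the PCA-based rotation stands in for rotating the longest side to the bottom, and the point resampling and the threshold value of 2.0 pixels are assumptions for illustration.

```python
import numpy as np

def normalize_contour(contour: np.ndarray, size: int = 32) -> np.ndarray:
    """Rotate and rescale a contour (N x 2 array) into a size x size box.

    Stand-in for the normalization described above: the contour is rotated so
    that its dominant direction lies along the x-axis (assumed to approximate
    "longest side at the bottom"), then scaled uniformly to fit size x size.
    """
    pts = contour.astype(float) - contour.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)   # principal directions
    pts = pts @ vt.T                                      # rotate dominant axis onto x
    span = pts.max(axis=0) - pts.min(axis=0)
    scale = (size - 1) / max(span.max(), 1e-9)            # uniform scaling
    return (pts - pts.min(axis=0)) * scale

def same_object(c1: np.ndarray, c2: np.ndarray, threshold: float = 2.0) -> bool:
    """Compare two normalized contours point-by-point against a preset threshold."""
    n = min(len(c1), len(c2))
    # resample both contours to n points by simple index selection
    a = c1[np.linspace(0, len(c1) - 1, n).astype(int)]
    b = c2[np.linspace(0, len(c2) - 1, n).astype(int)]
    mean_dist = float(np.mean(np.linalg.norm(a - b, axis=1)))
    return mean_dist <= threshold
```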
Further optionally, in step S602, the calculating the similar distance to the same object according to the motion characteristic curve to obtain the similar distance includes:
acquiring a motion characteristic curve of the same object in the first video, and determining the motion characteristic curve as a first curve;
acquiring a motion characteristic curve of the same object in the second video, and determining the motion characteristic curve as a second curve;
Filling the first curve and the second curve respectively to obtain a first curve position array and a second curve position array;
and carrying out recursive calculation on the first curve position array and the second curve position array according to a similarity distance calculation formula to obtain a similarity distance.
In the embodiment of the invention, the motion characteristic curve of the same object in the first video is acquired and determined as the first curve, and the motion characteristic curve of the same object in the second video is acquired and determined as the second curve. The first curve and the second curve are filled respectively to obtain a first curve position array and a second curve position array. Specifically, for every two consecutive motion coordinates p_{n-1} and p_n in P_i = [p_0, p_1, ..., p_{n-1}, p_n], a number of points on the motion curve between the original array point p_{n-1} and the point p_n are calculated as part of the motion characteristic curve and inserted between the corresponding points of the original position array, generating the point position array P = [p_0, p_1, ..., p_{n-1}, p_n] of the complete motion characteristic curve; the same operation is performed on the second video to obtain the corresponding point array D = [d_0, d_1, ..., d_{m-1}, d_m]. Recursive calculation is then performed on the first curve position array and the second curve position array according to the similarity distance calculation formula to obtain the similarity distance.
Further, as a preferred embodiment, the performing, according to a similarity distance calculation formula, a recursive calculation on the first curve position array and the second curve position array to obtain a similarity distance includes:
acquiring a first curve point and a second curve point from the first curve position array, wherein the second curve point is the previous curve point of the first curve point;
Acquiring a third curve point and a fourth curve point from the second curve position array, wherein the fourth curve point is a previous curve point of the third curve point;
performing recursive traversal on the first curve position array and the second curve position array through the first curve point and the third curve point, and performing similarity distance calculation on the first curve point and the third curve point to obtain a similarity distance;
and calculating the similarity distance between the first curve point and the third curve point, wherein the similarity distance calculation comprises the following steps:
Similarity distance calculation is carried out on the second curve point and the fourth curve point, and a first distance is obtained;
similarity distance calculation is carried out on the first curve point and the fourth curve point, and a second distance is obtained;
Performing similarity distance calculation on the second curve point and the third curve point to obtain a third distance;
Comparing the first distance, the second distance and the third distance, and selecting the minimum distance value as a first calculation result;
Carrying out coordinate linear distance calculation on the first curve point and the third curve point to obtain a fourth distance;
And comparing the first calculation result with the fourth distance, and selecting the maximum distance value as a second calculation result.
In the embodiment of the present invention, referring to fig. 7, a first curve point and a second curve point are obtained from the first curve position array, wherein the second curve point is the curve point preceding the first curve point; a third curve point and a fourth curve point are obtained from the second curve position array, wherein the fourth curve point is the curve point preceding the third curve point. Recursive traversal is performed on the first curve position array and the second curve position array through the first curve point and the third curve point, with the first curve point selected as an end point of the first curve and the third curve point selected as an end point of the second curve, and similarity distance calculation is performed on the first curve point and the third curve point. The similarity distance calculation formula is:
T(n, m) = max(min(T(n-1, m-1), T(n, m-1), T(n-1, m)), p_n d_m)
with initial conditions T(0, 0) = p_0 d_0, T(1, 0) = p_1 d_0, T(0, 1) = p_0 d_1,
where p_n d_m denotes the straight-line coordinate distance between point p_n in the first curve and point d_m in the second curve.
The similarity distance between the second curve point and the fourth curve point is calculated to obtain a first distance; the similarity distance between the first curve point and the fourth curve point is calculated to obtain a second distance; the similarity distance between the second curve point and the third curve point is calculated to obtain a third distance; the first distance, the second distance and the third distance are compared and the minimum distance value is selected as the first calculation result; the straight-line coordinate distance between the first curve point and the third curve point is calculated to obtain a fourth distance; finally, the first calculation result is compared with the fourth distance and the maximum distance value is selected as the second calculation result. Taking the similarity distance between point p_1 of the first curve P and curve D as an example, the formula is T(1, 1) = max(min(p_0 d_0, p_0 d_1, d_0 p_1), p_1 d_1): the minimum of p_0 d_0, p_0 d_1 and d_0 p_1 is determined first, and the maximum of that result and p_1 d_1 is then taken; as shown in fig. 7, the distance p_1 d_1 should be taken as the matching distance of point p_1. Finally, each point of the point position array of curve P is matched against the point position array of D by recursive calculation, and the resulting value of T(n, m) is the maximum difference value of the two final motion characteristic curves, that is, the similarity distance of the motion characteristic curves.
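The recurrence above can equivalently be evaluated bottom-up. The following is a minimal dynamic-programming sketch assuming p_n d_m is the Euclidean distance between curve points; the boundary handling for rows and columns beyond the stated initial conditions follows the usual discrete-Fréchet-style convention and is an assumption, as are the function and variable names.

```python
import numpy as np

def curve_similarity_distance(P: np.ndarray, D: np.ndarray) -> float:
    """Evaluate T(i, j) = max(min(T(i-1, j-1), T(i, j-1), T(i-1, j)), |p_i - d_j|)
    over the two filled point arrays P and D (each an N x 2 array).

    The final entry T[-1, -1] is the maximum mismatch of the best point
    alignment, i.e. the similarity distance of the two motion characteristic curves.
    """
    n, m = len(P), len(D)
    # pairwise straight-line distances |p_i - d_j|
    dist = np.linalg.norm(P[:, None, :] - D[None, :, :], axis=2)
    T = np.zeros((n, m))
    T[0, 0] = dist[0, 0]
    for i in range(1, n):
        T[i, 0] = max(T[i - 1, 0], dist[i, 0])
    for j in range(1, m):
        T[0, j] = max(T[0, j - 1], dist[0, j])
    for i in range(1, n):
        for j in range(1, m):
            T[i, j] = max(min(T[i - 1, j - 1], T[i, j - 1], T[i - 1, j]), dist[i, j])
    return float(T[-1, -1])
```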
Referring to fig. 8, one implementation of the embodiment of the present invention proceeds as follows: video key frame pictures are extracted from the uploaded video to be detected and the reference video; image segmentation is performed on each key frame picture to extract object contours; the same object is estimated across consecutive key frame pictures of the same video and the object data is de-duplicated, while the motion point array of the same object across consecutive frame pictures is acquired and the velocity corresponding to each point is calculated from the times of all points; rotation, scaling and other normalization processing is performed on all objects in the two videos; it is judged whether objects with similar contours exist in the two videos; cubic polynomial interpolation motion characteristic curves are generated for the similar-contour objects in the two videos respectively; the similarity distances of the cubic polynomial interpolation motion characteristic curves of the similar-contour objects in the two videos are calculated; the final video similarity is calculated from the number of similar-contour objects and the similarity distances of their motion characteristic curves; and the similarity is output and whether the video is repeated is judged.
Referring to fig. 9, an embodiment of the present invention further provides a video content detection apparatus, including:
the first module 901 is configured to obtain a video to be detected, where the video to be detected includes a first video and a second video;
A second module 902, configured to perform video key frame extraction processing on the video to be detected to obtain a video key frame;
A third module 903, configured to perform motion feature extraction processing on the video key frame to obtain a motion feature curve;
A fourth module 904, configured to perform similarity calculation processing on the video to be detected according to the motion characteristic curve, so as to obtain a video similarity calculation result;
A fifth module 905 is configured to determine that the first video and the second video are duplicate content videos when the video similarity calculation result meets a preset condition.
Referring to fig. 10, an embodiment of the present invention further provides an electronic device including a processor 1002 and a memory 1001; the memory is used for storing programs; the processor executes the program to implement the method as described above.
Corresponding to the method of fig. 1, an embodiment of the present invention also provides a computer-readable storage medium storing a program to be executed by a processor to implement the method as described above.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the method shown in fig. 1.
In summary, the embodiment of the invention has the following advantages: according to the embodiment of the invention, the generation of the motion characteristic curve of the same object containing continuous speed in the video key frame is realized through the generation of the cubic polynomial interpolation motion characteristic curve of the video object, and the similar distance calculation is carried out according to the generated object motion vector characteristic curve, so that the similar distance calculation of two motion characteristic curves which are possibly misaligned is realized. According to the embodiment of the invention, the video similarity is comprehensively calculated by carrying out video similarity comprehensive calculation on the similar distances of the motion characteristic curves of a plurality of similar objects in two videos, so that the accuracy and the efficiency of video content detection are improved.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the invention is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the described functions and/or features may be integrated in a single physical device and/or software module or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present application has been described in detail, the present application is not limited to the embodiments described above, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present application, and these equivalent modifications or substitutions are included in the scope of the present application as defined in the appended claims.
Claims (6)
1. A method for detecting video content, the method comprising:
acquiring a video to be detected, wherein the video to be detected comprises a first video and a second video;
performing video key frame extraction processing on the video to be detected to obtain video key frames;
performing motion characteristic extraction processing on the video key frames to obtain a motion characteristic curve;
performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result;
when the video similarity calculation result meets a preset condition, determining that the first video and the second video are duplicate content videos;
wherein the performing motion characteristic extraction processing on the video key frames to obtain a motion characteristic curve comprises:
performing image segmentation processing on the video key frame to obtain an object contour coordinate array;
performing distance comparison processing on the video key frames according to the object contour coordinate array to obtain a target object;
extracting the position and time of the target object to obtain a key frame position array and a key frame time array;
calculating the key frame position array and the key frame time array according to a smooth cubic polynomial interpolation motion curve expression to obtain a motion characteristic curve;
wherein the performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result comprises:
performing object contour comparison processing on the first video and the second video to obtain the same object;
performing similarity distance calculation on the same object according to the motion characteristic curve to obtain a similarity distance;
performing video similarity calculation according to the same object and the similarity distance to obtain a video similarity calculation result;
wherein the performing similarity distance calculation on the same object according to the motion characteristic curve to obtain a similarity distance comprises:
acquiring a motion characteristic curve of the same object in the first video, and determining the motion characteristic curve as a first curve;
acquiring a motion characteristic curve of the same object in the second video, and determining the motion characteristic curve as a second curve;
performing filling processing on the first curve and the second curve respectively to obtain a first curve position array and a second curve position array;
performing recursive calculation on the first curve position array and the second curve position array according to a similarity distance calculation formula to obtain a similarity distance;
wherein the performing recursive calculation on the first curve position array and the second curve position array according to a similarity distance calculation formula to obtain a similarity distance comprises:
acquiring a first curve point and a second curve point from the first curve position array, wherein the second curve point is the curve point immediately preceding the first curve point;
acquiring a third curve point and a fourth curve point from the second curve position array, wherein the fourth curve point is the curve point immediately preceding the third curve point;
performing recursive traversal on the first curve position array and the second curve position array through the first curve point and the third curve point, and performing similarity distance calculation on the first curve point and the third curve point to obtain the similarity distance;
wherein the performing similarity distance calculation on the first curve point and the third curve point comprises:
performing similarity distance calculation on the second curve point and the fourth curve point to obtain a first distance;
performing similarity distance calculation on the first curve point and the fourth curve point to obtain a second distance;
performing similarity distance calculation on the second curve point and the third curve point to obtain a third distance;
comparing the first distance, the second distance, and the third distance, and selecting the minimum distance value as a first calculation result;
performing coordinate straight-line distance calculation on the first curve point and the third curve point to obtain a fourth distance;
and comparing the first calculation result with the fourth distance, and selecting the maximum distance value as a second calculation result, the second calculation result being the similarity distance between the first curve point and the third curve point.
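The recursion recited in the preceding steps (take the minimum of the three previously computed neighbouring similarity distances, then the maximum of that value and the straight-line distance between the current pair of points) has the structure of a discrete Fréchet-style distance between two sampled curves. The sketch below is a minimal illustration of that idea, assuming the motion characteristic curves have already been sampled into 2-D position arrays (for example via the filling and cubic-interpolation steps recited earlier) and that the "coordinate straight-line distance" is the Euclidean distance; the function names and the dynamic-programming memo table are illustrative choices, not the patented implementation.

```python
import numpy as np

def euclidean(p, q):
    # Straight-line (coordinate) distance between two 2-D points ("fourth distance").
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))

def curve_similarity_distance(curve_a, curve_b):
    """Discrete Fréchet-style similarity distance between two position arrays.

    curve_a, curve_b: sequences of (x, y) points sampled from the motion
    characteristic curves of the same object in two videos.
    """
    n, m = len(curve_a), len(curve_b)
    # memo[i][j] holds the similarity distance between the prefixes
    # curve_a[:i+1] and curve_b[:j+1].
    memo = np.full((n, m), -1.0)

    for i in range(n):
        for j in range(m):
            d = euclidean(curve_a[i], curve_b[j])
            if i == 0 and j == 0:
                memo[i][j] = d
            elif i == 0:
                memo[i][j] = max(memo[i][j - 1], d)
            elif j == 0:
                memo[i][j] = max(memo[i - 1][j], d)
            else:
                best_prev = min(memo[i - 1][j - 1],   # "first distance"
                                memo[i][j - 1],       # "second distance"
                                memo[i - 1][j])       # "third distance"
                memo[i][j] = max(best_prev, d)        # "second calculation result"
    return memo[n - 1][m - 1]

if __name__ == "__main__":
    a = [(0, 0), (1, 1), (2, 2)]
    b = [(0, 0), (1, 2), (2, 2)]
    print(curve_similarity_distance(a, b))  # 1.0 for these sample curves
```

The min-then-max structure keeps the point correspondence order-preserving along both curves while penalising the worst matched pair, which is why a small result indicates that the two motion trajectories follow each other closely.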
2. The method according to claim 1, wherein the performing video key frame extraction processing on the video to be detected to obtain video key frames comprises:
performing inter-frame difference processing on temporally adjacent image frames in the video to be detected to obtain an average inter-frame difference image;
and selecting, from the average inter-frame difference image, an image frame at which the average inter-frame difference reaches a local maximum, and determining the selected image frame as a video key frame.
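One way to read this claim, sketched below under the assumption that the frames are already decoded as equally sized grayscale NumPy arrays, is to compute the mean absolute pixel difference between each pair of temporally adjacent frames and keep the frames at local maxima of that signal. The helper names and the simple three-point local-maximum test are illustrative assumptions rather than the patented key-frame extractor.

```python
import numpy as np

def average_frame_differences(frames):
    """Mean absolute pixel difference between each pair of adjacent frames.

    frames: list of equally sized 2-D arrays (grayscale frames).
    Returns a 1-D array diffs where diffs[i] compares frames[i] and frames[i + 1].
    """
    return np.array([
        np.mean(np.abs(frames[i + 1].astype(np.float32) - frames[i].astype(np.float32)))
        for i in range(len(frames) - 1)
    ])

def select_key_frames(frames):
    """Return indices of frames whose average inter-frame difference is a local maximum."""
    diffs = average_frame_differences(frames)
    return [
        i + 1                                   # index of the later frame in the pair
        for i in range(1, len(diffs) - 1)
        if diffs[i] > diffs[i - 1] and diffs[i] > diffs[i + 1]
    ]
```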
3. The method according to claim 1, wherein the performing object contour comparison processing on the first video and the second video to obtain the same object comprises:
performing object contour information extraction processing on the first video and the second video respectively to obtain first video object contour information and second video object contour information;
and performing normalization processing on the first video object contour information and the second video object contour information, and then performing comparison processing on the normalized contour information to obtain the same object.
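A hedged illustration of the normalization-then-comparison step: each contour is made translation- and scale-invariant before being compared with a mean point-to-point distance, so the "same object" can be matched across the two videos even if it appears at a different location or size. The resampling to a fixed number of points, the 0.05 threshold, the assumption of comparable starting points on both contours, and the function names are all assumptions introduced for this sketch, not values taken from the patent.

```python
import numpy as np

def normalize_contour(contour, num_points=64):
    """Center a contour at its centroid, scale it to unit size, and resample it."""
    pts = np.asarray(contour, dtype=np.float32)
    pts = pts - pts.mean(axis=0)                 # translation invariance
    scale = np.max(np.linalg.norm(pts, axis=1))
    if scale > 0:
        pts = pts / scale                        # scale invariance
    idx = np.linspace(0, len(pts) - 1, num_points).astype(int)
    return pts[idx]                              # fixed-length representation

def is_same_object(contour_a, contour_b, threshold=0.05):
    """Compare two normalized contours by mean point-to-point distance."""
    a = normalize_contour(contour_a)
    b = normalize_contour(contour_b)
    return float(np.mean(np.linalg.norm(a - b, axis=1))) < threshold
```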
4. A video content detection apparatus, the apparatus comprising:
The first module is used for acquiring a video to be detected, wherein the video to be detected comprises a first video and a second video;
The second module is used for performing video key frame extraction processing on the video to be detected to obtain video key frames;
The third module is used for performing motion characteristic extraction processing on the video key frames to obtain a motion characteristic curve;
The fourth module is used for performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result;
The fifth module is used for determining that the first video and the second video are duplicate content videos when the video similarity calculation result meets a preset condition;
wherein the motion characteristic extraction processing performed by the third module on the video key frames to obtain a motion characteristic curve comprises:
performing image segmentation processing on the video key frame to obtain an object contour coordinate array;
performing distance comparison processing on the video key frames according to the object contour coordinate array to obtain a target object;
extracting the position and time of the target object to obtain a key frame position array and a key frame time array;
calculating the key frame position array and the key frame time array according to a smooth cubic polynomial interpolation motion curve expression to obtain a motion characteristic curve;
wherein the performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result comprises:
performing object contour comparison processing on the first video and the second video to obtain the same object;
performing similarity distance calculation on the same object according to the motion characteristic curve to obtain a similarity distance;
performing video similarity calculation according to the same object and the similarity distance to obtain a video similarity calculation result;
wherein the performing similarity distance calculation on the same object according to the motion characteristic curve to obtain a similarity distance comprises:
acquiring a motion characteristic curve of the same object in the first video, and determining the motion characteristic curve as a first curve;
acquiring a motion characteristic curve of the same object in the second video, and determining the motion characteristic curve as a second curve;
performing filling processing on the first curve and the second curve respectively to obtain a first curve position array and a second curve position array;
performing recursive calculation on the first curve position array and the second curve position array according to a similarity distance calculation formula to obtain a similarity distance;
wherein the performing recursive calculation on the first curve position array and the second curve position array according to a similarity distance calculation formula to obtain a similarity distance comprises:
acquiring a first curve point and a second curve point from the first curve position array, wherein the second curve point is the curve point immediately preceding the first curve point;
acquiring a third curve point and a fourth curve point from the second curve position array, wherein the fourth curve point is the curve point immediately preceding the third curve point;
performing recursive traversal on the first curve position array and the second curve position array through the first curve point and the third curve point, and performing similarity distance calculation on the first curve point and the third curve point to obtain the similarity distance;
wherein the performing similarity distance calculation on the first curve point and the third curve point comprises:
performing similarity distance calculation on the second curve point and the fourth curve point to obtain a first distance;
performing similarity distance calculation on the first curve point and the fourth curve point to obtain a second distance;
performing similarity distance calculation on the second curve point and the third curve point to obtain a third distance;
comparing the first distance, the second distance, and the third distance, and selecting the minimum distance value as a first calculation result;
performing coordinate straight-line distance calculation on the first curve point and the third curve point to obtain a fourth distance;
and comparing the first calculation result with the fourth distance, and selecting the maximum distance value as a second calculation result, the second calculation result being the similarity distance between the first curve point and the third curve point.
5. An electronic device comprising a memory and a processor;
the memory is used for storing a program;
the processor is used for executing the program to implement the method according to any one of claims 1 to 3.
6. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202310857526.0A (CN116935272B) | 2023-07-12 | 2023-07-12 | Video content detection method and device, electronic equipment and storage medium
Publications (2)
Publication Number | Publication Date
---|---
CN116935272A (en) | 2023-10-24
CN116935272B (en) | 2024-05-28
Family
ID=88385608
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202310857526.0A (CN116935272B, Active) | Video content detection method and device, electronic equipment and storage medium | 2023-07-12 | 2023-07-12
Country Status (1)
Country | Link
---|---
CN | CN116935272B (en)
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102779184A (en) * | 2012-06-29 | 2012-11-14 | 中国科学院自动化研究所 | Automatic positioning method of approximately repeated video clips |
CN113313065A (en) * | 2021-06-23 | 2021-08-27 | 北京奇艺世纪科技有限公司 | Video processing method and device, electronic equipment and readable storage medium |
CN113496187A (en) * | 2020-09-22 | 2021-10-12 | 华扬联众数字技术股份有限公司 | Video matching method and device based on video fingerprints |
CN115346145A (en) * | 2021-05-13 | 2022-11-15 | 北京字跳网络技术有限公司 | Method, device, storage medium and computer program product for identifying repeated video |
CN115471772A (en) * | 2022-09-16 | 2022-12-13 | 中国农业银行股份有限公司 | Method, device, equipment and medium for extracting key frame |
CN116188815A (en) * | 2022-12-12 | 2023-05-30 | 北京数美时代科技有限公司 | Video similarity detection method, system, storage medium and electronic equipment |
CN116343080A (en) * | 2023-02-20 | 2023-06-27 | 华南理工大学 | Dynamic sparse key frame video target detection method, device and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007053112A1 (en) * | 2005-11-07 | 2007-05-10 | Agency For Science, Technology And Research | Repeat clip identification in video data |
Non-Patent Citations (2)
Title
---
Hyewon Choi et al., "Effective fake news video detection using domain knowledge and multimodal data fusion on YouTube," Pattern Recognition Letters, pp. 44-52. *
袁祉赟 (Yuan Zhiyun), "基于内容的视频结构化方法研究" (Research on content-based video structuring methods), 中国优秀硕士学位论文全文数据库(电子期刊) (China Master's Theses Full-text Database, electronic journal), Vol. 2017, No. 02, full text. *
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant