CN110598795A - Image difference detection method and device, storage medium and terminal - Google Patents
- Publication number
- CN110598795A (application CN201910876627.6A)
- Authority
- CN
- China
- Prior art keywords
- test
- image
- map
- contour
- chart
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
Abstract
An image difference detection method and device, a storage medium, and a terminal are provided. The image difference detection method includes: acquiring a reference image and a test image; converting the reference image so that the shooting angle of view of the converted reference image is consistent with that of the test image; translating the converted reference image, and performing an image frame-difference operation on the translated reference image and the converted reference image to obtain a reference contour map; translating the test image, and performing an image frame-difference operation on the translated test image and the test image to obtain a test contour map; and, for each nonzero pixel in the test contour map, determining the position coordinates of that pixel and matching the pixels near those coordinates in the reference contour map against it to obtain a matching result. The technical scheme of the invention can accurately detect content differences between pictures taken from different shooting angles.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image difference detection method and apparatus, a storage medium, and a terminal.
Background
In the field of image recognition there are many feature detection and feature matching algorithms, such as the Scale-Invariant Feature Transform (SIFT) and its accelerated variant, Speeded-Up Robust Features (SURF), which can match and recognize whether two pictures show the same scene but cannot be used to compute the difference between the pictures.
In the field of image quality detection, there is no accurate and effective algorithm for comparing the differences between pictures taken from different shooting angles of view.
Disclosure of Invention
The invention addresses the technical problem of accurately detecting content differences between pictures taken from different angles.
In order to solve the above technical problem, an embodiment of the present invention provides an image difference detection method, including: acquiring a reference image and a test image, where the reference image and the test image are obtained by shooting the same object and the reference image is a picture consistent with the shot object; converting the reference image so that the shooting angle of view of the converted reference image is consistent with that of the test image; translating the converted reference image, and performing an image frame-difference operation on the translated reference image and the converted reference image to obtain a reference contour map, where the reference contour map represents the contour of the reference image; translating the test image, and performing an image frame-difference operation on the translated test image and the test image to obtain a test contour map, where the test contour map represents the contour of the test image; and, for each nonzero pixel in the test contour map, determining the position coordinates of that pixel and matching the pixels near those coordinates in the reference contour map against it to obtain a matching result.
Optionally, converting the reference image includes: performing feature-point detection and matching on the reference image and the test image to obtain matched feature-point pairs; using the matched feature-point pairs to compute a transformation matrix that maps feature points in the reference image to their matched feature points in the test image; and multiplying the transformation matrix with the reference image to obtain the converted reference image.
Optionally, the matching result includes the pixels of the reference contour map that fail to match the nonzero pixels of the test contour map, and the method further includes: counting the unmatched pixels; and outputting the count together with the matching result.
Optionally, translating the converted reference image includes translating it in a first direction and a second direction respectively, where the first direction is up or down and the second direction is left or right; translating the test image includes translating it in the first direction and the second direction respectively.
Optionally, matching the pixels near the position coordinates in the reference contour map with the nonzero pixel includes: determining a matching window of preset size in the reference contour map, centered on the position coordinates; and searching the matching window for pixels matching the nonzero pixel.
Optionally, searching the matching window for a pixel matching the nonzero pixel includes: determining that a pixel matches the nonzero pixel if, on every channel, the difference between its pixel value and that of the nonzero pixel is smaller than a preset threshold.
Optionally, the translation distance for the converted reference image and for the test image is two pixels.
In order to solve the above technical problem, an embodiment of the present invention further discloses an image difference detection apparatus, including: an image acquisition module, configured to acquire a reference image and a test image, where the reference image and the test image are obtained by shooting the same object and the reference image is a picture consistent with the shot object; a conversion module, configured to convert the reference image so that the shooting angle of view of the converted reference image is consistent with that of the test image; a reference contour map calculation module, configured to translate the converted reference image and perform an image frame-difference operation on the translated reference image and the converted reference image to obtain a reference contour map representing the contour of the reference image; a test contour map calculation module, configured to translate the test image and perform an image frame-difference operation on the translated test image and the test image to obtain a test contour map representing the contour of the test image; and a matching module, configured to determine, for each nonzero pixel in the test contour map, the position coordinates of that pixel and to match the pixels near those coordinates in the reference contour map against it to obtain a matching result.
An embodiment of the invention also discloses a storage medium storing computer instructions which, when executed, perform the steps of the image difference detection method.
An embodiment of the invention also discloses a terminal including a memory and a processor, where the memory stores computer instructions executable on the processor, and the processor performs the steps of the image difference detection method when executing the computer instructions.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following beneficial effects:
in the technical scheme of the invention, the reference image is converted so that its shooting angle of view is consistent with that of the test image, which allows the content difference between the two to be detected in the subsequent steps; a reference contour map representing the contour of the reference image is obtained by an image frame-difference operation between the translated and the converted reference image, a test contour map representing the contour of the test image is obtained by an image frame-difference operation between the translated test image and the test image, and the content difference between the reference image and the test image can then be determined effectively and accurately by matching the nonzero pixels of the two contour maps. In addition, comparing the contour of the reference image with the contour of the test image reduces the amount of computation, which makes the scheme practical for automatic detection of shooting quality.
Further, the converted reference image is translated in a first direction and a second direction respectively, where the first direction is up or down and the second direction is left or right, and the test image is likewise translated in the first direction and the second direction. Because the first direction differs from the second direction, translating the reference image or the test image in two different directions before computing its contour ensures the accuracy of the contour computation, and thus the accuracy of the difference determined between the reference image and the test image.
Further, a matching window of preset size is determined in the reference contour map, centered on the position coordinates, and pixels matching the nonzero pixel are searched for inside this window. Searching within a window of preset size rather than at a single coordinate can eliminate the influence of external factors on the matching result and ensure matching accuracy.
Drawings
FIG. 1 is a flow chart of an image difference detection method according to an embodiment of the present invention;
FIG. 2 is a flowchart of one embodiment of step S102 shown in FIG. 1;
FIG. 3 is a flowchart of one embodiment of step S105 shown in FIG. 1;
FIG. 4 is a schematic structural diagram of an image difference detection apparatus according to an embodiment of the present invention.
Detailed Description
As described in the background, in the field of image quality detection there is no accurate and effective algorithm for comparing the differences between pictures taken from different shooting angles.
In the technical scheme of the invention, the reference image is converted so that its shooting angle of view is consistent with that of the test image, which allows the content difference between the two to be detected in the subsequent steps; a reference contour map representing the contour of the reference image is obtained by an image frame-difference operation between the translated and the converted reference image, a test contour map representing the contour of the test image is obtained by an image frame-difference operation between the translated test image and the test image, and the content difference between the reference image and the test image can then be determined effectively and accurately by matching the nonzero pixels of the two contour maps. In addition, comparing the contour of the reference image with the contour of the test image reduces the amount of computation, which makes the scheme practical for automatic detection of shooting quality.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Fig. 1 is a flowchart of an image difference detection method according to an embodiment of the present invention.
The image difference detection method may be executed on a terminal device, such as a computer, which performs the steps of the method shown in fig. 1 to detect the photographing effect of other photographing devices.
The image difference detection method may include the steps of:
step S101: acquiring a reference image and a test image, where the reference image and the test image are obtained by shooting the same object and the reference image is a picture consistent with the shot object;
step S102: converting the reference image so that the shooting angle of view of the converted reference image is consistent with that of the test image;
step S103: translating the converted reference image, and performing an image frame-difference operation on the translated reference image and the converted reference image to obtain a reference contour map representing the contour of the reference image;
step S104: translating the test image, and performing an image frame-difference operation on the translated test image and the test image to obtain a test contour map representing the contour of the test image;
step S105: for each nonzero pixel in the test contour map, determining the position coordinates of that pixel and matching the pixels near those coordinates in the reference contour map against it to obtain a matching result.
It should be noted that the sequence numbers of the steps in this embodiment do not represent a limitation on the execution sequence of the steps.
The reference image and the test image in the embodiment of the invention can be captured in advance. Both are obtained by shooting the same object; for example, both may be obtained by shooting a specified background area. Further, the reference image is a picture consistent with the shot object: it is confirmed manually that the object in the reference image does not deviate from the shot object in, for example, color or shape.
In a specific implementation, the test image can be captured by the photographing device under test, and the photographing performance of that device can be evaluated by comparing the content difference between the reference image and the test image. More specifically, the viewing area of the test image does not exceed that of the reference image, that is, it is equal to or smaller than the viewing area of the reference image.
In the specific implementation of step S101, the reference image and the test image may be obtained directly from the photographing device under test, or retrieved from a database in which they were stored in advance. There may be one or more test images; detecting the content differences between the reference image and multiple test images evaluates the photographing performance of the device under test more accurately.
In order to match the reference image and the test image more accurately in the subsequent steps, in a specific implementation of step S102 the reference image may be converted so that the shooting angle of view of the converted reference image is consistent with that of the test image. Here, consistent shooting angles of view mean that the converted reference image and the test image have the same scale, that is, the same resolution.
In the specific implementation of step S103, the contour of the reference image may be computed by translating the converted reference image and performing an image frame-difference operation between the translated and the converted reference image. Specifically, the contour of the reference image refers to the contour of the object in it, for example pixels whose values are significantly higher or lower than those of the surrounding pixels.
After the frame-difference operation, the contour of the reference image consists of the nonzero pixels of the reference contour map.
For the specific implementation of the frame-difference algorithm, reference may be made to the prior art, which is not described in further detail here.
Unlike step S103, in the implementation of step S104 the contour of the test image can be computed by translating the test image and performing an image frame-difference operation between the translated test image and the original test image. Specifically, the contour of the test image refers to the contour of the object in it, for example pixels whose values are significantly higher or lower than those of the surrounding pixels.
After the frame-difference operation, the contour of the test image consists of the nonzero pixels of the test contour map.
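As an illustration of steps S103 and S104, the frame-difference contour extraction can be sketched in NumPy as follows. This is a minimal sketch, not the patented implementation; the 2-pixel shift follows the example given later in this description, while the threshold value of 20 and the use of wrap-around shifting (np.roll) are simplifying assumptions:

```python
import numpy as np

def frame_difference_contour(img, shift=2, threshold=20):
    """Approximate an image's contour by differencing it with copies of
    itself shifted vertically and horizontally, then thresholding.
    Nonzero pixels of the returned map lie on contours."""
    img = img.astype(np.int16)
    # Shift by `shift` pixels in the first direction (up/down).
    shifted_v = np.roll(img, shift, axis=0)
    # Shift by `shift` pixels in the second direction (left/right).
    shifted_h = np.roll(img, shift, axis=1)
    # Keep the stronger of the two responses, then zero out small differences.
    diff = np.maximum(np.abs(img - shifted_v), np.abs(img - shifted_h))
    return np.where(diff >= threshold, diff, 0).astype(np.uint8)
```

A flat region differences to zero everywhere, while an edge leaves nonzero pixels; this is exactly the property the later matching step relies on.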
Further, in the implementation of step S105, the contour in the test contour map is matched for similarity against the reference contour map. That is, for each nonzero pixel (a pixel with a nonzero value) in the test contour map, its position coordinates are determined and the pixels near those coordinates in the reference contour map are matched against it. Zero pixels (pixels whose value is zero) in the test contour map are not matched, which reduces the amount of computation.
Specifically, the matching result may indicate whether a pixel near the position coordinates in the reference contour map matches the nonzero pixel: a match means the two pixels are consistent, and no match means they differ.
Therefore, after all nonzero pixels in the test contour map have been traversed, matching results for all of them are obtained, and from these it can be determined whether the reference image and the test image differ and, if so, by how much.
In the embodiment of the invention, the reference image is converted so that its shooting angle of view is consistent with that of the test image, which allows the content difference between the two to be detected in the subsequent steps; a reference contour map representing the contour of the reference image is obtained by an image frame-difference operation between the translated and the converted reference image, a test contour map representing the contour of the test image is obtained by an image frame-difference operation between the translated test image and the test image, and the content difference between the reference image and the test image can then be determined effectively and accurately by matching the nonzero pixels of the two contour maps. In addition, comparing the contour of the reference image with the contour of the test image reduces the amount of computation, which makes the embodiment practical for automatic detection of shooting quality.
For example, suppose the reference image and the test image each include a flat gray area of 200 × 100 pixels. Computing directly on the reference image and the test image would require 20,000 pixel subtraction operations for this area alone. In both the reference contour map and the test contour map, however, the gray area consists entirely of pixels with value 0, so it need not be considered at all when determining the matching result, which greatly reduces the amount of computation.
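The savings can be illustrated directly: a flat gray region survives the frame difference as all zeros, so the matching loop never visits it. A toy NumPy sketch of this (the threshold of 20 is an assumed value):

```python
import numpy as np

# A 200 x 100-pixel flat gray region, as in the example above.
region = np.full((100, 200), 128, dtype=np.uint8)

# Direct comparison would touch every pixel of the region:
full_ops = region.size  # 20,000 subtraction operations

# After the frame difference and thresholding, a flat region is all
# zeros, so the matching step visits no pixels there at all:
contour = np.abs(region.astype(int) - np.roll(region, 2, axis=1))
contour[contour < 20] = 0
visited = np.count_nonzero(contour)  # number of pixels the matcher would visit
```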
In a non-limiting embodiment of the present invention, referring to fig. 2, step S102 shown in fig. 1 may include the following steps:
step S201: performing feature-point detection and matching on the reference image and the test image to obtain matched feature-point pairs;
step S202: using the matched feature-point pairs to compute a transformation matrix mapping feature points in the reference image to their matched feature points in the test image;
step S203: multiplying the transformation matrix with the reference image to obtain the converted reference image.
In this embodiment, in order to convert the reference image, feature points are detected and matched between the reference image and the test image. Matched feature points have similar pixel features; feature points may be, for example, intersections of edges, or pixels whose values are significantly higher or lower than those of the surrounding pixels.
A transformation matrix mapping the feature points in the reference image to their matched feature points in the test image is computed from the position coordinates of the matched feature points in the reference image and in the test image; this transformation matrix is a homography matrix.
In a specific implementation, the SURF algorithm may be used to extract feature points in the reference image and the test image. Further, to obtain a better matching effect, the SURF results can be filtered with Lowe's ratio test, and valid matched feature-point pairs can be selected using a K-Nearest Neighbor (KNN) matcher.
Specifically, the transformation matrix may be computed with the findHomography function, and the reference image may be transformed with the warpPerspective function.
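As an illustration of what computing such a transformation matrix involves, the following NumPy sketch estimates a homography from matched feature-point pairs with the direct linear transform (DLT). This is a simplified stand-in, not the patented implementation: it assumes exact, outlier-free correspondences and omits the RANSAC-style rejection that findHomography performs:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography H mapping src_pts to dst_pts
    (each an (N, 2) array, N >= 4) via the direct linear transform."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence (x, y) -> (u, v) contributes two rows.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # The homography is the null vector of A: the last right singular vector.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1
```

The resulting 3 × 3 matrix is what is multiplied with the reference image (as warpPerspective does) to obtain the converted reference image.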
In this embodiment, the converted reference image and the test image lie in the same coordinate system: their shooting angles of view coincide, and the coordinates of their fields of view are consistent.
In one non-limiting embodiment of the present invention, the method shown in FIG. 1 may further comprise: counting the unmatched pixels; and outputting the count together with the matching result.
In this embodiment, the number of unmatched pixels represents the degree of difference between the test image and the reference image. By counting the unmatched pixels and outputting the count together with the matching result, a user can see both where the test image differs from the reference image and by how much.
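Combining the window search of step S105 with the counting step above, the difference count can be sketched as follows. This is a minimal single-channel illustration; the 3 × 3 window (half = 1) and the threshold of 30 are taken from the examples elsewhere in this description, and the helper name is hypothetical:

```python
import numpy as np

def count_unmatched(test_contour, ref_contour, half=1, threshold=30):
    """For every nonzero pixel of test_contour, search a (2*half+1)^2
    window at the same coordinates in ref_contour for a pixel whose
    value differs by less than `threshold`. Return the number of
    nonzero test pixels with no such match."""
    h, w = test_contour.shape
    unmatched = 0
    for y, x in zip(*np.nonzero(test_contour)):
        # Clamp the matching window to the image bounds.
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        window = ref_contour[y0:y1, x0:x1].astype(int)
        if not (np.abs(window - int(test_contour[y, x])) < threshold).any():
            unmatched += 1
    return unmatched
```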
In one non-limiting embodiment of the present invention, step S103 shown in fig. 1 may include the following step: translating the converted reference image in a first direction and a second direction respectively, where the first direction is up or down and the second direction is left or right.
Similarly, step S104 shown in fig. 1 may include the following step: translating the test image in the first direction and the second direction respectively.
In the embodiment of the invention, the first direction differs from the second direction; translating the reference image or the test image in two different directions before computing its contour ensures the accuracy of the contour computation, and thus the accuracy of the difference determined between the reference image and the test image.
Specifically, the reference image or the test image may be translated by one or two pixels in the first direction and in the second direction respectively. A translation of one or two pixels produces no visible difference to the human eye, and translating in two directions avoids misjudging contours that run along a single direction, so the computed contour of the reference image or the test image is more accurate.
Further, the translation distance for the converted reference image and for the test image may be two pixels; translating by two pixels yields a better contour.
In a specific implementation, after the frame-difference operation between the translated reference image and the converted reference image, or between the translated test image and the original test image, subtraction and thresholding leave the non-contour regions at 0 while the pixels in the contour regions remain nonzero, so the contour regions of the reference image or the test image are well preserved.
In a non-limiting embodiment of the present invention, referring to fig. 3, step S105 shown in fig. 1 may include the following steps:
step S301: determining a matching window of preset size in the reference contour map, centered on the position coordinates in the reference contour map;
step S302: searching the matching window for pixels matching the nonzero pixel.
In the embodiment of the invention, pixels matching the nonzero pixel are searched for within a matching window of preset size, which eliminates the influence of external factors on the matching result and ensures matching accuracy.
In a specific implementation, the preset size may be 3 × 3 pixels, that is, the matching window is 3 × 3. For each nonzero pixel in the test contour map, the preset window at the corresponding position in the reference contour map is searched for a matching pixel. If one exists, the pixel in the test contour map has a match in the reference contour map; if not, the pixel in the test contour map differs from the reference image.
Further, step S302 may specifically include: determining that a pixel matches the nonzero pixel if, on every channel, the difference between its pixel value and that of the nonzero pixel is smaller than a preset threshold.
In a specific implementation, the reference image and the test image may be RGB images, and the pixel value of a pixel on each channel refers to its value on the red, green, and blue channels.
For example, for a nonzero pixel f(x, y) in the test contour map, compute (Δb, Δg, Δr) = |f(x, y) − j(x0, y0)|, where j(x0, y0) is the value of a pixel inside the matching window of the reference contour map. If some window pixel satisfies Δb < 30, Δg < 30, and Δr < 30, the color of the nonzero pixel f(x, y) is close to the color of that pixel, no contour difference is considered to exist here, and the result is marked g(x, y) = (0, 0, 0); the image g(x, y) is the computed matching result. Otherwise, the color of the nonzero pixel f(x, y) jumps relative to every pixel in the matching window, a contour difference is considered to exist here, and the result is marked g(x, y) = (Δb, Δg, Δr).
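The per-channel rule above can be sketched for a single nonzero pixel as follows. This is a minimal illustration; the description does not specify which window pixel the (Δb, Δg, Δr) result is reported against, so reporting the closest window pixel is an assumption here:

```python
import numpy as np

def match_pixel(f_xy, window_pixels, threshold=30):
    """f_xy: (b, g, r) value of a nonzero test-contour pixel.
    window_pixels: (N, 3) array of pixel values inside the matching
    window of the reference contour map. Returns g(x, y): (0, 0, 0)
    if some window pixel is close on every channel, else the
    per-channel differences against the closest window pixel."""
    f = np.asarray(f_xy, dtype=int)
    w = np.asarray(window_pixels, dtype=int)
    diffs = np.abs(w - f)                    # per-channel |f - j|
    close = (diffs < threshold).all(axis=1)  # close on all three channels?
    if close.any():
        return (0, 0, 0)                     # colors match: no contour difference
    best = diffs[diffs.sum(axis=1).argmin()] # report the smallest color jump
    return tuple(int(d) for d in best)
```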
It is to be understood that the reference image and the test image may also be in other formats, for example YUV images, in which case the pixel values on the channels refer to the Y, U, and V channels; the embodiment of the present invention is not limited in this respect.
Referring to fig. 4, an embodiment of the present invention further discloses an image difference detecting device 40, which may include:
an image acquisition module 401, configured to acquire a reference image and a test image, where the reference image and the test image are obtained by shooting the same object and the reference image is a picture consistent with the shot object;
a conversion module 402, configured to convert the reference image so that the shooting angle of view of the converted reference image is consistent with that of the test image;
a reference contour map calculation module 403, configured to translate the converted reference image and perform an image frame-difference operation on the translated reference image and the converted reference image to obtain a reference contour map representing the contour of the reference image;
a test contour map calculation module 404, configured to translate the test image and perform an image frame-difference operation on the translated test image and the test image to obtain a test contour map representing the contour of the test image;
a matching module 405, configured to determine, for each nonzero pixel in the test contour map, the position coordinates of that pixel and to match the pixels near those coordinates in the reference contour map against it to obtain a matching result.
According to the embodiment of the invention, the content difference between the reference chart and the test chart can be determined effectively and accurately by matching the non-zero pixel points of the test contour map against the reference contour map. In addition, since only the contour of the reference chart is compared with the contour of the test chart, the technical scheme of the invention reduces the amount of computation, which makes it practical for scenes in which shooting effects are detected automatically.
For more details of the operation principle and operation mode of the image difference detection device 40, reference may be made to the descriptions of fig. 1 to fig. 3, which are not repeated here.
The embodiment of the invention further discloses a storage medium on which computer instructions are stored, and the steps of the method shown in fig. 1, fig. 2 or fig. 3 may be performed when the computer instructions are executed. The storage medium may include a ROM, a RAM, a magnetic disk or an optical disk, etc. The storage medium may further include a non-volatile memory or a non-transitory memory.
The embodiment of the invention further discloses a terminal, which may include a memory and a processor, the memory storing computer instructions executable on the processor. When executing the computer instructions, the processor may perform the steps of the method shown in fig. 1, fig. 2 or fig. 3. The terminal includes, but is not limited to, terminal devices such as a mobile phone, a computer and a tablet computer.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (10)
1. An image difference detection method, comprising:
acquiring a reference chart and a test chart, wherein the reference chart and the test chart are obtained by shooting the same object, and the reference chart is a picture consistent with the shot object;
converting the reference chart so that a shooting angle of the converted reference chart is consistent with a shooting angle of the test chart;
translating the converted reference chart, and performing an image frame difference operation on the translated reference chart and the converted reference chart to obtain a reference contour map, wherein the reference contour map represents a contour of the reference chart;
translating the test chart, and performing an image frame difference operation on the translated test chart and the test chart to obtain a test contour map, wherein the test contour map represents a contour of the test chart;
and for each non-zero pixel point in the test contour map, determining a position coordinate of the non-zero pixel point, and matching pixel points near the position coordinate in the reference contour map with the non-zero pixel point to obtain a matching result.
2. The image difference detection method according to claim 1, wherein the converting the reference chart comprises:
performing feature point detection and matching on the reference chart and the test chart to obtain matched feature point pairs;
calculating, by using the matched feature point pairs, a transformation matrix that transforms the feature points in the reference chart into the matched feature points in the test chart;
and multiplying the transformation matrix by the reference chart to obtain the converted reference chart.
3. The image difference detection method according to claim 1, wherein the matching result comprises the non-zero pixel points of the test contour map for which no matching pixel point is found in the reference contour map, and the method further comprises:
calculating the number of the unmatched pixel points;
and outputting the number and the matching result.
4. The image difference detection method according to claim 1, wherein the translating the converted reference chart comprises:
translating the converted reference chart in a first direction and in a second direction respectively, wherein the first direction is up or down, and the second direction is left or right;
the translating the test chart comprises:
translating the test chart in the first direction and in the second direction, respectively.
5. The image difference detection method according to claim 1, wherein the matching the pixel points near the position coordinate in the reference contour map with the non-zero pixel point comprises:
determining a matching window of a preset size in the reference contour map, with the position coordinate in the reference contour map as a center;
and searching, in the matching window, for a pixel point that matches the non-zero pixel point.
6. The image difference detection method according to claim 5, wherein the searching, in the matching window, for a pixel point that matches the non-zero pixel point comprises:
determining that a pixel point matches the non-zero pixel point if the difference between the pixel value of the pixel point on each channel and the pixel value of the non-zero pixel point on the corresponding channel is smaller than a preset threshold.
7. The image difference detection method according to claim 1, wherein the distance by which the converted reference chart and the test chart are translated is two pixels.
8. An image difference detection apparatus, comprising:
an image acquisition module, configured to acquire a reference chart and a test chart, wherein the reference chart and the test chart are obtained by shooting the same object, and the reference chart is a picture consistent with the shot object;
a conversion module, configured to convert the reference chart so that a shooting angle of the converted reference chart is consistent with a shooting angle of the test chart;
a reference contour map calculation module, configured to translate the converted reference chart and perform an image frame difference operation on the translated reference chart and the converted reference chart to obtain a reference contour map, wherein the reference contour map represents a contour of the reference chart;
a test contour map calculation module, configured to translate the test chart and perform an image frame difference operation on the translated test chart and the test chart to obtain a test contour map, wherein the test contour map represents a contour of the test chart;
and a matching module, configured to determine, for each non-zero pixel point in the test contour map, a position coordinate of the non-zero pixel point, and to match pixel points near the position coordinate in the reference contour map with the non-zero pixel point to obtain a matching result.
9. A storage medium having computer instructions stored thereon, wherein the computer instructions, when executed, perform the steps of the image difference detection method according to any one of claims 1 to 7.
10. A terminal comprising a memory and a processor, the memory having stored thereon computer instructions executable on the processor, wherein the processor, when executing the computer instructions, performs the steps of the image difference detection method according to any one of claims 1 to 7.
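Claim 2's conversion step multiplies a transformation matrix by the reference chart. Applying such a matrix can be sketched as follows; the choice of a 3x3 perspective (homography) matrix, inverse mapping, and nearest-neighbour sampling are assumptions for illustration, since the claims do not fix the matrix type or the sampling scheme.

```python
# Sketch: apply a 3x3 transformation matrix to an image by mapping each
# output pixel back to the source (inverse warping, nearest-neighbour).
def warp(img, h_inv, fill=0):
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # homogeneous coordinates: (sx, sy, sw) = h_inv @ (x, y, 1)
            sx = h_inv[0][0] * x + h_inv[0][1] * y + h_inv[0][2]
            sy = h_inv[1][0] * x + h_inv[1][1] * y + h_inv[1][2]
            sw = h_inv[2][0] * x + h_inv[2][1] * y + h_inv[2][2]
            if sw != 0:
                ix, iy = round(sx / sw), round(sy / sw)
                if 0 <= ix < w and 0 <= iy < h:
                    out[y][x] = img[iy][ix]
    return out

# A pure translation by (+1, +1): its inverse maps output (x, y) back to
# source (x - 1, y - 1), so the marked pixel moves from (0, 0) to (1, 1).
h_inv = [[1, 0, -1], [0, 1, -1], [0, 0, 1]]
img = [[0] * 4 for _ in range(4)]
img[0][0] = 9
shifted = warp(img, h_inv)
print(shifted[1][1])
```

In practice the matrix itself would be estimated from the matched feature point pairs of claim 2 (for example by a least-squares or RANSAC fit), which is outside the scope of this sketch.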
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910876627.6A CN110598795A (en) | 2019-09-17 | 2019-09-17 | Image difference detection method and device, storage medium and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110598795A true CN110598795A (en) | 2019-12-20 |
Family
ID=68860197
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910876627.6A Pending CN110598795A (en) | 2019-09-17 | 2019-09-17 | Image difference detection method and device, storage medium and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110598795A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8892594B1 (en) * | 2010-06-28 | 2014-11-18 | Open Invention Network, Llc | System and method for search with the aid of images associated with product categories |
US20150062165A1 (en) * | 2013-08-29 | 2015-03-05 | Seiko Epson Corporation | Image processing device and head mounted display apparatus including the same |
CN109034185A (en) * | 2018-06-08 | 2018-12-18 | 汪俊 | A kind of street view image contrast difference method and device |
CN109308716A (en) * | 2018-09-20 | 2019-02-05 | 珠海市君天电子科技有限公司 | A kind of image matching method, device, electronic equipment and storage medium |
Non-Patent Citations (3)
Title |
---|
HARDVB: "A New Algorithm for Obtaining Object Contours by Means of Image Jitter", 《HTTPS://BLOG.CSDN.NET/HARDVB/ARTICLE/DETAILS/469372》 * |
HE Hanwu, WU Yueming, CHEN He'en (eds.): "Augmented Reality Interaction Methods and Implementation", 31 December 2012 * |
NA Yan, JIAO Licheng (eds.): "Image Fusion Methods Based on Multiresolution Analysis Theory", 31 May 2007 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113450299A (en) * | 2020-03-09 | 2021-09-28 | 深圳中科飞测科技股份有限公司 | Image matching method, computer device and readable storage medium |
CN111507411A (en) * | 2020-04-20 | 2020-08-07 | 北京英迈琪科技有限公司 | Image comparison method and system |
CN112485709A (en) * | 2020-11-09 | 2021-03-12 | 泓准达科技(上海)有限公司 | Method, device, medium and electronic equipment for detecting internal circuit abnormality |
CN112485709B (en) * | 2020-11-09 | 2024-03-29 | 泓准达科技(上海)有限公司 | Method, device, medium and electronic equipment for detecting abnormality of internal circuit |
CN112734837A (en) * | 2020-12-29 | 2021-04-30 | 上海商汤临港智能科技有限公司 | Image matching method and device, electronic equipment and vehicle |
WO2022142206A1 (en) * | 2020-12-29 | 2022-07-07 | 上海商汤临港智能科技有限公司 | Image matching method and apparatus, electronic device, and vehicle |
CN112734837B (en) * | 2020-12-29 | 2024-03-22 | 上海商汤临港智能科技有限公司 | Image matching method and device, electronic equipment and vehicle |
CN114286088A (en) * | 2021-12-21 | 2022-04-05 | 长沙景嘉微电子股份有限公司 | Video screen splash detection method, device and storage medium applied to graphic processor |
CN115497615A (en) * | 2022-10-24 | 2022-12-20 | 王征 | Remote medical treatment method and system |
CN115497615B (en) * | 2022-10-24 | 2023-09-01 | 北京亿家老小科技有限公司 | Remote medical method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110598795A (en) | Image difference detection method and device, storage medium and terminal | |
CN111028213B (en) | Image defect detection method, device, electronic equipment and storage medium | |
US10699476B2 (en) | Generating a merged, fused three-dimensional point cloud based on captured images of a scene | |
CN110909750B (en) | Image difference detection method and device, storage medium and terminal | |
CN110546651B (en) | Method, system and computer readable medium for identifying objects | |
CN108229475B (en) | Vehicle tracking method, system, computer device and readable storage medium | |
CN108470356B (en) | Target object rapid ranging method based on binocular vision | |
US20190220685A1 (en) | Image processing apparatus that identifies object and method therefor | |
CN109993086B (en) | Face detection method, device and system and terminal equipment | |
US20230206594A1 (en) | System and method for correspondence map determination | |
US20100033584A1 (en) | Image processing device, storage medium storing image processing program, and image pickup apparatus | |
CN105009170A (en) | Object identification device, method, and storage medium | |
US9767383B2 (en) | Method and apparatus for detecting incorrect associations between keypoints of a first image and keypoints of a second image | |
CN109376641B (en) | Moving vehicle detection method based on unmanned aerial vehicle aerial video | |
KR102195826B1 (en) | Keypoint identification | |
KR100591608B1 (en) | Method for searching matching point in image matching | |
CN111681271B (en) | Multichannel multispectral camera registration method, system and medium | |
KR101784620B1 (en) | Method and device for measuring confidence of depth by stereo matching | |
US9098746B2 (en) | Building texture extracting apparatus and method thereof | |
CN111630569B (en) | Binocular matching method, visual imaging device and device with storage function | |
CN115527160A (en) | Defect monitoring method and device for well lid in road | |
JP2013182416A (en) | Feature amount extraction device, feature amount extraction method, and feature amount extraction program | |
CN108280815B (en) | Geometric correction method for monitoring scene structure | |
KR101491334B1 (en) | Apparatus and method for detecting color chart in image | |
WO2024047847A1 (en) | Detection device, detection method, and detection program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20191220 |