CN113506297B - Printing data identification method based on big data processing - Google Patents
Printing data identification method based on big data processing
- Publication number
- CN113506297B CN113506297B CN202111063256.3A CN202111063256A CN113506297B CN 113506297 B CN113506297 B CN 113506297B CN 202111063256 A CN202111063256 A CN 202111063256A CN 113506297 B CN113506297 B CN 113506297B
- Authority
- CN
- China
- Prior art keywords
- connected domain
- image
- label
- printing
- domain
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 30
- 230000007547 defect Effects 0.000 claims abstract description 24
- 238000003708 edge detection Methods 0.000 claims abstract description 11
- 230000011218 segmentation Effects 0.000 claims abstract description 8
- 230000002159 abnormal effect Effects 0.000 claims abstract description 6
- 238000005516 engineering process Methods 0.000 claims abstract description 5
- 238000005286 illumination Methods 0.000 abstract description 7
- 238000001514 detection method Methods 0.000 abstract description 3
- 230000009286 beneficial effect Effects 0.000 abstract description 2
- 238000004458 analytical method Methods 0.000 description 6
- 238000010586 diagram Methods 0.000 description 3
- 230000005856 abnormality Effects 0.000 description 1
- 238000011179 visual inspection Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30144—Printing quality
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a printing data identification method based on big data processing, which comprises the following steps: step one: segmenting the acquired RGB image by using a semantic segmentation technology to obtain a printed-matter image; step two: processing the standard image and the printed image to obtain their respective image descriptions; step three: comparing the image descriptions of the standard image and the printed image to judge abnormal printing conditions. Compared with the prior art, the invention has the following beneficial effects: the method extracts the edges in the pattern through edge detection and then carries out defect detection using the variation differences of the descriptions of the individual connected domains, so that interference from illumination is avoided and the reliability of the result is improved; the abnormal condition is judged using the description of the connected domains of the image content rather than the response values of the corresponding pixels.
Description
Technical Field
The invention relates to the field of big data processing, in particular to a printing data identification method based on big data processing.
Background
Existing methods for detecting defects of printed matter include not only subjective visual inspection but also, frequently, comparative analysis of the data of the printed matter to be detected against standard template data. For example, in colorimetric detection a beam of light is projected onto the printed matter, the tristimulus values of the color are obtained by an instrument and converted into comparable numerical values, and these are then compared with the values of the sample; wherever an abnormality occurs is a position where a defect exists. A captured image may also be compared with a standard image based on image differences. However, these methods are easily disturbed by factors such as illumination, and the final result is inaccurate. The standard template is only an electronic file and in many cases is not disturbed by the environment, whereas the image of the printed product is actually acquired by a camera, and the intensity, direction and so on of the light source in a real scene are random; direct comparison is therefore easily affected by illumination, i.e. it is difficult to determine whether a place of difference is a real printing defect or merely a difference in illumination.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a printing data identification method based on big data processing.
In order to achieve the purpose, the invention adopts the following technical scheme:
a printing data identification method based on big data processing comprises the following steps: step one: segmenting the acquired RGB image by using a semantic segmentation technology to obtain a printed-matter image; step two: processing the standard image and the printed image to obtain their respective image descriptions; step three: comparing the image descriptions of the standard image and the printed image to judge abnormal printing conditions.
Further, the second step is specifically: carrying out edge detection on the printed-matter image to obtain a corresponding edge image; extracting the closed connected domains in the edge image; obtaining their descriptions by connected-domain analysis; and combining the connected-domain descriptions to obtain the description of the printed-product image.
Further, performing edge detection on the printed image to obtain a corresponding edge image is specifically: inputting the printed image, carrying out graying processing on the image, and carrying out edge detection on it with a Canny operator to obtain gradient edges, namely the edges of the pattern in the printing area.
Further, extracting the closed connected domains in the edge image is specifically: analyzing the connected domains of the edges of the pattern in the printing area by using a seed-filling method to obtain connected domains with different labels, and obtaining the value of the maximum label number, i.e. the total number of connected domains.
Further, obtaining their descriptions by connected-domain analysis is specifically: setting initial parameters for each connected domain of the printed image, namely the number of pixels N = 0 and the nesting layer number L = 0, together with the limit coordinates x_max, x_min, y_max, y_min;
because the pixel sizes of different images differ, the area parameter of a connected domain is represented by the ratio of the number of pixels of the connected domain to the total number of pixels of the whole image, that is, the connected-domain area parameter S = N / N_total, where N_total is the number of pixels of the whole image.
traversing the pixel points of the image row by row: the pixel points of each row in the image have corresponding connected-domain labels, in the following form:
wherein 0 denotes a background pixel, i.e. a non-connected-domain pixel, and the numbers other than 0 are the label numbers of the corresponding connected domains. The label values are processed to obtain the hierarchy information of the connected domains in that row. Because a connected domain is a closed region, its label number appears at least twice when passing from left to right over a row of pixels of the image: the first occurrence marks entering the connected domain and the second marks leaving it. For connected domains with a nested structure, the large connected domain necessarily contains the small one, so the nesting layer number of a connected domain does not change once determined; L = 0 indicates that the nesting layer number of the corresponding connected domain has not yet been determined, and when L is already non-zero its value need not be changed. A temporary variable C = 0 is set and the row is traversed from left to right. The first non-0 number is recorded; the corresponding number in the sequence is 1, and C is set to 1; because the nesting layer number of the connected domain labeled 1 is still L = 0, the value of the nesting layer number L of that connected domain is updated to C, indicating that its maximum nesting layer number is 1, and the recorded non-0 label sequence is {1}. For the second non-0 number, the corresponding number in the sequence is 3; since no 3 is present in the recorded label sequence, the number is recorded, giving the label sequence {1, 3}; C is then set to 2, indicating entry into a further nested connected region, and because the nesting layer number of the connected domain labeled 3 is still L = 0, L is updated to C, i.e. the maximum nesting layer number of the connected domain labeled 3 is 2. The third non-0 number is 2, and 2 has not been recorded in the non-0 label sequence, so the label sequence becomes {1, 3, 2}; because the nesting layer number of the connected domain labeled 2 is still L = 0, C is set to 3 and L is updated to C, i.e. the maximum nesting layer number of the connected domain labeled 2 is 3. Continuing the traversal, the fourth non-0 number is 2; because 2 already exists in the previously recorded sequence, the traversal of the connected domain labeled 2 has ended and it is no longer recorded into the label sequence. C is decreased by 1, so C = 2, i.e. the traversed pixel now lies in a connected domain of nesting level 2. By analogy, whenever a new connected domain is encountered, i.e. a label number not present in the recorded non-0 label sequence, C is increased by 1, indicating entry into a deeper nested region; whenever a connected domain is left, C is decreased by 1, returning to the nested region of the previous layer. In addition, each time 1 is added to C, it is necessary to judge whether the nesting layer number L corresponding to that label number is 0, until the pixels in the row have been traversed. Each time a pixel with a given label is traversed, the pixel count N corresponding to that label is increased by 1, and the coordinates (x, y) of the labeled pixel are compared with the existing limit coordinates: if x > x_max then x_max = x, otherwise x_max is kept; if x < x_min then x_min = x, otherwise x_min is kept; y_max and y_min are processed in the same way. After the traversal, the center point coordinates are obtained as x_c = (x_max + x_min) / 2 and y_c = (y_max + y_min) / 2; the area parameter S and the respective nesting layer numbers L are obtained; and the integral characteristic value of each connected domain is calculated, expressing the distance from the coordinates of the center point of the connected domain to the origin and the area of the connected domain.
Further, combining the connected-domain descriptions to obtain the description of the printed image is specifically: the description of the whole image is represented as the set of descriptions (integral characteristic value and nesting layer number) of all its connected domains.
Similarly, the above operations are repeated for the standard image to obtain the image description data of the standard image.
Further, the third step is specifically: grouping the connected domains of the standard image and the printed image according to the nesting layer number, and counting the number corresponding to each hierarchy; comparing the corresponding numbers of each hierarchy of the standard image and the printed image, where consistent numbers indicate a normal condition, while hierarchies with inconsistent numbers require further detection of the connected domains within the hierarchy: corresponding connected-domain flags are set for the standard image, together with an integral feature difference, and corresponding connected-domain flags for the printed image, and the correspondence among all connected domains of the hierarchy is searched: for a certain connected domain of the printed image, the difference between its integral characteristic value and that of each connected domain in the standard image is calculated;
the pair with the minimal difference are the related connected domains, i.e. the connected domain corresponding to the label number in the printed image corresponds to the connected domain of the matching label number in the standard image;
if the flag of the standard-image connected domain shows that it has no corresponding connected domain yet, the correspondence is stored for that connected domain in the standard image;
if the flag of the standard-image connected domain indicates that it already has a corresponding connected domain in the printed image, the connected domain of the standard image is compared with the two candidates through their corresponding difference values:
if the new difference is smaller, the area difference between the new candidate connected domain of the printed image and the standard-image connected domain is obtained, as well as the area difference between the stored corresponding connected domain and the standard-image connected domain; if the area difference corresponding to the new candidate is smaller, the stored correspondence is updated; otherwise it is not updated;
if the new difference is not smaller, the correspondence is not updated. After the traversal is finished, the corresponding connected-domain flag of a connected domain in the printed image falls into two cases:
in the first case, the connected domain has no corresponding connected domain in the standard image, i.e. it is a defect region; the defect lies either in a blank area inside the pattern or outside the pattern; the former has more nesting layers and does not affect the overall content of the printed product, so the influence degree of the defect is smaller, while the degree of the latter is larger, so the overall influence of defects in such regions is correspondingly larger;
in the second case, the area difference between the connected domain and its corresponding connected domain in the standard image is compared; owing to statistical error, an error within 3% of the total area of the standard image is normal, and if it exceeds 3% the pattern is considered to be under-printed or over-printed; the overall influence of the defects of these regions is expressed accordingly.
compared with the prior art, the invention has the following beneficial effects: the method extracts the edges in the pattern through edge detection and then carries out defect detection using the variation differences of the descriptions of the individual connected domains, so that interference from illumination is avoided and the reliability of the result is improved; the abnormal condition is judged using the description of the connected domains of the image content rather than the response values of the corresponding pixels.
Drawings
FIG. 1 is a system flow diagram;
FIG. 2 is a diagram of a connected domain tag format;
FIG. 3 is a nesting diagram;
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
See fig. 1. The invention mainly aims to detect the defects of printed matter, mainly for printed matter with fewer connected domains, such as posters.
Step one: extracting the printed image from the acquired RGB image by using a semantic segmentation technology. First, a DNN (deep neural network) is used to identify the printed matter in the captured image, since the captured image contains complex conditions such as background and the printed matter must be located. The image to be detected is processed with the semantic segmentation technology: the image of printing paper acquired by the camera is input, and semantic segmentation is performed on it with a DNN whose network structure is an Encoder-Decoder structure; the data set consists of various types of printing-paper images, and the labels fall into two categories, printed product and background. This is pixel-level classification, that is, every pixel in the image must be assigned a corresponding label: a pixel belonging to the printing paper is denoted by the value 1, and a pixel belonging to the background by the value 0. The loss function used by the network is the cross-entropy loss function. After the connected domain of the printed matter is obtained, the following operations are carried out: the result obtained by semantic segmentation is taken as a mask, and the corresponding printed-product image is extracted from the RGB image; the image is rotated according to the included angle between the long axis and the short axis of the mask, giving the corrected printed-product image. The mask segmentation and rotation here are conventional.
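The mask-and-crop portion of this step can be sketched in Python (an illustrative, non-authoritative fragment: the DNN itself and the rotation correction are omitted, and the helper name `extract_print_region` is hypothetical). It assumes the segmentation network has already produced a 0/1 mask in which 1 marks printing paper:

```python
import numpy as np

def extract_print_region(rgb, mask):
    """Crop the printed-matter region out of an RGB image using a
    0/1 segmentation mask (1 = printing paper, 0 = background).
    Assumes the mask is non-empty."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1          # tight vertical extent
    x0, x1 = xs.min(), xs.max() + 1          # tight horizontal extent
    out = rgb * mask[..., None]              # zero out the background
    return out[y0:y1, x0:x1]                 # crop around the mask
```

In a full pipeline the crop would be followed by the rotation correction derived from the mask's long and short axes.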
At this point the first step is finished, and the printed-matter image has been separated from the acquired RGB image.
Step two: the standard image and the printed image are subjected to connected-domain analysis to obtain their respective image descriptions. Changes in illumination easily cause changes of the pixel values in the image, so in order to avoid the influence of illumination the invention does not use gray values directly to obtain the image description. The process for obtaining the printed-matter description is as follows: carry out edge detection on the printed-matter image to obtain a corresponding edge image; extract the closed connected domains in the edge image; obtain their descriptions by connected-domain analysis; and combine the connected-domain descriptions to obtain the description of the printed-product image.
Carrying out edge detection on the printed-matter image to obtain a corresponding edge image: input the printed image, carry out graying processing on it, and then carry out edge detection with a Canny operator to obtain gradient edges, namely the edges of the pattern in the printing area;
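The graying step can be illustrated with standard luminance weights. As a much-reduced stand-in for the Canny operator (which additionally performs Gaussian smoothing, non-maximum suppression and hysteresis thresholding), the sketch below thresholds a central-difference gradient magnitude; the function names and the threshold value are illustrative assumptions, not the patent's exact procedure:

```python
import numpy as np

def gray(rgb):
    # standard ITU-R BT.601 luminance weights for graying an RGB image
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def gradient_edges(img, thresh=50.0):
    """Simplified gradient-edge detector: central-difference gradient
    magnitude followed by a fixed threshold (a stand-in for Canny)."""
    img = np.asarray(img, dtype=float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical gradient
    mag = np.hypot(gx, gy)                   # gradient magnitude
    return (mag > thresh).astype(np.uint8)   # 1 = edge pixel
```

In practice one would call an off-the-shelf Canny implementation on the grayed image rather than this reduced version.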
extracting the closed connected domains in the edge image: perform connected-domain analysis on the result of the previous step by using a seed-filling method to obtain connected domains with different labels, and obtain the value of the maximum label number, i.e. the total number of connected domains; the connected-domain label format is shown in FIG. 2. The principle is described at https://blog.csdn.net/liangchunjiang/article/details/79431339.
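A minimal seed-filling labeler consistent with this step might look like the following (an illustrative BFS flood fill over a 4-connected binary grid; the function name is hypothetical). It returns the label image and the maximum label number, i.e. the total number of connected domains:

```python
from collections import deque

def seed_fill_label(grid):
    """Label 4-connected foreground regions (value 1) of a binary grid.
    Returns (labels, k) where k is the maximum (total) label number."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    k = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] == 1 and labels[sy][sx] == 0:
                k += 1                          # new seed -> new label
                labels[sy][sx] = k
                q = deque([(sy, sx)])
                while q:                        # flood fill from the seed
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           grid[ny][nx] == 1 and labels[ny][nx] == 0:
                            labels[ny][nx] = k
                            q.append((ny, nx))
    return labels, k
```

Library routines such as OpenCV's connected-components functions perform the same labeling far faster; this sketch only makes the seed-filling idea concrete.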
Obtaining their descriptions by connected-domain analysis: first, set initial parameters for each connected domain of the printed image, namely the number of pixels N = 0 and the nesting layer number L = 0, together with the limit coordinates x_max, x_min, y_max, y_min. Because the pixel sizes of different images differ, the area parameter of a connected domain is expressed as the ratio of the number of pixels of the connected domain to the total number of pixels of the whole image, that is, the connected-domain area parameter S = N / N_total,
wherein N is the number of pixels of the connected domain and N_total is the number of pixels of the whole image.
Traversing pixel points of the image line by line to obtain: the pixel points of each row in the image have corresponding connected domain labels, and the form is as follows:
where 0 is the background (non-connected-domain) pixel and the numbers other than 0 are the label numbers of the corresponding connected domains. The label values are processed to obtain the hierarchy information of the connected domains in that row. Since a connected domain is a closed region, its label number appears at least twice from left to right across a row of pixels in the image: the first time on entering the connected domain and the second time on leaving it. Furthermore, for connected domains with a nested structure, the large connected domain necessarily contains the small one, so the number of nesting layers of a connected domain does not change once determined; L = 0 indicates that the nesting layer number of the corresponding connected domain has not yet been determined, and when L is already non-zero there is no need to change its value. First, a temporary variable C = 0 is set and the row is traversed from left to right. The first non-0 number is recorded; the corresponding number in the sequence is 1, and C is set to 1; at this time, because the nesting layer number of the connected domain labeled 1 is still L = 0, the value of the nesting layer number L of that connected domain is updated to C, indicating that its maximum nesting layer number is 1, and the recorded non-0 label sequence is {1}. For the second non-0 number, the corresponding number in the sequence is 3; since no 3 is present in the recorded label sequence, the number is recorded, giving the label sequence {1, 3}; C is then set to 2, indicating entry into a further nested connected region, and because the nesting layer number of the connected domain labeled 3 is still L = 0, L is updated to C, i.e. the maximum nesting layer number of the connected domain labeled 3 is 2. Similarly, the third non-0 number is 2, and 2 has not been recorded in the non-0 label sequence, so the label sequence becomes {1, 3, 2}; because the nesting layer number of the connected domain labeled 2 is still L = 0, C is set to 3 and L is updated to C, i.e. the maximum nesting layer number of the connected domain labeled 2 is 3. Continuing the traversal, the fourth non-0 number is 2; because 2 already exists in the previously recorded sequence, the traversal of the connected domain labeled 2 has ended, and it is no longer recorded into the label sequence. C is decreased by 1, so C = 2, i.e. the traversed pixel now lies in a connected domain of nesting level 2. By analogy, whenever a new connected domain is encountered, i.e. a label number not present in the recorded non-0 label sequence, C is increased by 1, indicating entry into a deeper nested region; whenever a connected domain is left, C is decreased by 1, returning to the nested region of the previous layer. In addition, each time the add-1 operation is performed on C, it is necessary to judge whether the nesting layer number L corresponding to that label number is 0, until the row of pixels has been traversed. The whole flow is shown in fig. 3 below.
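The row-scanning procedure above can be rendered as a short sketch (illustrative Python under the patent's idealization that each closed domain's label appears exactly twice per row; the function name `row_nesting` is not from the source):

```python
def row_nesting(row_labels, L):
    """Update nesting layer numbers L (dict: label -> layer, 0 or absent =
    not yet determined) from one scan row of connected-domain labels
    (0 = background). Each closed domain's label is assumed to appear
    exactly twice in the row: once on entry, once on exit."""
    c = 0
    seen = []                        # recorded non-0 label sequence
    for lab in row_labels:
        if lab == 0:
            continue
        if lab not in seen:
            c += 1                   # entered a deeper nested region
            seen.append(lab)
            if L.get(lab, 0) == 0:   # layer number fixed only once
                L[lab] = c
        else:
            c -= 1                   # left that connected domain
    return L
```

Running it on the worked example from the text, a row whose non-0 labels read 1, 3, 2, 2, 3, 1 yields nesting layers 1, 2 and 3 for labels 1, 3 and 2 respectively.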
Each time a pixel with a given label is traversed, the pixel count N corresponding to that label is increased by 1, and the coordinates of the labeled pixel are compared with the existing limit coordinates. For example, suppose the traversed pixel of the label has coordinates (x, y): if x > x_max then x_max = x, otherwise x_max is kept; if x < x_min then x_min = x, otherwise x_min is kept; y_max and y_min are processed in the same way. After the traversal, the center point coordinates are obtained as x_c = (x_max + x_min) / 2 and y_c = (y_max + y_min) / 2; the area parameter S and the respective nesting layer numbers L are obtained; and the integral characteristic value of each connected domain is calculated, expressing the distance from the coordinates of the center point of the connected domain to the origin and the area of the connected domain.
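The per-label accumulation of pixel counts, limit coordinates, center point and area parameter described above might be sketched as follows (illustrative Python; the dictionary layout and field names are assumptions):

```python
def domain_stats(labels):
    """Accumulate, per connected-domain label, the pixel count N and limit
    coordinates (x_min, x_max, y_min, y_max); derive the center point and
    the area parameter S = N / total pixel count of the image."""
    stats = {}
    h, w = len(labels), len(labels[0])
    for y in range(h):
        for x in range(w):
            lab = labels[y][x]
            if lab == 0:
                continue                       # background pixel
            s = stats.setdefault(lab, {"N": 0, "x_min": x, "x_max": x,
                                       "y_min": y, "y_max": y})
            s["N"] += 1
            s["x_min"] = min(s["x_min"], x); s["x_max"] = max(s["x_max"], x)
            s["y_min"] = min(s["y_min"], y); s["y_max"] = max(s["y_max"], y)
    total = h * w
    for s in stats.values():
        s["S"] = s["N"] / total                # area parameter
        s["center"] = ((s["x_max"] + s["x_min"]) / 2,
                       (s["y_max"] + s["y_min"]) / 2)
    return stats
```

The integral characteristic value combining the center-to-origin distance with the area would then be computed from `center` and `S`; its exact formula is not reproduced here because it appears only as an image in the source.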
Combining the connected-domain descriptions to obtain the description of the printed-product image: the description of the entire image is represented as the set of descriptions (integral characteristic value and nesting layer number) of all its connected domains.
Similarly, the above operations are repeated for the standard image to obtain the image description data of the standard image,
wherein the respective totals are the number of connected domains in the standard image and the number of connected domains in the printed image;
by this time, the second step is completed.
step three: comparing the image descriptions of the standard image and the printed image, and judging abnormal conditions. The connected domains of the standard image and the printed image are grouped according to the nesting layer number, and the number corresponding to each hierarchy is counted respectively;
the corresponding numbers of each hierarchy of the standard image and the printed image are compared: consistent numbers indicate a normal condition, while hierarchies with inconsistent numbers require further detection of the connected domains within the hierarchy. Corresponding connected-domain flags are set for the standard image, together with an integral feature difference, and corresponding connected-domain flags for the printed image, and the correspondence among all connected domains of the hierarchy is searched: for a certain connected domain of the printed image, the difference between its integral characteristic value and that of each connected domain in the standard image is calculated;
in principle, the pair with the minimal difference are the related connected domains, i.e. the connected domain of the printed image (with its corresponding label number) corresponds to the connected domain of the standard image (with its corresponding label number); if the flag of the standard-image connected domain shows that it has no corresponding connected domain yet, the correspondence and the flag value are updated;
if the flag of the standard-image connected domain indicates that it already has a corresponding connected domain in the printed image, the connected domain of the standard image is compared with the two candidates through their corresponding difference values:
if the new difference is smaller, the area difference between the new candidate connected domain of the printed image and the standard-image connected domain is obtained, as well as the area difference between the stored corresponding connected domain and the standard-image connected domain; if the area difference corresponding to the new candidate is smaller, the stored correspondence is updated; otherwise it is not updated;
after the traversal is finished, the corresponding connected-domain flag of a connected domain in the printed image falls into two cases: in the first case, the connected domain has no corresponding connected domain in the standard image, i.e. it is a defect region; the defect lies either in a blank area inside the pattern or outside the pattern; the former has more nesting layers and does not affect the overall content of the printed product, so the influence degree of the defect is smaller, while the degree of the latter is larger, so the overall influence of defects in such regions is correspondingly larger;
in the second case, the area difference between the connected domain and its corresponding connected domain in the standard image is compared; owing to statistical error, an error within 3% of the total area of the standard image is normal, and if it exceeds 3% the pattern is considered to be under-printed or over-printed; the overall influence of the defects of these regions is expressed accordingly.
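The minimal-difference matching between printed-image and standard-image connected domains can be sketched as below (an illustrative greedy version in Python; unlike the full method, ties between competing candidates are resolved here by the feature difference alone rather than by the area difference, and all names are hypothetical):

```python
def match_domains(print_feats, std_feats):
    """Greedy matching sketch: for each printed-image connected domain,
    find the standard-image domain with minimal |F_print - F_std|;
    keep at most one printed domain per standard domain (smaller
    difference wins). Returns {print_label: std_label}; printed labels
    that lose their standard domain to a better candidate are absent,
    which corresponds to the 'no corresponding domain' (defect) case."""
    best = {}                                 # std_label -> (diff, print_label)
    for p_lab, pf in print_feats.items():
        s_lab, diff = min(((s, abs(pf - sf)) for s, sf in std_feats.items()),
                          key=lambda t: t[1])
        if s_lab not in best or diff < best[s_lab][0]:
            best[s_lab] = (diff, p_lab)       # update the stored correspondence
    return {p: s for s, (_, p) in best.items()}
```

A subsequent pass would then compare areas of matched pairs against the 3% threshold and score unmatched domains by their nesting layer, as described above.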
the above description covers only preferred embodiments of the present invention, but the scope of protection of the present invention is not limited thereto; any equivalent substitution or change made according to the technical solution of the present invention and its inventive concept by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention.
Claims (5)
1. A printing data identification method based on big data processing comprises the following steps: segmenting the acquired RGB image by using a semantic segmentation technology to obtain a presswork image; step two: processing the standard image and the printing image to obtain respective image descriptions; the method comprises the following steps: carrying out edge detection on the printed matter image to obtain a corresponding edge image; extracting a closed connected domain in the edge image; setting initial parameters, printing each connected domain of imageRespectively setting corresponding initial values, wherein the initial values comprise the number N =0 of pixels and the number L =0 of layers; limit coordinates are:;
because the pixels of different images are different in size, the area parameter of the connected domain is represented by the ratio of the number of the pixels of the connected domain to the total number of the pixels of the whole image, that is, the area parameter S of the connected domain:
wherein N is the number of pixels of the connected domain,is the image of the whole imageThe number of elements;
traversing pixel points of the image line by line to obtain: the pixel points of each row in the image have corresponding connected domain label sequences, and the form is as follows:
(ii) a Wherein 0 is background pixel, namely non-connected domain pixel, the number outside 0 is label number of corresponding connected domain, process the label value, obtain the hierarchical information of connected domain under this row, because a connected domain is a closed area, so pass from left to right on a row of pixel of the picture, its label number will appear at least twice, first for starting to enter this connected domain, second for leaving this connected domain, there is connected domain of nested structure, must be that the big connected domain includes the small connected domain, so the nested number of layers of connected domain will not change once determining, if it is determined that the number of layers of connected domain will not change onceIndicates that the nesting layer number of the corresponding connected domain is not determined whenIn the process, the value of L does not need to be changed, a temporary variable C =0 is set, traversal is performed from left to right, the first non-0 digit is recorded, the corresponding digit in the tag sequence of the connected domain is 1, and the corresponding C is set to be 1, so that the nested layer number of the connected domain with the tag number of 1 is reduced, and the nested layer number of the connected domain with the tag number of 1 is reducedTherefore, the value of the nesting layer number L of the corresponding connected component is updated to C, which indicates that the maximum nesting layer number of the connected component is 1, and the recorded non-0 tag sequence is(ii) a A second non-0 digit, the corresponding digit in the above-mentioned connected domain tag sequence being 3, the digit 3 not being present in the recorded tag sequence, recording the sequenceNumber, the resulting tag sequence isAt this time orderIndicating entry into a further nested connected region, the largest nesting level of connected regions due to tag number 3UpdateThe maximum nesting layer number of the 
connected domain with label 3 is 2. The third non-zero digit is 2; since 2 is not in the recorded non-zero label sequence, it is recorded, and the sequence becomes {1, 3, 2}. Since the nesting depth of the connected domain with label 2 is undetermined, C is set to 3 and the depth is updated, so the maximum nesting depth of the connected domain with label 2 is 3. Continuing the traversal, the fourth non-zero digit is 2; because 2 already exists in the previously recorded sequence {1, 3, 2}, the traversal of the connected domain with label 2 has finished, and it is not recorded into the label sequence again; instead C is reduced by 1, giving C = 2, i.e. the pixels now being traversed lie in a connected domain at nesting depth 2. By analogy, whenever a new connected domain is encountered (a label number absent from the recorded non-zero label sequence), C is increased by 1, indicating entry into a deeper nested region; whenever a connected domain is left, C is reduced by 1, returning to the nested region one level up. In addition, each time C is increased by 1, it must be checked whether the nesting depth L corresponding to that label is still 0, and if so L is updated to C; this continues until all pixels in the row have been traversed. Each time a pixel with a given label is traversed, the pixel count of that label is increased by 1, and the pixel's coordinates are compared with the current minimum and maximum horizontal and vertical coordinates x_min, x_max, y_min, y_max of that label: if the traversed pixel of the label has coordinates (x, y) and x < x_min, then x_min = x, otherwise x_min is unchanged; if x > x_max, then x_max = x, otherwise x_max is unchanged; the vertical coordinate y is processed in the same way. After the traversal, the center-point coordinates (x_c, y_c) are obtained, where x_c = (x_min + x_max)/2 and y_c = (y_min + y_max)/2, along with the pixel count and the nesting depth L of each connected domain;
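The row-scan bookkeeping described above can be sketched as follows. The example row and label values are illustrative, and the sketch assumes, as the claim states, that each label appears exactly twice per row (once entering, once leaving the domain):

```python
from typing import Dict, List

def scan_row_depths(row: List[int]) -> Dict[int, int]:
    """Track nesting depth C while scanning one row of labels (0 = background)."""
    depth: Dict[int, int] = {}  # label -> nesting depth L, fixed once set
    seen: List[int] = []        # recorded non-zero label sequence
    c = 0                       # current nesting depth counter
    for label in row:
        if label == 0:
            continue            # background pixel
        if label not in seen:   # new connected domain: one level deeper
            seen.append(label)
            c += 1
            if depth.get(label, 0) == 0:
                depth[label] = c    # depth is only set while it is still 0
        else:                   # second occurrence: leaving the domain
            c -= 1
    return depth

# Row in which label 1 encloses 3, which in turn encloses 2:
print(scan_row_depths([0, 1, 3, 2, 0, 2, 3, 1, 0]))  # {1: 1, 3: 2, 2: 3}
```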
calculating an overall feature value for each connected domain, expressing the distance from the coordinates of the connected domain's center point to the origin together with the area of the connected domain.
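The exact formula for the overall feature value appears only as an image in the original and did not survive extraction. The sketch below shows one plausible reading consistent with the surrounding description (distance of the center point to the origin, combined with the area); the additive combination is an assumption, not the patent's verbatim formula:

```python
import math

def overall_feature(x_min, x_max, y_min, y_max, area):
    """Combine center-to-origin distance and area into one value.

    The additive combination is an illustrative assumption only.
    """
    xc = (x_min + x_max) / 2    # center point, as derived in step two
    yc = (y_min + y_max) / 2
    return math.hypot(xc, yc) + area

print(overall_feature(2, 6, 2, 6, 10))  # center (4, 4): 4*sqrt(2) + 10
```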
Step three: and comparing the image descriptions of the standard image and the printing image to judge the abnormal printing condition.
2. The big data processing-based print data identification method according to claim 1, wherein performing edge detection on the printed image to obtain the corresponding edge image specifically comprises: inputting the printed image, converting it to grayscale, and performing edge detection with a Canny operator to obtain the gradient edges, i.e. the edges of the pattern in the printing area.
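As a dependency-free illustration of the "gradient edge" idea, the sketch below thresholds a central-difference gradient magnitude on a grayscale grid. It is a simplified stand-in for the Canny operator named in the claim, omitting Canny's smoothing, non-maximum suppression, and hysteresis stages:

```python
def gradient_edges(img, thresh):
    """Mark pixels whose central-difference gradient magnitude >= thresh."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):              # borders left as non-edge
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            if (gx * gx + gy * gy) ** 0.5 >= thresh:
                edges[y][x] = 1
    return edges

# A vertical step from dark (0) to bright (255) yields an edge column:
step = [[0, 0, 255, 255, 255] for _ in range(5)]
```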
3. The big data processing-based printing data identification method according to claim 1, wherein extracting the closed connected domains in the edge image specifically comprises: analyzing the connected domains of the pattern edges in the printing area using a seed-filling method to obtain connected domains with different labels, and taking the value of the maximum label number as the total number of connected domains.
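A minimal sketch of seed-fill connected-domain labeling as described in this claim. The 4-connectivity neighborhood is an assumption, since the claim does not specify one:

```python
from collections import deque

def label_components(mask):
    """Flood-fill from each unlabeled foreground seed, assigning labels 1, 2, ..."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and labels[sy][sx] == 0:
                current += 1                   # new seed -> new label
                labels[sy][sx] = current
                queue = deque([(sy, sx)])
                while queue:                   # BFS fill of this domain
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and labels[ny][nx] == 0:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current                     # maximum label == total count
```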
4. The big data processing-based print data identification method according to claim 1, wherein obtaining the description of the printed image by combining the descriptions of the connected domains specifically comprises: representing the description of the whole image as the combined descriptions of all of its connected domains. Similarly, the above operations are repeated for the standard image to obtain the image description data of the standard image.
5. The big data processing-based printing data identification method according to claim 1, wherein the third step specifically comprises: traversing the connected domains of the standard image and the printed image, grouping the connected domains by nesting depth L, and counting the number belonging to each level; comparing the count of each level between the standard image and the printed image, where equal counts indicate a normal condition, while a level with unequal counts requires further examination of the connected domains in that level: a connected-domain flag and an overall feature difference are set for each connected domain in the standard image, and a connected-domain flag for each connected domain in the printed image; the correspondence among all connected domains of the level is then searched: for a given connected domain of the printed image, the difference between its overall feature value and that of each connected domain in the standard image is computed;
the pair with the minimal difference is taken as corresponding: the connected domain of the printed image with the given label number corresponds to the connected domain of the standard image whose label number minimizes the difference. If the flag of that standard-image connected domain indicates that it has no corresponding connected domain yet, its flag and difference value are updated. If the flag indicates that the standard-image connected domain already has a corresponding connected domain in the printed image, the existing match and the new candidate are compared: the area difference between the new candidate connected domain of the printed image and the standard-image connected domain is obtained, as is the area difference of the previously matched connected domain; if the new candidate's area difference is smaller than that of the previous match, the flag and difference values are updated; otherwise they are not updated.
After the traversal is finished, the connected-domain flag of each connected domain in the printed image falls into one of two cases. If a connected domain does not exist in the standard image, it is a defect region; the defect lies either inside the pattern or in the blank area outside the pattern, and the overall influence of the defect of that region is computed accordingly. For connected domains that do correspond, the area difference with the corresponding connected domain in the standard image is compared: owing to statistical error, an error within 3% of the total area of the standard image is normal; if the error exceeds 3%, the pattern is considered under-printed or over-printed, and the overall influence of the defect of those regions is expressed accordingly.
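The matching and 3% area check of this claim can be sketched as follows. The dictionary shapes and the feature/area values are illustrative assumptions, not the patent's data structures:

```python
def match_and_check(printed, standard, total_area, tol=0.03):
    """Match printed-image domains to standard-image domains, flag area defects.

    printed/standard: {label: (overall_feature, area)}.
    """
    best = {}  # standard label -> (printed label, area difference)
    for p_label, (p_feat, p_area) in printed.items():
        # minimal overall-feature difference picks the candidate pairing
        s_label = min(standard, key=lambda s: abs(standard[s][0] - p_feat))
        area_err = abs(standard[s_label][1] - p_area)
        # if the standard domain already has a match, keep the candidate
        # whose area difference is smaller (per the claim, paraphrased)
        if s_label not in best or area_err < best[s_label][1]:
            best[s_label] = (p_label, area_err)
    # area errors beyond 3% of the standard image's total area indicate
    # under- or over-printing; within 3% is treated as statistical error
    abnormal = [(s, p) for s, (p, err) in best.items()
                if err > tol * total_area]
    return best, abnormal
```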
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111063256.3A CN113506297B (en) | 2021-09-10 | 2021-09-10 | Printing data identification method based on big data processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113506297A CN113506297A (en) | 2021-10-15 |
CN113506297B true CN113506297B (en) | 2021-12-03 |
Family
ID=78017145
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113506297B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118134908B (en) * | 2024-04-30 | 2024-07-12 | 陕西博越腾达科技有限责任公司 | Printing monitoring image analysis method for 3D printing |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030035653A1 (en) * | 2001-08-20 | 2003-02-20 | Lyon Richard F. | Storage and processing service network for unrendered image data |
CN109308700A (en) * | 2017-07-27 | 2019-02-05 | 南京敏光视觉智能科技有限公司 | A kind of visual identity defect inspection method based on printed matter character |
CN111242896A (en) * | 2019-12-31 | 2020-06-05 | 电子科技大学 | Color printing label defect detection and quality rating method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |