CN114841998A - Artificial intelligence-based packaging printing abnormity monitoring method - Google Patents


Info

Publication number
CN114841998A
Authority
CN
China
Prior art keywords
image
window
pixel
rgbd
images
Prior art date
Legal status
Granted
Application number
CN202210763023.2A
Other languages
Chinese (zh)
Other versions
CN114841998B (en)
Inventor
王典震
Current Assignee
Shandong Jialu Packaging Technology Co ltd
Original Assignee
Shandong Jialu Packaging Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Jialu Packaging Technology Co ltd filed Critical Shandong Jialu Packaging Technology Co ltd
Priority to CN202210763023.2A priority Critical patent/CN114841998B/en
Publication of CN114841998A publication Critical patent/CN114841998A/en
Application granted granted Critical
Publication of CN114841998B publication Critical patent/CN114841998B/en
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30144Printing quality
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of data identification and processing, and in particular to an artificial-intelligence-based packaging printing anomaly monitoring method. By acquiring RGBD images of the product surface under different set illumination angles and performing the corresponding data identification and processing on those images, the method accurately identifies the surface anomaly points of the packaged printed product, effectively solving the problem of inaccurate detection results caused by manual inspection of the print quality of packaged printed products.

Description

Artificial intelligence-based packaging printing anomaly monitoring method
Technical Field
The invention relates to the technical field of data identification and processing, and in particular to an artificial-intelligence-based packaging printing anomaly monitoring method.
Background
As people's aesthetic requirements rise, more and more packaged printed products are expected not only to have an attractive planar design but also to show raised-and-recessed texture and luster. To meet these requirements, packaged printed products often undergo finishing processes such as gold stamping, embossing, UV (ultraviolet) coating and die cutting after printing is completed. Since high-quality packaging printing is an important way to increase the added value of goods and enhance their competitiveness, the print quality of packaged printed products needs to be monitored.
At present, quality monitoring of packaged printed products usually relies on manual visual inspection, which suffers from strong subjectivity, rapid fatigue and susceptibility to illumination interference, resulting in low detection speed and inaccurate detection results.
Disclosure of Invention
The invention aims to provide an artificial-intelligence-based packaging printing anomaly monitoring method that solves the problem of inaccurate detection results caused by manual inspection of the print quality of packaged printed products.
To solve this technical problem, the invention provides an artificial-intelligence-based packaging printing anomaly monitoring method comprising the following steps:
acquiring surface RGBD images of a packaged printed product to be monitored and of a standard packaged printed product under n different set illumination angles, thereby obtaining 2n preprocessed surface RGBD images;
inputting the 2n preprocessed surface RGBD images into a packaging printing anomaly monitoring network, wherein an embedding layer of the network performs data identification on the 2n preprocessed surface RGBD images, determines a merged image corresponding to the packaged printed product to be monitored and the standard packaged printed product, and sends the merged image to the attention block layer of the network;
the attention block layer of the packaging printing anomaly monitoring network comprises attention blocks connected in sequence; each attention block forwards the merged image sent by the embedding layer or by the previous attention block to the windowing pyramid of the network, which windows the merged image according to the image format corresponding to that attention block to obtain the window images and sends each window image back to the corresponding attention block; each attention block performs data processing on the window images it receives to obtain a data-processed merged image and sends it to the next attention block, the last attention block sending its data-processed merged image to the classifier of the network;
the classifier of the packaging printing anomaly monitoring network receives the data-processed merged image sent by the last attention block and determines from it an abnormal-pixel binary image of the packaged printed product to be monitored;
and determining the surface anomaly points of the packaged printed product to be monitored according to the abnormal-pixel binary image.
Further, the step of determining the merged image corresponding to the packaged printed product to be monitored and the standard packaged printed product comprises:
determining each pixel component of each pixel point in the n preprocessed surface RGBD images corresponding to the packaged printed product to be monitored;
splicing, for each position, the pixel components of the n pixel points at that position in the n preprocessed surface RGBD images corresponding to the packaged printed product to be monitored, thereby obtaining the color-variation vector corresponding to that position;
determining each pixel component of each pixel point in the n preprocessed surface RGBD images corresponding to the standard packaged printed product;
splicing, for each position, the pixel components of the n pixel points at that position in the n preprocessed surface RGBD images corresponding to the standard packaged printed product, thereby obtaining the corresponding color-variation vector;
splicing, for each position, the color-variation vector of the packaged printed product to be monitored, the color-variation vector of the standard packaged printed product, and an initialized classification vector corresponding to that position, thereby obtaining the pixel vector corresponding to that position in the 2n preprocessed surface RGBD images;
and constructing the merged image corresponding to the packaged printed product to be monitored and the standard packaged printed product, wherein the pixel value of each pixel point in the merged image is the pixel vector corresponding to the pixel point at the same position in the 2n preprocessed surface RGBD images.
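As a minimal, non-limiting sketch of the merging step above, the following Python code assumes the n preprocessed images are stored as arrays of shape (n, H, W, C) and that the initialized classification vector is a zero vector of an illustrative length:

```python
import numpy as np

def build_merged_image(monitored, standard, cls_dim=4):
    """Sketch of the embedding-layer merge step.

    monitored, standard: arrays of shape (n, H, W, C) holding the n
    preprocessed surface RGBD images (C pixel components per point).
    Returns an (H, W, 2*n*C + cls_dim) array: the pixel value of each
    point in the merged image is the concatenation of the two
    color-variation vectors plus an initialized classification vector.
    """
    n, H, W, C = monitored.shape
    # Color-variation vector: pixel components of the n images at the
    # same position, spliced along the channel axis -> (H, W, n*C).
    var_monitored = monitored.transpose(1, 2, 0, 3).reshape(H, W, n * C)
    var_standard = standard.transpose(1, 2, 0, 3).reshape(H, W, n * C)
    cls_vec = np.zeros((H, W, cls_dim))  # initialized classification vector
    return np.concatenate([var_monitored, var_standard, cls_vec], axis=-1)

# Toy example: n=3 illumination angles, 8x8 images, 4 channels (R, G, B, D).
mon = np.random.rand(3, 8, 8, 4)
std = np.random.rand(3, 8, 8, 4)
merged = build_merged_image(mon, std)
print(merged.shape)  # (8, 8, 28): 2*3*4 + 4 components per pixel
```

The classification-vector length `cls_dim` is purely illustrative; the patent does not specify it.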
Further, the attention block layer of the package printing anomaly monitoring network comprises four attention blocks connected in sequence, and the image formats corresponding to the four attention blocks are as follows: the number of windows corresponding to the first attention block is 1, the number of windows corresponding to the second attention block is 4, the number of windows corresponding to the third attention block is 16, the number of windows corresponding to the fourth attention block is 64, and the number of cells in each window corresponding to the four attention blocks is the same.
Further, the number of cells in each window corresponding to the four attention blocks is calculated as:

N = (H × W) / 64

where N is the number of cells in each window of the four attention blocks, and H and W are the two values of the resolution of the merged image.
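A short sketch can confirm that the window counts (1, 4, 16, 64) together with unit sizes of 8, 4, 2 and 1 pixels on a side keep the number of cells per window equal at every level; the helper names and the example resolution are illustrative only:

```python
def cells_per_window(H, W):
    """Number of cells N in each window, N = (H*W)/64, assuming both
    resolution values were preprocessed to multiples of 64."""
    assert H % 64 == 0 and W % 64 == 0
    return (H * W) // 64

def check_levels(H, W):
    # (number of windows, unit side in pixels) for the four attention blocks
    levels = [(1, 8), (4, 4), (16, 2), (64, 1)]
    N = cells_per_window(H, W)
    for windows, side in levels:
        units_total = (H * W) // (side * side)  # units in the whole image
        assert units_total // windows == N      # same N at every level
    return N

print(check_levels(128, 192))  # 384
```

This is why the per-window cell count N is the same constant for all four attention blocks.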
Further, each attention block performs data processing on each window image received by the attention block, including:
determining the illumination sensitivity corresponding to each window image according to the pixel value of each pixel point in each window image;
acquiring binary images corresponding to m platemaking files corresponding to a packaging printing product to be monitored, and determining binary sub-images corresponding to each window image in the binary images corresponding to the m platemaking files;
determining the process complexity corresponding to each window image according to the number of foreground pixel points of each window image in the binary sub-image corresponding to the binary image corresponding to the m plate-making files;
acquiring target surface RGBD images in n preprocessed surface RGBD images corresponding to a standard packaging printing product, and determining corresponding target surface RGBD sub-images of the window images in the target surface RGBD images by combining the window images;
determining the texture complexity corresponding to each window image according to the corresponding target surface RGBD sub-image of each window image in the target surface RGBD image;
determining the number of attention heads corresponding to each window image according to the illumination sensitivity, the process complexity and the texture complexity corresponding to that window image;
and performing, by each attention block, data processing on each window image according to the number of attention heads corresponding to that window image.
Further, the step of determining the illumination sensitivity corresponding to each window image comprises:
calculating the average of the pixel values of a set number of randomly selected pixel points in each window image, thereby obtaining the pixel-value average corresponding to that window image;
determining, from the pixel-value average corresponding to each window image, the minimum RGB value and the maximum RGB value within that average;
and calculating the Euclidean distance between the minimum RGB value and the maximum RGB value corresponding to each window image, the distance being taken as the illumination sensitivity corresponding to that window image.
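A hedged Python sketch of this illumination-sensitivity computation follows; it assumes the window carries the RGB components of one product under n illumination angles as an (h, w, 3n) array, and that the minimum and maximum RGB values are selected by vector magnitude (an assumption, since the patent does not specify the ordering):

```python
import numpy as np

def illumination_sensitivity(window, n_angles, sample_size=64, seed=0):
    """Randomly sample pixels, average them, split the mean vector into
    n_angles RGB triples, and return the Euclidean distance between the
    triples of minimum and maximum magnitude as the sensitivity."""
    h, w, _ = window.shape
    rng = np.random.default_rng(seed)
    idx = rng.choice(h * w, size=min(sample_size, h * w), replace=False)
    flat = window.reshape(h * w, -1)
    mean_vec = flat[idx].mean(axis=0)        # average pixel value
    triples = mean_vec.reshape(n_angles, 3)  # one mean RGB value per angle
    norms = np.linalg.norm(triples, axis=1)
    rgb_min, rgb_max = triples[norms.argmin()], triples[norms.argmax()]
    return float(np.linalg.norm(rgb_max - rgb_min))

win = np.random.rand(16, 16, 9)  # toy window: 3 angles, RGB each
print(illumination_sensitivity(win, n_angles=3) >= 0.0)  # True
```

A window whose appearance barely changes across illumination angles yields a small distance, hence low sensitivity.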
Further, the step of determining the process complexity corresponding to each window image comprises:
determining, for each window image and each of the m plate-making files, the ratio of the number of foreground pixel points in the corresponding binary sub-image to the total number of pixel points in that binary sub-image;
and counting, for each window image, how many of these m ratios exceed a set ratio threshold, the count being taken as the process complexity corresponding to that window image.
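The process-complexity step above can be sketched as follows; the ratio threshold of 0.1 and the function name are illustrative assumptions:

```python
import numpy as np

def process_complexity(binary_subimages, ratio_threshold=0.1):
    """binary_subimages: list of m binary sub-images (one per
    plate-making file, foreground = nonzero) covering one window.
    The process complexity is the number of plate-making files whose
    foreground-pixel ratio in the window exceeds the threshold."""
    count = 0
    for sub in binary_subimages:
        ratio = np.count_nonzero(sub) / sub.size
        if ratio > ratio_threshold:
            count += 1
    return count

# Toy example: 3 plate-making files, one 4x4 binary sub-image each.
subs = [np.zeros((4, 4)), np.ones((4, 4)), np.eye(4)]
print(process_complexity(subs, ratio_threshold=0.1))  # 2
```

Intuitively, a window touched by many finishing processes (many plate files with substantial foreground there) gets a higher complexity score.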
Further, the step of determining the texture complexity corresponding to each window image includes:
determining a depth map and a color map corresponding to the RGBD sub-image of the target surface according to the RGBD sub-image of the target surface corresponding to each window image in the RGBD image of the target surface, and further determining a gray scale map corresponding to the color map;
determining a depth co-occurrence matrix corresponding to the depth map and a gray level co-occurrence matrix corresponding to the gray level map according to the depth map and the gray level map corresponding to each window image, and calculating an entropy value of the depth co-occurrence matrix and an entropy value of the gray level co-occurrence matrix;
and respectively calculating the sum of the entropy of the depth co-occurrence matrix corresponding to each window image and the entropy of the gray level co-occurrence matrix, so as to correspondingly obtain the texture complexity corresponding to each window image.
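The texture-complexity step can be sketched with a simple co-occurrence matrix; the quantization to 8 levels and the single horizontal offset are simplifying assumptions (the patent does not fix these parameters):

```python
import numpy as np

def cooccurrence_entropy(img, levels=8):
    """Entropy of a simple horizontal co-occurrence matrix (offset (0, 1))
    of an image quantized to `levels` levels; a minimal stand-in for the
    depth / gray-level co-occurrence matrices in the text."""
    q = np.floor(img / (img.max() + 1e-9) * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

def texture_complexity(depth_map, gray_map):
    """Texture complexity of a window: sum of the entropy of the depth
    co-occurrence matrix and that of the gray-level co-occurrence matrix."""
    return cooccurrence_entropy(depth_map) + cooccurrence_entropy(gray_map)

flat = np.full((8, 8), 5.0)       # uniform surface -> zero entropy
textured = np.random.rand(8, 8)
assert texture_complexity(flat, flat) == 0.0
```

A uniform window yields a single co-occurrence entry and therefore zero entropy, while richly textured windows score higher.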
Further, the number of attention heads corresponding to each window image is determined by the following formula:

h = h_min + Round(ω₁·S + ω₂·C + ω₃·T)

where h is the number of attention heads corresponding to the window image, h_min is the minimum number of attention heads, S, C and T are respectively the illumination sensitivity, the process complexity and the texture complexity corresponding to the window image, ω₁, ω₂ and ω₃ are the respective weights of the illumination sensitivity, the process complexity and the texture complexity, and Round(·) is a rounding function.
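The head-count rule described above can be sketched as follows; the additive combination of the weighted terms with the minimum head count, the weight values and h_min = 2 are illustrative assumptions:

```python
def num_heads(sensitivity, proc_complexity, tex_complexity,
              weights=(1.0, 1.0, 1.0), h_min=2):
    """Number of attention heads for a window: the minimum head count
    plus the rounded weighted sum of illumination sensitivity, process
    complexity and texture complexity."""
    w1, w2, w3 = weights
    score = w1 * sensitivity + w2 * proc_complexity + w3 * tex_complexity
    return h_min + round(score)

print(num_heads(0.4, 2, 1.3))  # 2 + round(3.7) = 6
```

The effect is that windows that are more light-sensitive, more process-laden or more textured are given more attention heads.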
Further, the step of determining surface anomaly points of the packaged printed product to be monitored comprises:
and according to the abnormal pixel binary image corresponding to the merged image, taking the pixel point with the pixel value of 0 in the abnormal pixel binary image as the surface abnormal point of the packaging printing product to be monitored.
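This final extraction step can be sketched directly; the array-based representation of the binary image is an assumption:

```python
import numpy as np

def surface_anomaly_points(binary_map):
    """Return the (row, col) coordinates of pixels whose value is 0 in
    the abnormal-pixel binary image; these are treated as the surface
    anomaly points of the packaged printed product to be monitored."""
    rows, cols = np.where(binary_map == 0)
    return list(zip(rows.tolist(), cols.tolist()))

bm = np.ones((3, 3), dtype=int)
bm[1, 2] = 0
print(surface_anomaly_points(bm))  # [(1, 2)]
```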
The invention has the following beneficial effects. The method acquires the surface RGBD images of the packaged printed product to be monitored and of the standard packaged printed product under different set illumination angles, obtains the preprocessed images, and inputs them into a pre-constructed packaging printing anomaly monitoring network; the embedding layer of the network constructs the corresponding merged image, the attention block layer and windowing pyramid of the network perform the corresponding data processing on the merged image to obtain an abnormal-pixel binary image, and the surface anomaly points of the packaged printed product to be monitored are finally obtained. When constructing the merged image, the method fully considers the color difference of the same point of the packaged printed product to be monitored under different illumination angles, as well as the difference between the surface RGBD images of the packaged printed product to be monitored and of the standard packaged printed product under the different set illumination angles. The features of each point of the packaged printed product to be monitored can therefore be extracted accurately, making the identification of the print quality more accurate and effectively solving the problem of inaccurate detection results caused by manual inspection of print quality.
Drawings
To illustrate the embodiments of the present invention and the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flow chart of an artificial intelligence based package printing anomaly monitoring method in an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a color change detection apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a package printing anomaly monitoring network according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of the last three attention blocks in the embodiment of the present invention;
fig. 5 is a schematic structural diagram of a first attention block in the embodiment of the present invention.
Detailed Description
To further explain the technical means adopted by the present invention to achieve its objects and their effects, the embodiments, structures, features and effects of the technical solutions of the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
Because the gloss standard of a material is not a fixed value, the gold-stamped areas of packaged printed products from the same production line can show different reflection effects and color distributions under the same light source and the same angle, even when they meet the production standard. Since a gold-stamped area changes color as the light-source angle changes, if the gold-stamped area appears white at a given moment and the base color of the paperboard is also white, the area is easily misidentified as a background area; in raised areas produced by the embossing process, diffuse reflection makes this phenomenon even more likely. Therefore, when identifying the print quality of a packaged printed product, the color difference of each pixel point in the image under different illumination angles must be considered, together with the difference between the pixel point and the corresponding pixel point in the standard image and the various information carried by its surrounding pixels, so that the identification is more accurate.
Based on the above analysis, in order to accurately identify the printing quality of the package printing product, the present embodiment provides an artificial intelligence-based package printing anomaly monitoring method, and a flowchart corresponding to the method is shown in fig. 1, and includes the following steps:
step S1: the method comprises the steps of obtaining surface RGBD images of a to-be-monitored packaging printing product and a standard packaging printing product under n different set illumination angles respectively, and further obtaining 2n preprocessed surface RGBD images.
In this embodiment, the packaged printed product to be monitored is a printed paperboard, and in order to obtain RGBD images of the surface of the packaged printed product to be monitored at different set illumination angles, the present embodiment provides a color change detection device, a schematic structural diagram of which is shown in fig. 2, and the color change detection device includes an RGBD camera 1, an industrial-grade light source 2, and a semicircular sliding track 3. Wherein, the RGBD camera 1 and the semicircular sliding rail 3 are both disposed above the package printed product transportation rail (the package printed product transportation rail is not shown in fig. 2), and the RGBD camera 1 and the semicircular sliding rail 3 are disposed in a staggered manner. The industrial grade light source 2 is arranged on the sliding track 3, the industrial grade light source 2 can slide along the sliding track 3, in the sliding process, the industrial grade light source 2 is always aligned with the central position of a packaging printed product to be monitored, which is placed on the packaging printed product transportation track, and the sliding speed of the industrial grade light source 2 is set according to the actual production requirement.
When the print quality of a packaged printed product needs to be monitored, the product is transported to a designated position on the transportation track and temporarily stopped there. While the product stays at the designated position, the RGBD camera 1 of the color change detection device is located directly above the center of the product, and the plane of the sliding track 3 of the device is perpendicular to the plane of the product. While the product remains at the designated position, the industrial-grade light source 2 slides from one end of the sliding track 3 to the other; throughout this movement the light source stays aligned with the center of the product, while the RGBD camera 1 photographs and samples the product at the set sampling frequency, thereby obtaining the surface RGBD images of the packaged printed product to be monitored under n different set illumination angles.
In addition, to subsequently identify the print quality of the packaged printed product to be monitored, i.e. to recognize abnormal printing points on it, the surface RGBD images of a standard packaged printed product (one from the same batch that has no print-quality problems) are acquired under the same n set illumination angles, following the method used for the product to be monitored. It should be noted that the color change detection device described above is merely one specific apparatus for obtaining the surface RGBD images of the packaged printed product to be monitored and of the standard packaged printed product under n different set illumination angles; as another embodiment, any other suitable prior-art device capable of obtaining these RGBD images may be used.
After obtaining the surface RGBD images of the packaged printed product to be monitored and of the standard packaged printed product under the n different set illumination angles, the surface RGBD images are preprocessed by image editing means such as expansion (padding) and cropping so that both values of their resolution become multiples of 64, which facilitates the subsequent windowing of the images. The resolution of each preprocessed surface RGBD image is denoted H × W.
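The resolution preprocessing can be sketched as follows; padding by edge replication is an illustrative choice (cropping, which the text also mentions, would work equally well):

```python
import numpy as np

def pad_to_multiple_of_64(img):
    """Pad an (H, W, ...) surface RGBD image so that both height and
    width become multiples of 64, as required for the windowing step.
    Edge replication is an assumption; the patent only requires that
    the resolution values end up as multiples of 64."""
    h, w = img.shape[:2]
    ph = (-h) % 64  # rows to add
    pw = (-w) % 64  # columns to add
    pad = [(0, ph), (0, pw)] + [(0, 0)] * (img.ndim - 2)
    return np.pad(img, pad, mode="edge")

img = np.random.rand(100, 150, 4)  # toy RGBD image
out = pad_to_multiple_of_64(img)
print(out.shape)  # (128, 192, 4)
```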
After the 2n preprocessed surface RGBD images corresponding to the package printed product to be monitored and the standard package printed product are obtained in step S1, in order to subsequently realize the print quality identification of the package printed product to be monitored, a package print anomaly monitoring network is constructed in this embodiment, which is substantially a Shine switch-Transformer (SST) neural network based on Swin Transformer, and the network has 2n preprocessed surface RGBD images as input and outputs an anomalous pixel binary image. The general structure of the network is shown in the dashed box part in fig. 3, and includes a windowing pyramid, and an embedding layer, an attention block layer, and a normalized exponential function classifier connected in series in sequence, and the following describes each structure of the network and the principle of each structure operation in detail.
Windowing pyramid:
in order to obtain the deep-level features of each pixel of multiple scales and multiple resolutions, a multi-scale and multi-resolution blocking mechanism is constructed on the basis of a windowing pyramid and four attention blocks. In the blocking mechanism, image formats corresponding to four attention blocks are set through a windowing pyramid, windowing processing is performed on an image according to the corresponding image formats, generally, downsampling is performed on the image firstly, namely unit fusion (Patch Merging), then upsampling is performed step by step, namely unit splitting (Patch dividing), and finally an abnormal pixel binary image is output.
Image format corresponding to attention block 1: to downsample the image, i.e. perform unit fusion (Patch Merging) on it, squares of 8 × 8 pixels are selected starting from the lower right of the image, so that every 64 pixels form one first-level unit (Patch), converting the image into one of lower resolution. The whole image is then selected as a single first-level window, i.e. the number of windows corresponding to attention block 1 is 1, and the number of first-level units contained in the first-level window is N, calculated as:

N = (H × W) / 64

where H and W are the two values of the resolution of the image input to attention block 1.
Attention block 2 corresponds to the image format: each class-A unit of attention block 1 is bisected horizontally and vertically, so that one class-A unit generates 4 square class-B units with a side length of 4 pixels. The class-A window of attention block 1 is likewise bisected horizontally and vertically, dividing the image into 4 class-B windows, each containing N class-B units; this ensures that the vector-group format is the same for the subsequent operation of every attention block.
Attention block 3 corresponds to the image format: each class-B unit of attention block 2 is bisected horizontally and vertically, so that one class-B unit generates 4 square class-C units with a side length of 2 pixels. Each class-B window of attention block 2 is likewise bisected horizontally and vertically, one class-B window generating 4 class-C windows; the image is now divided into 16 class-C windows, each again containing N class-C units, which keeps the vector-group format the same for the subsequent operation of every attention block.
Attention block 4 corresponds to the image format: the image is upsampled, i.e., unit splitting (Patch Splitting) is performed. Each class-C unit of attention block 3 is bisected horizontally and vertically, so that one class-C unit generates 4 single-pixel class-D units. Each class-C window is likewise bisected horizontally and vertically, one class-C window generating 4 class-D windows; the image is now divided into 64 class-D windows, each again containing N class-D units, which keeps the vector-group format the same for the subsequent operation of every attention block.
The windowing pyramid determines the image formats corresponding to the four attention blocks on the basis of the hierarchical blocking in Swin Transformer and blocks the image accordingly; when the four attention blocks subsequently process the image, the attention mechanism is applied only within the window in which each unit is located, which greatly improves computational efficiency. Whereas the hierarchical blocking in Swin Transformer proceeds from bottom to top, the windowing pyramid in this scheme proceeds from top to bottom, further reducing the subsequent amount of calculation and enhancing real-time performance. Moreover, since the final unit division is accurate to the level of a single pixel, the neural network ultimately achieves pixel-level recognition accuracy.
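The four image formats above can be summarized in a short sketch. The following Python snippet (illustrative only; `windowing_pyramid` and its return layout are not named in this scheme) derives, for each attention block, the unit side length, the number of windows and the number N of units per window, and checks that N is identical across the four blocks:

```python
def windowing_pyramid(width, height):
    """Return per-attention-block (unit_side_px, num_windows, units_per_window).

    Blocks 1..4 use unit sides 8, 4, 2, 1 pixels and 1, 4, 16, 64 windows,
    as described in the text; N stays constant across blocks."""
    layout = []
    for level, unit_side in enumerate([8, 4, 2, 1]):   # attention blocks 1..4
        num_windows = 4 ** level                        # 1, 4, 16, 64 windows
        units_total = (width // unit_side) * (height // unit_side)
        units_per_window = units_total // num_windows
        layout.append((unit_side, num_windows, units_per_window))
    return layout

layout = windowing_pyramid(256, 256)
# Every block keeps the same number N of units per window:
assert len({upw for _, _, upw in layout}) == 1
for side, wins, upw in layout:
    print(f"unit {side}x{side}px, {wins} windows, N={upw} units/window")
```

For a 256 × 256 image this yields N = (256 × 256)/64 = 1024 units per window at every level, matching the formula above.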
Embedding layer:
for the n preprocessed surface RGBD images corresponding to the package printed product to be monitored, arranged in time order, each pixel of the i-th image has 4 pixel components (R, G, B and a depth value D), and the depth values D of pixels at the same position are the same across all n images. An Embedding operation is performed along the time order, i.e., the temporal relation between the images is reduced to features of the pixels, yielding color variation vectors arranged (since the depth value is shared) as the concatenation of the per-moment RGB components followed by the depth value:

X = [R(1), G(1), B(1), …, R(n), G(n), B(n), D]

where X is the color variation vector corresponding to the n pixels at the same position in the n preprocessed surface RGBD images of the package printed product to be monitored.
For the n preprocessed surface RGBD images corresponding to the standard package printed product, arranged in the same time order, the color variation vector corresponding to the n pixels at the same position is obtained in the same manner. Denoting by X the color variation vector of the package printed product to be monitored and by Y that of the standard package printed product, a pixel vector

P = [[CLS], X, Y]

is constructed, where [CLS] is an initialized classification vector obtained by random initialization; the random initialization process belongs to the prior art and is not described again here. In this manner, by combining the n preprocessed surface RGBD images of the package printed product to be monitored with those of the standard package printed product, the pixel vector P corresponding to the pixels at the same position in the 2n preprocessed surface RGBD images is obtained. A merged image corresponding to the package printed product to be monitored and the standard package printed product is then constructed, with the same size as the 2n preprocessed surface RGBD images; each pixel in the merged image corresponds uniquely to the pixel vector P of the pixels at the same position in the 2n preprocessed surface RGBD images, and this pixel vector serves as the minimum unit of the subsequent attention-mechanism calculation.
The n preprocessed surface RGBD images of the package printed product to be monitored are combined with those of the standard package printed product into a single merged image for the following reason: even when glossiness meets the requirement, the color variation vector of the package printed product to be monitored is not fixed, so the direct difference between its color variation vector and that of the standard package printed product cannot serve as a basis for traditional template matching.
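As a minimal sketch of the embedding layer, assuming the color variation vector simply concatenates the per-moment RGB components with the shared depth value, and that the [CLS] vector has an arbitrary illustrative length, the merged image can be built with NumPy as follows (all names and sizes here are illustrative, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)
n, H, W = 3, 4, 4            # n moments, toy resolution (illustrative)

def make_sequence():
    """n RGBD frames; the depth channel is shared across moments, per the text."""
    imgs = rng.random((n, H, W, 4)).astype(np.float32)
    imgs[..., 3] = imgs[0, ..., 3]                   # copy depth of frame 0
    return imgs

monitored, standard = make_sequence(), make_sequence()

def color_variation_vectors(imgs):
    # Concatenate RGB over time, then append the shared depth value:
    rgb = imgs[..., :3].transpose(1, 2, 0, 3).reshape(H, W, 3 * n)
    return np.concatenate([rgb, imgs[0, ..., 3:4]], axis=-1)   # (H, W, 3n+1)

X = color_variation_vectors(monitored)               # to-be-monitored product
Y = color_variation_vectors(standard)                # standard product
cls = rng.random((H, W, 4)).astype(np.float32)       # [CLS]; length is a guess

# Merged image: each pixel holds the pixel vector [CLS, X, Y]
merged = np.concatenate([cls, X, Y], axis=-1)
print(merged.shape)                                  # (4, 4, 24) for n = 3
```

Each pixel of `merged` is then the minimum unit consumed by the attention blocks.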
Attention block layer:
the attention block layer comprises four attention blocks connected in sequence. As shown in fig. 4, attention block 2, attention block 3 and attention block 4 have the same neural network structure and are each divided into a front sub-block and a rear sub-block, a dashed box representing one sub-block; as shown in fig. 5, attention block 1 is obtained from the structure of fig. 4 by deleting the window analysis in the front sub-block and the entire rear sub-block. For the four attention blocks in series, i.e., attention blocks 1 to 4, each accumulates the features extracted by the previous one, while the window and unit sizes change from block to block. Compared with the attention block in the conventional Swin Transformer, the adaptive multi-head attention mechanism in the four attention blocks is the core of this scheme: the number of heads for each window image is determined adaptively. All other structures except the window analysis are identical to those in the conventional Swin Transformer; in particular, constructing the tensor matrix mainly involves LayerNorm together with skip addition, i.e., adding the data before and after the attention mechanism, which realizes the idea of residual connection and belongs to known technology. The window analysis and the adaptive multi-head attention mechanism are described in detail below.
Window analysis:
for attention block 1, no window analysis is required, since its corresponding image format is a single window containing all the features of the package printed product. For attention blocks 2, 3 and 4, whose image formats comprise multiple windows (i.e., their input images are divided into multiple windows), the content of some windows is only the background region of the package printed product with no complex printing process; local feature analysis is therefore performed on each window to reduce the amount of calculation for windows with uncomplicated content. The specific implementation is as follows:
firstly, the platemaking files, such as gold stamping, UV coating and embossing, of package printed products in the same batch as the one to be monitored are obtained; there are m platemaking files, i.e., the package printed product to be monitored involves m processes. The m platemaking files are black-and-white binary images: the foreground is black with pixel value 1, the background is white with pixel value 0, and their length and width are the same as those of the preprocessed surface RGBD images.
In any one of the attention block 2, the attention block 3, and the attention block 4, the following three indexes are analyzed for each window image:
(1) Illumination sensitivity

Randomly extract p pixels from the window image (this embodiment sets p = 4), compute the average vector of the pixel vectors of these p pixels, and find in the average vector the minimum RGB value, denoted (Rmin, Gmin, Bmin), and the maximum RGB value, denoted (Rmax, Gmax, Bmax). The Euclidean distance L between the minimum and maximum RGB values is:

L = √((Rmax − Rmin)² + (Gmax − Gmin)² + (Bmax − Bmin)²)

The Euclidean distance L is taken as the illumination sensitivity of the window image. Since many printed materials show different colors in the image as the illumination angle changes, the maximum difference of the RGB values over time measures how strongly illumination changes all the pixels in the window image: the smaller L is, the less a change of illumination angle can alter the RGB values of the window's pixels, and the less attention the window needs; the larger L is, the more a change of illumination angle alters the RGB values of all the window's pixels, and the more attention the window image needs subsequently.
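The illumination sensitivity computation can be sketched as follows. Selecting the extreme RGB triples by brightness is an assumption here, since the text does not specify exactly how the minimum and maximum RGB values are located within the average vector; the function name is illustrative:

```python
import numpy as np

def illumination_sensitivity(window_rgb_over_time, p=4, seed=0):
    """window_rgb_over_time: (n, Hw, Ww, 3) RGB values of one window over
    n moments. Returns the Euclidean distance L between the extreme RGB
    triples of the averaged sample, per the index described above."""
    n, Hw, Ww, _ = window_rgb_over_time.shape
    rng = np.random.default_rng(seed)
    ys = rng.integers(0, Hw, p)                      # p random pixels
    xs = rng.integers(0, Ww, p)
    # Average the p pixel vectors -> one RGB triple per moment:
    avg = window_rgb_over_time[:, ys, xs, :].mean(axis=1)    # (n, 3)
    # Extreme RGB triples over time (chosen by brightness; an assumption):
    brightness = avg.sum(axis=1)
    rgb_min = avg[brightness.argmin()]
    rgb_max = avg[brightness.argmax()]
    return float(np.linalg.norm(rgb_max - rgb_min))  # Euclidean distance L
```

A window whose color never changes across moments yields L = 0 and thus receives minimal attention.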
(2) Process complexity

According to the binary images corresponding to the m platemaking files, the part of each binary image belonging to the window image is segmented out, and the proportion of the foreground pixels of each platemaking file within the window image to all pixels of the window image is calculated. Platemaking files whose proportion is smaller than a set ratio threshold (a fixed value set in this embodiment) are rejected, and the number of remaining platemaking files is the process complexity corresponding to the window image. The process complexity measures how many process types are superimposed in the window image; processes occupying only a small area are subsequently discarded, which reduces the amount of calculation.
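A minimal sketch of the process complexity index, assuming an illustrative ratio threshold of 0.1 (the embodiment's actual threshold value is not given here, and the function name is made up for illustration):

```python
import numpy as np

def process_complexity(plate_windows, threshold=0.1):
    """plate_windows: list of m binary (Hw, Ww) arrays, each the part of one
    platemaking file that falls inside this window (foreground = 1).
    Counts the platemaking files whose foreground-area ratio reaches the
    threshold; that count is the window's process complexity."""
    count = 0
    for plate in plate_windows:
        ratio = plate.sum() / plate.size
        if ratio >= threshold:           # keep processes with enough area
            count += 1
    return count

# Example: a window covered by one large foil area and one tiny speck.
big = np.zeros((10, 10)); big[:5] = 1      # 50% foreground -> kept
tiny = np.zeros((10, 10)); tiny[0, 0] = 1  # 1% foreground  -> rejected
print(process_complexity([big, tiny]))     # -> 1
```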
(3) Texture complexity
For the n preprocessed surface RGBD images corresponding to the package printed product to be monitored, the preprocessed surface RGBD image at the middle moment is selected as the target surface RGBD image, and the part of it falling within the window is the target surface RGBD sub-image. The depth values and RGB values in the target surface RGBD sub-image are separated to obtain a depth map and a color map, and the color map is converted into a gray map using a weighted average method based on human eye perception.
The entropy Ed of the depth co-occurrence matrix of the depth map is obtained using known techniques, and similarly the entropy Eg of the gray-level co-occurrence matrix of the gray map. Entropy is a measure of randomness: it is largest when the elements of the co-occurrence matrix are maximally random, i.e., when almost all values in the co-occurrence matrix are nearly equal and its elements are dispersedly distributed. Entropy therefore characterizes the degree of non-uniformity or complexity of the texture in an image: the greater the entropy, the more non-uniform or complex the texture. The sum of the two entropy values,

E = Ed + Eg

is taken as the texture complexity corresponding to the window image.
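The texture complexity index can be sketched as follows; the simplified single-offset co-occurrence matrix below is a stand-in for the "known techniques" the text refers to, and the quantization level count is illustrative:

```python
import numpy as np

def cooccurrence_entropy(img, levels=8):
    """Entropy of a horizontal-offset co-occurrence matrix of img (values
    assumed in [0, 1]); a simplified stand-in for the known technique."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)   # quantize
    cm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):    # offset (0, 1)
        cm[a, b] += 1
    p = cm / cm.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def texture_complexity(depth_map, gray_map):
    # E = Ed + Eg, the sum of the two co-occurrence entropies
    return cooccurrence_entropy(depth_map) + cooccurrence_entropy(gray_map)

rng = np.random.default_rng(0)
flat = np.zeros((16, 16))                    # uniform surface: entropy 0
noisy = rng.random((16, 16))                 # irregular texture
print(texture_complexity(flat, flat))        # -> 0.0
print(texture_complexity(noisy, noisy) > 0)  # -> True
```

A flat background window thus scores zero while a textured window scores high, steering attention toward complex regions.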
Constructing a tensor matrix:
for any one of attention blocks 1 to 4, when each window image is processed, the pixel vectors of all pixels in each unit of the window image are spliced in left-to-right, then top-to-bottom order to obtain the feature tensor of that unit. The originally row-arranged feature tensors of the units are then transposed and spliced unit by unit, left to right and top to bottom, to obtain the tensor matrix Z, which is the unit of attention.
It should be noted that the above description only briefly describes the process of constructing the tensor matrix, and since the specific implementation process of constructing the tensor matrix is exactly the same as the specific implementation process of constructing the tensor matrix in the Swin Transformer in the prior art, detailed description is not given here.
Self-adaptive multi-head attention mechanism:
the general purpose of the attention mechanism is as follows: for each unit of each window image, its neighborhood consists of all units in the window image in which it is located, i.e., every unit of that window image (including the unit itself) is its neighbor. The cosine similarity between the feature tensor of the unit and the feature tensor of each neighbor is calculated to obtain weights, and a weighted sum is then performed, updating the original feature tensor, i.e., updating each pixel vector in the unit; the classification vector [CLS] in the pixel vector thereby records the local information of the window in which the unit is located. In short, after the attention mechanism, the output is an updated pixel vector that perceives the information in the window image.
Multi-head means the following: before the attention mechanism, the tensor matrix Z is duplicated as Q, K and V; then h groups of dimensionality-reduction matrices are multiplied with Q, K and V respectively, and the products are used as the input of the Attention function, yielding h groups of outputs:

head_i = Attention(Q·W_i^Q, K·W_i^K, V·W_i^V)

where head_i is the i-th group output of the Attention function, and W_i^Q, W_i^K and W_i^V are the i-th group of dimensionality-reduction matrices.

After the h group outputs are obtained, they are spliced with the Concat operation and restored with the dimension-raising matrix W^O, yielding the matrix obtained by multi-head splicing:

MultiHead(Q, K, V) = Concat(head_1, …, head_h)·W^O
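Since the text states the mechanism follows the standard formulation, it can be sketched in NumPy as follows; the random projection matrices here stand in for the learned dimensionality-reduction and dimension-raising matrices:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(Z, h, d_head, seed=0):
    """Z: (N, d) tensor matrix of one window; h heads of width d_head.
    Q, K, V are copies of Z projected by per-head reduction matrices;
    the concatenated heads are restored to width d by W^O."""
    rng = np.random.default_rng(seed)
    N, d = Z.shape
    heads = []
    for _ in range(h):
        Wq, Wk, Wv = (rng.standard_normal((d, d_head)) / np.sqrt(d)
                      for _ in range(3))
        Q, K, V = Z @ Wq, Z @ Wk, Z @ Wv
        A = softmax(Q @ K.T / np.sqrt(d_head))        # attention weights
        heads.append(A @ V)                            # head_i
    Wo = rng.standard_normal((h * d_head, d)) / np.sqrt(h * d_head)
    return np.concatenate(heads, axis=-1) @ Wo         # Concat(...) @ W^O

out = multi_head_attention(np.ones((5, 16)) * 0.1, h=4, d_head=8)
print(out.shape)   # (5, 16)
```

The output has the same shape as Z, so attention blocks can be chained as described.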
it should be noted that, the implementation process of the adaptive multi-head attention mechanism of the above attention block is basically the same as the implementation process of the multi-head attention mechanism of the attention block in the Swin Transformer in the prior art, and the differences are that: in the conventional implementation process of the multi-head attention mechanism of the attention block in the Swin Transformer, the number of heads of the multi-head, that is, the group number h of the dimensionality reduction matrix, is a preset fixed value, while in the implementation process of the above adaptive multi-head attention mechanism of the attention block, the number of heads of the multi-head, that is, the group number h of the dimensionality reduction matrix, is determined by the result of the window analysis, and the corresponding calculation formula is as follows:
h = h_min + round(ω1·L + ω2·G + ω3·E)

where h is the number of heads corresponding to each window image; h_min is the minimum number of heads (a fixed value set in this embodiment); L, G and E are the illumination sensitivity, process complexity and texture complexity corresponding to each window image, respectively; ω1, ω2 and ω3 are the weights of the illumination sensitivity, process complexity and texture complexity, respectively (set in this embodiment); and round(·) is a rounding function.
The adaptive multi-head attention mechanism amounts to feature extraction in multiple directions. Generally, the more heads there are, the more complete the feature extraction, but too many heads increase the amount of calculation; the method therefore adaptively selects the number of heads according to the characteristics of each window image, effectively reducing the amount of calculation while ensuring the accuracy of feature extraction.
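Under the reconstruction of the head-count formula given above, the adaptive choice can be sketched as follows; the weights and minimum head count used here are placeholders, since the embodiment's actual values are not recoverable from the text:

```python
def adaptive_head_count(L, G, E, weights=(1.0, 1.0, 1.0), h_min=2):
    """Reconstruction of h = h_min + round(w1*L + w2*G + w3*E).
    L: illumination sensitivity, G: process complexity, E: texture
    complexity of one window image; weights and h_min are placeholders."""
    w1, w2, w3 = weights
    return h_min + round(w1 * L + w2 * G + w3 * E)

# A plain background window (low indexes) vs a complex foil-stamped window:
print(adaptive_head_count(0.1, 0, 0.3))   # -> 2  (few heads, cheap)
print(adaptive_head_count(0.9, 3, 2.4))   # -> 8  (more heads, richer features)
```

Simple windows thus get the minimum number of heads while complex windows get proportionally more, matching the stated accuracy/cost trade-off.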
The attention mechanism of the sliding window is as follows:
the sliding window attention mechanism enables each unit in each window image to change a perception domain, so that global information is perceived, and the sliding window is subjected to unified window analysis, so that the calculated amount is further reduced. Since the specific implementation process of the sliding window attention mechanism in the attention block is completely the same as the specific implementation process of the sliding window attention mechanism in the attention block in the Swin Transformer in the prior art, details are not described here.
Multilayer perceptron MLP:
since the specific working process of the multi-layer sensor MLP in the attention block is completely the same as that of the multi-layer sensor MLP in the attention block in the Swin Transformer in the prior art, the detailed description is omitted here.
Normalized exponential function classifier:
the input of the normalized exponential function (Softmax) classifier is the merged image output by attention block 4; according to the classification vector [CLS] of the pixel vector corresponding to each pixel in the merged image, the classifier outputs the anomalous-pixel binary image corresponding to the merged image. In the anomalous-pixel binary image, a pixel with value 0 is an anomalous point and a pixel with value 1 is a normal point.
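A minimal sketch of this classifier, assuming a hypothetical linear head over each pixel's [CLS] vector followed by the normalized exponential (Softmax) function; the weights, names and vector length below are illustrative, not from the patent:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def classify_pixels(cls_vectors, W, b):
    """cls_vectors: (H, W_img, d) [CLS] vectors per pixel of the merged
    image; W: (d, 2), b: (2,) -- a hypothetical linear head. Returns the
    anomalous-pixel binary image: 0 = anomalous pixel, 1 = normal pixel,
    as stated in the text."""
    logits = cls_vectors @ W + b                       # (H, W_img, 2)
    probs = np.apply_along_axis(softmax, -1, logits)   # per-pixel Softmax
    return probs.argmax(axis=-1).astype(np.uint8)

# Toy example: weights that push every pixel toward class 0 (anomalous).
anomaly_map = classify_pixels(np.ones((2, 2, 3)),
                              np.array([[1.0, 0.0]] * 3), np.zeros(2))
print(anomaly_map)
```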
Step S2: inputting the 2n preprocessed surface RGBD images into a packaging printing abnormity monitoring network, carrying out data identification on an embedded layer of the packaging printing abnormity monitoring network according to the 2n preprocessed surface RGBD images, determining a combined image corresponding to a packaging printing product to be monitored and a standard packaging printing product, and sending the combined image to an attention block layer of the packaging printing abnormity monitoring network.
According to the above description of the structures and the working principle of the structures in the package printing anomaly monitoring network, it can be seen that, after the 2n preprocessed surface RGBD images obtained in step S1 are input into the package printing anomaly monitoring network, the embedded layer of the package printing anomaly monitoring network determines corresponding combined images according to the 2n preprocessed surface RGBD images, and the implementation process is as follows:
step S21: and determining each pixel component of each pixel point in the n preprocessed surface RGBD images corresponding to the packaged printed product to be monitored according to the n preprocessed surface RGBD images corresponding to the packaged printed product to be monitored.
Step S22: respectively splicing the pixel components of n pixel points at the same position in n preprocessed surface RGBD images corresponding to the packaged and printed product to be monitored, and correspondingly obtaining the color variation vector corresponding to the n pixel points at the same position.
Step S23: and determining each pixel component of each pixel point in the n preprocessed surface RGBD images corresponding to the standard packaging printed product according to the n preprocessed surface RGBD images corresponding to the standard packaging printed product.
Step S24: respectively splicing the pixel components of n pixel points at the same position in n preprocessed surface RGBD images corresponding to a standard packaging printing product, and correspondingly obtaining the color variation vector corresponding to the n pixel points at the same position.
Step S25: splicing the color variation vectors corresponding to the n pixel points at the same position in the n preprocessed surface RGBD images corresponding to the packaged printed product to be monitored, the color variation vectors corresponding to the n pixel points at the same position in the n preprocessed surface RGBD images corresponding to the standard packaged printed product, and the initialized classification vectors corresponding to the same position to correspondingly obtain the pixel vectors corresponding to the pixel points at the same position in the 2n preprocessed surface RGBD images.
Step S26: and constructing a combined image corresponding to the packaging printed product to be monitored and the standard packaging printed product, wherein the pixel value of each pixel point in the combined image is a pixel vector corresponding to the pixel point at the same position in the 2n preprocessed surface RGBD images.
The specific process of determining the corresponding merged image in steps S21-S26 has already been described in detail under the working principle of the embedding layer of the package printing anomaly monitoring network and is not repeated here. After the embedding layer of the package printing anomaly monitoring network obtains the corresponding merged image, it sends the merged image to the attention block layer of the network.
Step S3: the attention block layer of the package printing abnormity monitoring network comprises all attention blocks which are connected in sequence, each attention block forwards a combined image sent by an embedded layer or a previous attention block to a windowing pyramid of the package printing abnormity monitoring network, the windowing pyramid carries out windowing on the combined image according to an image format corresponding to the corresponding attention block to obtain each window image, and each window image is sent to the corresponding attention block; each attention block carries out data processing on each received window image to obtain a combined image after data processing, the combined image after data processing is sent to the next attention block, and the combined image after data processing is sent to the classifier of the package printing abnormity monitoring network by the last attention block.
As can be seen from the specific structure of the attention block layer in the package printing abnormality monitoring network, in this embodiment, the attention block layer sequentially connects four attention blocks, namely, an attention block 1, an attention block 2, an attention block 3, and an attention block 4, where the attention block 1 receives the merged image sent by the embedded layer and sends the merged image to the windowing pyramid of the package printing abnormality monitoring network, and the windowing pyramid performs windowing on the merged image according to the image format corresponding to the attention block 1 to obtain each window image, and sends each window image to the attention block 1. The attention block 1 processes the received window images to obtain a processed combined image, and sends the processed combined image to the attention block 2. The attention block 2 receives the combined image sent by the attention block 1, and sends the combined image to a windowing pyramid of a package printing anomaly monitoring network, the windowing pyramid performs windowing processing on the combined image according to an image format corresponding to the attention block 2 to obtain each window image, and each window image is sent to the attention block 2. The attention block 2 processes the received window images to obtain a processed combined image, and sends the processed combined image to the attention block 3. And the attention block 3 and the attention block 4 operate in the same manner in sequence, finally, the attention block 4 obtains a final processed combined image, and the final processed combined image is sent to the classifier of the package printing abnormity monitoring network.
According to the image formats that the windowing pyramid of the package printing anomaly monitoring network sets for the attention blocks in the attention block layer, the attention block layer comprises four attention blocks connected in sequence, whose image formats are as follows: the first attention block corresponds to 1 window, the second to 4 windows, the third to 16 windows and the fourth to 64 windows, and the number of units in each window is the same for all four attention blocks, namely N, calculated as:

N = (W × H) / (8 × 8)

where N is the number of units in each window corresponding to the four attention blocks, and W and H are the width and height, in pixels, of the merged image.
For each attention block in an attention block layer in a package printing anomaly monitoring network, the attention block processes each window image received by the attention block, and the processing comprises the following steps:
step S31: determining the illumination sensitivity corresponding to each window image according to the pixel value of each pixel point in each window image, wherein the implementation steps comprise:
step S311: and calculating the average value of the pixel values of the set number of randomly selected pixels in each window image according to the pixel value of each pixel in each window image, thereby correspondingly obtaining the average value of the pixel values corresponding to each window image.
Step S312: and determining the minimum RGB value and the maximum RGB value in the average values of the pixel values according to the average values of the pixel values corresponding to the window images, so as to correspondingly obtain the minimum RGB value and the maximum RGB value corresponding to the window images.
Step S313: and calculating the Euclidean distance between the minimum RGB value and the maximum RGB value corresponding to each window image according to the minimum RGB value and the maximum RGB value corresponding to each window image, so as to correspondingly obtain the illumination sensitivity corresponding to each window image.
Step S32: acquiring binary images corresponding to m platemaking files corresponding to the packaging printing product to be monitored, and determining binary sub-images corresponding to the window images in the binary images corresponding to the m platemaking files.
Step S33: determining the process complexity corresponding to each window image according to the number of foreground pixel points of each window image in the binary sub-image corresponding to the binary image corresponding to the m platemaking files, wherein the implementation steps comprise:
step S331: and determining the ratio of the number of foreground pixel points of each window image in the binary sub-image corresponding to the binary image corresponding to the m plate-making files to all the pixel points in the corresponding binary sub-image according to the number of the foreground pixel points of each window image in the binary sub-image corresponding to the m plate-making files.
Step S332: and determining the number of the ratio values which are larger than a set ratio threshold value and correspond to each window image according to the ratio of the number of foreground pixel points in the binary sub-image corresponding to each window image in the binary image corresponding to the m plate-making files to all the pixel points in the corresponding binary sub-image, so as to correspondingly obtain the process complexity corresponding to each window image.
Step S34: and acquiring target surface RGBD images in the n preprocessed surface RGBD images corresponding to the standard packaging printing product, and determining corresponding target surface RGBD sub-images of the window images in the target surface RGBD images by combining the window images.
Step S35: determine the texture complexity corresponding to each window image according to the target surface RGBD sub-image corresponding to each window image in the target surface RGBD image, wherein the implementation steps include:
Step S351: determine the depth map and the color map corresponding to each target surface RGBD sub-image, and further determine the gray-scale map corresponding to the color map.
Step S352: determine the depth co-occurrence matrix corresponding to the depth map and the gray-level co-occurrence matrix corresponding to the gray-scale map for each window image, and calculate the entropy value of the depth co-occurrence matrix and the entropy value of the gray-level co-occurrence matrix.
Step S353: for each window image, calculate the sum of the entropy value of its depth co-occurrence matrix and the entropy value of its gray-level co-occurrence matrix, thereby correspondingly obtaining the texture complexity of each window image.
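Steps S351-S353 can be sketched as below. The patent does not fix the co-occurrence offset or the number of quantization levels, so the horizontal distance-1 offset and 8 levels used here are assumptions:

```python
import numpy as np

def cooccurrence_entropy(img, levels=8):
    # Quantize the map into `levels` bins, build a co-occurrence matrix for
    # horizontally adjacent pixels (distance 1, angle 0), and return its entropy.
    img = np.asarray(img, dtype=np.float64)
    span = img.max() - img.min()
    if span == 0:
        q = np.zeros(img.shape, dtype=int)
    else:
        q = np.minimum(((img - img.min()) / span * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1.0)
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))

def texture_complexity(depth_map, gray_map):
    # Step S353: texture complexity = entropy of the depth co-occurrence
    # matrix + entropy of the gray-level co-occurrence matrix.
    return cooccurrence_entropy(depth_map) + cooccurrence_entropy(gray_map)
```

A flat window (constant depth and gray value) gives zero texture complexity, while a richly textured window gives a larger value.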
Step S36: determining the number of multiple heads corresponding to each window image according to the illumination sensitivity, the process complexity and the texture complexity corresponding to each window image, wherein the corresponding calculation formula is as follows:
h = h_min + round(w1·S + w2·C + w3·T)
wherein h is the number of heads of the multi-head attention corresponding to each window image, h_min is the minimum value of the number of heads, S, C and T are respectively the illumination sensitivity, the process complexity and the texture complexity corresponding to each window image, w1, w2 and w3 are respectively the weights of the illumination sensitivity, the process complexity and the texture complexity, and round(·) is a rounding function.
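The published equation renders only as an image placeholder, but the surrounding text describes a minimum head count plus a rounded weighted sum of the three window features. A minimal sketch of that relationship follows; the default weights and minimum head count are placeholders, since the patent does not fix their values:

```python
def num_heads(light_sens, proc_complex, tex_complex,
              w1=0.4, w2=0.3, w3=0.3, h_min=2):
    """Step S36 (reconstructed): number of attention heads for one window.

    The weighted sum of illumination sensitivity, process complexity and
    texture complexity is rounded and added to the minimum head count.
    """
    return h_min + round(w1 * light_sens + w2 * proc_complex + w3 * tex_complex)
```

A featureless window thus gets h_min heads, and more complex windows get proportionally more, which is how the network trades accuracy against computation per window.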
Step S37: each attention block performs data processing on each window image according to the number of the multiple heads corresponding to that window image.
The process by which each attention block in the above steps S31-S37 determines the number of multiple heads for each window image, and then performs data processing on the corresponding window image with that number of heads, has already been described in detail in the working principles of the modules of the package printing abnormity monitoring network, such as window analysis, tensor matrix construction and the adaptive multi-head attention mechanism, and is not repeated here.
Step S4: the classifier of the package printing abnormity monitoring network receives the data-processed merged image sent by the last attention block, and determines the abnormal pixel binary image of the packaging printing product to be monitored according to the data-processed merged image.
Since the specific implementation process of determining the abnormal pixel binary image by the classifier of the package printing abnormality monitoring network is described in detail in the working principle of the normalized exponential function classifier in the package printing abnormality monitoring network, details are not described here.
Step S5: determine the surface abnormal points of the packaging printing product to be monitored according to the abnormal pixel binary image of the packaging printing product to be monitored.
Specifically, in the abnormal pixel binary image corresponding to the merged image, each pixel point with a pixel value of 0 is taken as a surface abnormal point of the packaging printing product to be monitored, and each pixel point with a pixel value of 1 is taken as a surface normal point, thereby finally realizing the printing quality monitoring of the packaging printing product to be monitored.
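As a minimal illustration (assuming the abnormal pixel binary image is available as a NumPy array), the surface abnormal point coordinates can be read off directly:

```python
import numpy as np

def surface_anomaly_points(binary_img):
    """Step S5 sketch: pixels with value 0 are surface abnormal points.

    Returns an array of their (row, col) coordinates so the printing anomaly
    can be located on the product surface.
    """
    return np.argwhere(np.asarray(binary_img) == 0)
```

This yields the positions an operator or expert would inspect on the physical print.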
It should be noted that, in the above artificial-intelligence-based package printing abnormity monitoring method, the package printing abnormity monitoring network used has been trained in advance, and the corresponding training process includes:
Packaging printing products from the same batch as the packaging printing product to be monitored are selected. Following the manner of step S1, a color change detection device is used to acquire a large number of surface RGBD images of packaging printing products with abnormal points under the n different set illumination angles, a large number of surface RGBD images of packaging printing products without abnormal points under the n different set illumination angles, and the surface RGBD images of the standard packaging printing product under the n different set illumination angles. The n surface RGBD images under the different set illumination angles corresponding to one packaging printing product, with or without abnormal points, form one image data sample. A data enhancement technology is used to increase the number of image data samples corresponding to packaging printing products with abnormal points, so that the recognition accuracy can adapt to various conditions. All of the above image data samples together constitute the image data set.
From the image data set, 80% of the image data samples are randomly selected as the training set and the remaining 20% as the test set. Combined with the surface RGBD images of the standard packaging printing product under the n different set illumination angles, and after the corresponding image preprocessing, the package printing abnormity monitoring network is trained using the cross entropy loss function and the Adam optimizer, finally obtaining a package printing abnormity monitoring network that can accurately monitor the abnormal pixel points of packaging printing products. Since the specific training process of the package printing abnormity monitoring network belongs to the common knowledge of those skilled in the art, it is not described in detail here.
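The 80/20 random split described above can be sketched as follows; the function name and the fixed seed are illustrative additions (the patent only specifies the proportions), and the cross-entropy/Adam training loop itself is the standard one found in any deep learning framework, so it is omitted here:

```python
import random

def split_dataset(samples, train_frac=0.8, seed=0):
    """Randomly split image data samples into training and test sets.

    Returns (training_set, test_set) with the given proportion; the seed
    makes the split reproducible across runs.
    """
    samples = list(samples)
    rng = random.Random(seed)
    rng.shuffle(samples)
    cut = int(len(samples) * train_frac)
    return samples[:cut], samples[cut:]
```

Every sample lands in exactly one of the two sets, so no data leaks from the test set into training.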
After the training of the package printing abnormity monitoring network is completed, the 2n preprocessed surface RGBD images corresponding to the packaging printing product to be monitored and the standard packaging printing product, obtained in step S1, are input into the network, and the network outputs the abnormal pixel binary image, so that an expert can quickly locate the position of the printing abnormity, facilitating subsequent processing.
According to the method, the surface RGBD images of the packaging printing product to be monitored and of the standard packaging printing product under different set illumination angles are acquired and preprocessed, the preprocessed images are input into the pre-constructed package printing abnormity monitoring network, the embedded layer of the network constructs the corresponding merged image, and the merged image is processed by the attention block layer and the windowing pyramid of the network to obtain the abnormal pixel binary image, from which the surface abnormal points of the packaging printing product to be monitored are finally obtained. When the merged image is constructed, the method fully considers the color differences of the same point of the packaging printing product to be monitored under different illumination angles, and also considers the differences between the surface RGBD images of the packaging printing product to be monitored and those of the standard packaging printing product under the different set illumination angles, so that the characteristics of each point of the packaging printing product to be monitored can be accurately extracted and the recognition is more accurate, effectively solving the problem of inaccurate detection results caused by manual printing quality inspection of packaging printing products. Secondly, the windowing pyramid of the package printing abnormity monitoring network divides the merged image into blocks from top to bottom, which further reduces the subsequent calculation amount and enhances the real-time performance.
Thirdly, each attention block in the packaging printing abnormity monitoring network adaptively determines the number of multiple heads corresponding to each window image by performing feature analysis on each window image, so that the calculation amount of the network can be effectively reduced while the accuracy of feature extraction is ensured.
It should be noted that: the above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A package printing abnormity monitoring method based on artificial intelligence is characterized by comprising the following steps:
acquiring surface RGBD images of a to-be-monitored packaging printing product and a standard packaging printing product under n different set illumination angles respectively, and further acquiring 2n preprocessed surface RGBD images;
inputting 2n preprocessed surface RGBD images into a packaging printing abnormity monitoring network, wherein an embedded layer of the packaging printing abnormity monitoring network carries out data identification according to the 2n preprocessed surface RGBD images, determines a combined image corresponding to a packaging printing product to be monitored and a standard packaging printing product, and sends the combined image to an attention block layer of the packaging printing abnormity monitoring network;
the attention block layer of the package printing abnormity monitoring network comprises all attention blocks which are connected in sequence, each attention block forwards a combined image sent by an embedded layer or a previous attention block to a windowing pyramid of the package printing abnormity monitoring network, the windowing pyramid carries out windowing on the combined image according to an image format corresponding to the corresponding attention block to obtain each window image, and each window image is sent to the corresponding attention block; each attention block carries out data processing on each received window image to obtain a combined image after data processing, the combined image after data processing is sent to the next attention block, and the combined image after data processing is sent to the classifier of the package printing abnormity monitoring network by the last attention block;
the classifier of the package printing abnormity monitoring network receives the merged image after data processing sent by the last attention block, and determines an abnormal pixel binary image of a package printing product to be monitored according to the merged image after data processing;
and determining surface abnormal points of the packaging printing product to be monitored according to the abnormal pixel binary image of the packaging printing product to be monitored.
2. The artificial intelligence based packaging printing anomaly monitoring method according to claim 1, wherein the step of determining a merged image corresponding to the packaging printed product to be monitored and the standard packaging printed product comprises:
determining each pixel component of each pixel point in the n preprocessed surface RGBD images corresponding to the packaged printed product to be monitored according to the n preprocessed surface RGBD images corresponding to the packaged printed product to be monitored;
respectively splicing pixel components of n pixel points at the same position in n preprocessed surface RGBD images corresponding to a packaging printing product to be monitored, and correspondingly obtaining color variation vectors corresponding to the n pixel points at the same position;
determining each pixel component of each pixel point in the n preprocessed surface RGBD images corresponding to the standard packaging printed product according to the n preprocessed surface RGBD images corresponding to the standard packaging printed product;
respectively splicing pixel components of n pixel points at the same position in n preprocessed surface RGBD images corresponding to a standard packaging printing product, and correspondingly obtaining color variation vectors corresponding to the n pixel points at the same position;
splicing color variation vectors corresponding to n pixel points at the same position in n preprocessed surface RGBD images corresponding to the packaged printed product to be monitored, color variation vectors corresponding to n pixel points at the same position in n preprocessed surface RGBD images corresponding to the standard packaged printed product, and initialized classification vectors corresponding to the same position to correspondingly obtain pixel vectors corresponding to pixel points at the same position in 2n preprocessed surface RGBD images;
and constructing a combined image corresponding to the packaging printed product to be monitored and the standard packaging printed product, wherein the pixel value of each pixel point in the combined image is a pixel vector corresponding to the pixel point at the same position in the 2n preprocessed surface RGBD images.
3. The method for monitoring the abnormal package printing based on the artificial intelligence as claimed in claim 2, wherein the attention block layer of the network for monitoring the abnormal package printing comprises four attention blocks connected in sequence, and the image formats corresponding to the four attention blocks are as follows: the number of windows corresponding to the first attention block is 1, the number of windows corresponding to the second attention block is 4, the number of windows corresponding to the third attention block is 16, the number of windows corresponding to the fourth attention block is 64, and the number of cells in each window corresponding to the four attention blocks is the same.
4. The method for monitoring the printing anomaly of the artificial intelligence-based package as claimed in claim 3, wherein the calculation formula corresponding to the number of the units in each window corresponding to the four attention blocks is as follows:
N = (H × W) / 64
wherein N is the number of cells in each window corresponding to the four attention blocks, and H and W are respectively the height and the width of the resolution of the merged image.
5. The artificial intelligence based package printing anomaly monitoring method according to claim 3, wherein each attention block performs data processing on each window image received by each attention block, comprising:
determining the illumination sensitivity corresponding to each window image according to the pixel value of each pixel point in each window image;
acquiring binary images corresponding to m platemaking files corresponding to a packaging printing product to be monitored, and determining binary sub-images corresponding to each window image in the binary images corresponding to the m platemaking files;
determining the process complexity corresponding to each window image according to the number of foreground pixel points of each window image in the binary sub-image corresponding to the binary image corresponding to the m plate-making files;
acquiring target surface RGBD images in n preprocessed surface RGBD images corresponding to a standard packaging printing product, and determining corresponding target surface RGBD sub-images of the window images in the target surface RGBD images by combining the window images;
determining the texture complexity corresponding to each window image according to the corresponding target surface RGBD sub-image of each window image in the target surface RGBD image;
determining the number of multiple heads corresponding to each window image according to the illumination sensitivity, the process complexity and the texture complexity corresponding to each window image;
and carrying out data processing on each window image according to the number of the multiple heads corresponding to each window image by each attention block.
6. The artificial intelligence based packaging printing anomaly monitoring method according to claim 5, wherein the step of determining the illumination sensitivity corresponding to each window image comprises:
calculating the average value of the pixel values of a set number of randomly selected pixel points in each window image according to the pixel value of each pixel point in each window image, thereby correspondingly obtaining the average value of the pixel values corresponding to each window image;
determining the minimum RGB value and the maximum RGB value in the average values of the pixel values according to the average values of the pixel values corresponding to the window images, so as to correspondingly obtain the minimum RGB value and the maximum RGB value corresponding to the window images;
and calculating the Euclidean distance between the minimum RGB value and the maximum RGB value corresponding to each window image according to the minimum RGB value and the maximum RGB value corresponding to each window image, so as to correspondingly obtain the illumination sensitivity corresponding to each window image.
7. The artificial intelligence based packaging printing anomaly monitoring method according to claim 5, wherein the step of determining a process complexity corresponding to each window image includes:
determining, for each window image, the ratio of the number of foreground pixel points in its binary sub-image to the total number of pixel points in that binary sub-image, in the binary image corresponding to each of the m plate-making files;
and determining, for each window image, the number of these ratios that are larger than a set ratio threshold, thereby correspondingly obtaining the process complexity corresponding to each window image.
8. The artificial intelligence based packaging printing anomaly monitoring method according to claim 5, wherein the step of determining the texture complexity corresponding to each window image comprises:
determining a depth map and a color map corresponding to the RGBD sub-image of the target surface according to the RGBD sub-image of the target surface corresponding to each window image in the RGBD image of the target surface, and further determining a gray scale map corresponding to the color map;
determining a depth co-occurrence matrix corresponding to the depth map and a gray level co-occurrence matrix corresponding to the gray level map according to the depth map and the gray level map corresponding to each window image, and calculating an entropy value of the depth co-occurrence matrix and an entropy value of the gray level co-occurrence matrix;
and respectively calculating the sum of the entropy value of the depth co-occurrence matrix and the entropy value of the gray level co-occurrence matrix corresponding to each window image, thereby correspondingly obtaining the texture complexity corresponding to each window image.
9. The method for monitoring the printing anomaly of the package based on the artificial intelligence as claimed in claim 5, wherein the calculation formula for determining the number of the multiple heads corresponding to each window image is as follows:
h = h_min + round(w1·S + w2·C + w3·T)
wherein h is the number of heads of the multi-head attention corresponding to each window image, h_min is the minimum value of the number of heads, S, C and T are respectively the illumination sensitivity, the process complexity and the texture complexity corresponding to each window image, w1, w2 and w3 are respectively the weights of the illumination sensitivity, the process complexity and the texture complexity, and round(·) is a rounding function.
10. The artificial intelligence based packaging printing anomaly monitoring method according to claim 1, wherein the step of determining surface anomaly points of the packaging printed product to be monitored comprises:
and according to the abnormal pixel binary image corresponding to the merged image, taking the pixel point with the pixel value of 0 in the abnormal pixel binary image as the surface abnormal point of the packaging printing product to be monitored.
CN202210763023.2A 2022-07-01 2022-07-01 Artificial intelligence-based packaging printing abnormity monitoring method Active CN114841998B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210763023.2A CN114841998B (en) 2022-07-01 2022-07-01 Artificial intelligence-based packaging printing abnormity monitoring method

Publications (2)

Publication Number Publication Date
CN114841998A true CN114841998A (en) 2022-08-02
CN114841998B CN114841998B (en) 2022-09-06

Family

ID=82574009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210763023.2A Active CN114841998B (en) 2022-07-01 2022-07-01 Artificial intelligence-based packaging printing abnormity monitoring method

Country Status (1)

Country Link
CN (1) CN114841998B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115953367A (en) * 2022-12-16 2023-04-11 华联机械集团有限公司 Sealing machine abnormity detection method and system based on visual identification

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102205712A (en) * 2010-03-24 2011-10-05 精工爱普生株式会社 Printing device and printing method
CN111242896A (en) * 2019-12-31 2020-06-05 电子科技大学 Color printing label defect detection and quality rating method
CN111524117A (en) * 2020-04-20 2020-08-11 南京航空航天大学 Tunnel surface defect detection method based on characteristic pyramid network
US20210374939A1 (en) * 2020-05-28 2021-12-02 Konica Minolta, Inc. Inspection device, image forming system, inspection method, and inspection program
CN114170208A (en) * 2021-12-14 2022-03-11 武汉福旺家包装有限公司 Paper product defect detection method based on artificial intelligence


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HU FANGSHANG et al.: "Research on image registration methods based on printing defect detection", Optical Technique *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115953367A (en) * 2022-12-16 2023-04-11 华联机械集团有限公司 Sealing machine abnormity detection method and system based on visual identification
CN115953367B (en) * 2022-12-16 2023-07-25 华联机械集团有限公司 Sealing machine abnormality detection method and system based on visual recognition

Also Published As

Publication number Publication date
CN114841998B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
Guan et al. A steel surface defect recognition algorithm based on improved deep learning network model using feature visualization and quality evaluation
CN110059741B (en) Image recognition method based on semantic capsule fusion network
CN106056155B (en) Superpixel segmentation method based on boundary information fusion
CN106815604B (en) Method for viewing points detecting based on fusion of multi-layer information
CN111160249A (en) Multi-class target detection method of optical remote sensing image based on cross-scale feature fusion
CN108491849A (en) Hyperspectral image classification method based on three-dimensional dense connection convolutional neural networks
CN105809205B (en) A kind of classification method and its system of high spectrum image
WO2021057069A1 (en) Computer-executable vehicle loss assessment method and apparatus
CN113688821B (en) OCR text recognition method based on deep learning
CN110287806A (en) A kind of traffic sign recognition method based on improvement SSD network
CN114037671B (en) Microscopic hyperspectral leukocyte detection method based on improvement FASTER RCNN
CN106355579A (en) Defect detecting method of cigarette carton surface wrinkles
CN112365451A (en) Method, device and equipment for determining image quality grade and computer readable medium
CN114841998B (en) Artificial intelligence-based packaging printing abnormity monitoring method
CN118037091A (en) Intelligent report working quality detection method and system based on computer vision technology
CN117372332A (en) Fabric flaw detection method based on improved YOLOv7 model
CN104933415B (en) A kind of visual remote sensing image cloud sector detection method in real time
CN117523394A (en) SAR vessel detection method based on aggregation characteristic enhancement network
CN110910497B (en) Method and system for realizing augmented reality map
CN112597798A (en) Method for identifying authenticity of commodity by using neural network
CN115909078A (en) Ship classification method based on HRRP and SAR data feature level fusion
CN112884741B (en) Printing apparent defect detection method based on image similarity comparison
KR20230147130A (en) Methods and apparatus for ranking images in a collection using image segmentation and image analysis
Tang et al. A new ore image segmentation method based on Swin-Unet
CN117876842A (en) Industrial product anomaly detection method and system based on generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant