CN115471498A - Multi-angle waterproof monitoring shoe making machine and method for rain shoe production - Google Patents

Multi-angle waterproof monitoring shoe making machine and method for rain shoe production Download PDF

Info

Publication number
CN115471498A
Authority
CN
China
Prior art keywords
feature map
branch
training
map
perception
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211236175.3A
Other languages
Chinese (zh)
Inventor
郑海涛
郑博文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou City Huawei Shoes Technology Co ltd
Original Assignee
Wenzhou City Huawei Shoes Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou City Huawei Shoes Technology Co ltd filed Critical Wenzhou City Huawei Shoes Technology Co ltd
Priority to CN202211236175.3A priority Critical patent/CN115471498A/en
Publication of CN115471498A publication Critical patent/CN115471498A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/92Dynamic range modification of images or parts thereof based on global image properties
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Testing Or Calibration Of Command Recording Devices (AREA)

Abstract

The application discloses a shoe making machine and method for multi-angle waterproof monitoring in rain shoe production. The method first calculates time-domain enhancement maps of a first sound signal, collected from the rain shoe to be detected before soaking, and a second sound signal, collected after the shoe has been soaked and wiped dry, to obtain a first time-domain enhancement map and a second time-domain enhancement map. The two enhancement maps are passed through a twin network model to obtain a first feature map and a second feature map, which are respectively passed through a multi-branch perceptual domain module to obtain a first multi-scale perception feature map and a second multi-scale perception feature map. A difference feature map between the two multi-scale perception feature maps is then calculated and finally passed through a classifier to obtain a classification result indicating whether the waterproof performance of the rain shoe to be detected meets a predetermined standard. In this way, detection time can be shortened and the accuracy of rain shoe waterproof-performance detection improved.

Description

Multi-angle waterproof monitoring shoe making machine and method for rain shoe production
Technical Field
The present application relates to the field of intelligent monitoring technology, and more particularly, to a shoe making machine and method for multi-angle waterproof monitoring of rain shoe production.
Background
Rain shoes are waterproof footwear. They are mostly worn in rainy weather, and some workers also wear them when working in places with substantial standing water. Since the primary function of rain shoes is to keep water out, every pair must undergo a waterproof test after manufacture.
At present, waterproof testing of rain shoes is mostly done by hand: a worker holds the shoe, places it in a water tank, and then looks inside to check whether water has accumulated. This detection method is inconvenient to observe, and its result is not intuitive.
Through research, the inventors of the present application found that rain shoes of poor quality often pass the factory waterproof test undetected, because water penetrates into an inferior shoe only over a long period and is difficult to observe within a short test time.
Therefore, an optimized rain shoe waterproof monitoring solution is desired.
Disclosure of Invention
The present application is proposed to solve the above technical problems. Embodiments of the application provide a shoe making machine and method for multi-angle waterproof monitoring in rain shoe production. The method first calculates time-domain enhancement maps of a first sound signal, collected from the rain shoe to be detected before soaking, and a second sound signal, collected after the shoe has been soaked and wiped dry, to obtain a first time-domain enhancement map and a second time-domain enhancement map. The two enhancement maps are passed through a twin network model to obtain a first feature map and a second feature map, which are respectively passed through a multi-branch perceptual domain module to obtain a first multi-scale perception feature map and a second multi-scale perception feature map. A difference feature map between the two multi-scale perception feature maps is then calculated and finally passed through a classifier to obtain a classification result indicating whether the waterproof performance of the rain shoe to be detected meets a predetermined standard. In this way, detection time can be shortened and the accuracy of rain shoe waterproof-performance detection improved.
According to an aspect of the present application, there is provided a shoe-making machine for multi-angle waterproof monitoring of rain shoe production, comprising:
a detection signal acquisition unit, configured to acquire a first sound signal of the rain shoe to be detected before soaking and a second sound signal of the rain shoe to be detected after it has been soaked and wiped dry;
a time domain converting unit, configured to calculate time-domain enhancement maps of the first sound signal and the second sound signal to obtain a first time-domain enhancement map and a second time-domain enhancement map;
a twin coding unit, configured to pass the first time-domain enhancement map and the second time-domain enhancement map through a twin network model including a first convolutional neural network and a second convolutional neural network to obtain a first feature map and a second feature map, where the first convolutional neural network and the second convolutional neural network have the same network structure;
a multi-scale perception unit, configured to pass the first feature map and the second feature map respectively through a multi-branch perceptual domain module to obtain a first multi-scale perception feature map and a second multi-scale perception feature map;
a difference evaluation unit, configured to calculate a difference feature map between the first multi-scale perception feature map and the second multi-scale perception feature map; and
a waterproof monitoring result generating unit, configured to pass the difference feature map through a classifier to obtain a classification result, the classification result indicating whether the waterproof performance of the rain shoe to be detected meets a predetermined standard.
According to another aspect of the present application, there is provided a multi-angle waterproof monitoring method for rain shoe production, which includes:
acquiring a first sound signal of the rain shoe to be detected before soaking and a second sound signal of the rain shoe to be detected after it has been soaked and wiped dry;
calculating time-domain enhancement maps of the first sound signal and the second sound signal to obtain a first time-domain enhancement map and a second time-domain enhancement map;
passing the first time-domain enhancement map and the second time-domain enhancement map through a twin network model including a first convolutional neural network and a second convolutional neural network to obtain a first feature map and a second feature map, where the first convolutional neural network and the second convolutional neural network have the same network structure;
passing the first feature map and the second feature map respectively through a multi-branch perceptual domain module to obtain a first multi-scale perception feature map and a second multi-scale perception feature map;
calculating a difference feature map between the first multi-scale perception feature map and the second multi-scale perception feature map; and
passing the difference feature map through a classifier to obtain a classification result, the classification result indicating whether the waterproof performance of the rain shoe to be detected meets a predetermined standard.
Compared with the prior art, the present application provides a shoe making machine and method for multi-angle waterproof monitoring in rain shoe production. The method first calculates time-domain enhancement maps of a first sound signal, collected from the rain shoe to be detected before soaking, and a second sound signal, collected after the shoe has been soaked and wiped dry, to obtain a first time-domain enhancement map and a second time-domain enhancement map. The two enhancement maps are passed through a twin network model to obtain a first feature map and a second feature map, which are respectively passed through a multi-branch perceptual domain module to obtain a first multi-scale perception feature map and a second multi-scale perception feature map. A difference feature map between the two multi-scale perception feature maps is then calculated and finally passed through a classifier to obtain a classification result indicating whether the waterproof performance of the rain shoe to be detected meets a predetermined standard. In this way, detection time can be shortened and the accuracy of rain shoe waterproof-performance detection improved.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 illustrates an application scenario diagram of a shoe making machine for multi-angle waterproof monitoring of rain shoe production according to an embodiment of the present application.
FIG. 2 illustrates a block diagram schematic view of a shoe-making machine for multi-angle waterproofing monitoring of rain shoe production according to an embodiment of the present application.
FIG. 3 illustrates a block diagram schematic view of a twin coding unit in a shoe making machine for multi-angle waterproof monitoring of rain shoe production according to an embodiment of the present application.
Fig. 4 illustrates a block diagram schematic view of a multi-scale sensing unit in a shoe making machine for multi-angle waterproofing monitoring of rain shoe production according to an embodiment of the present application.
FIG. 5 illustrates a block diagram schematic view of a multi-branch sensing subunit in a shoe making machine for multi-angle waterproofing monitoring of rain shoe production according to an embodiment of the present application.
FIG. 6 illustrates a block diagram schematic view of a training module further included in a shoe making machine for multi-angle waterproof monitoring of rain shoe production according to an embodiment of the present application.
FIG. 7 illustrates a flow chart of a multi-angle waterproofing monitoring method for rain shoe production according to an embodiment of the present application.
Fig. 8 illustrates a schematic diagram of a system architecture for a multi-angle waterproofing monitoring method for rain shoe production according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Overview of a scene
As stated above, waterproof testing of rain shoes is at present mostly done by hand: a worker holds the shoe, places it in a water tank, and then looks inside to check whether water has accumulated. This detection method is inconvenient to observe, and its result is not intuitive.
Through research, the inventors of the present application found that rain shoes of poor quality often pass the factory waterproof test undetected, because water penetrates into an inferior shoe only over a long period and is difficult to observe within a short test time. Therefore, an optimized rain shoe waterproof monitoring solution is desired.
Accordingly, the applicant of the present application found that if a rain shoe is of inferior quality, some water molecules will penetrate into it after it is soaked in a water tank, even if the water on its surface is subsequently wiped dry, and this changes the acoustic characteristics of the shoe. Whether the waterproof performance of a rain shoe meets requirements can therefore be judged from the acoustic characteristics of the soaked and wiped-dry shoe. This avoids the long soaking needed to observe the interior of the shoe in traditional waterproof detection, shortening detection time while also improving the accuracy of waterproof-performance detection.
Specifically, in the technical solution of the present application, a deep-learning-based artificial intelligence detection technique is adopted to separately extract features from the sound signal of the rain shoe to be detected before soaking and from the sound signal of the same shoe after it has been soaked and wiped dry, and the waterproof performance of the shoe is evaluated from the difference between the two. That is, the waterproof-performance detection scheme is built on a differential comparison between the acoustic features of the soaked shoe and those of the unsoaked shoe.
Specifically, in the technical solution of the present application, a first sound signal of the rain shoe to be detected before soaking and a second sound signal of the shoe after soaking and wiping dry are first acquired. When the waterproof performance of the shoe is judged from its acoustic characteristics, the collected sound signals are weak and subject to interference from external environmental noise and other factors, so the sound signals need to be further enhanced in the time domain. That is, time-domain enhancement maps of the first sound signal and the second sound signal are calculated to obtain a first time-domain enhancement map and a second time-domain enhancement map.
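The application does not spell out how a one-dimensional sound signal becomes a two-dimensional time-domain enhancement map, so the following is only a minimal sketch under one plausible reading: overlapping frames of the waveform are stacked into a 2-D array and each frame is peak-normalized so that weak recordings are amplified without leaving the time domain. The frame length and hop size are illustrative assumptions.

```python
import numpy as np

def time_domain_enhancement_map(signal: np.ndarray,
                                frame_len: int = 256,
                                hop: int = 128) -> np.ndarray:
    """Stack overlapping frames into a 2-D map and peak-normalize each frame.

    This is an assumed realization of the "time-domain enhancement map";
    the filing itself does not define the enhancement operation.
    """
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len]
                       for i in range(n_frames)])
    peaks = np.abs(frames).max(axis=1, keepdims=True) + 1e-8
    return frames / peaks  # each row is one amplified (enhanced) frame
```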
Then, considering the excellent performance of convolutional neural network models in mining features from images, the technical solution of the present application uses convolutional neural networks to mine deep features from the first and second time-domain enhancement maps. Specifically, the first time-domain enhancement map and the second time-domain enhancement map are processed by a twin network model including a first convolutional neural network and a second convolutional neural network, which extracts the high-dimensional feature distributions of the local features in the two enhancement maps to obtain a first feature map and a second feature map, where the first convolutional neural network and the second convolutional neural network have the same network structure.
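A minimal sketch of such a twin (Siamese-style) encoder is given below; the layer widths and depths are illustrative assumptions, since the filing only requires that the two branches share the same network structure.

```python
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """One branch: each layer applies convolution, pooling, and activation."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.MaxPool2d(2), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

class TwinEncoder(nn.Module):
    """Two structurally identical CNNs encode the two enhancement maps."""
    def __init__(self):
        super().__init__()
        self.branch1 = ConvBranch()  # encodes the first enhancement map
        self.branch2 = ConvBranch()  # same structure for the second map

    def forward(self, x1, x2):
        return self.branch1(x1), self.branch2(x2)
```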
It should be understood that convolution and pooling in a standard deep convolutional neural network are downsampling operations: while they enlarge the receptive field (perceptual domain), they also reduce the scale of the feature map, which causes information loss. Moreover, because a standard deep convolutional neural network uses convolution kernels of a fixed size, it cannot learn multi-scale feature information during feature extraction, and the gridding effect causes loss of local information.
To solve the above problems, the technical solution of the present application uses a multi-branch perceptual domain module to perform feature enhancement on the first feature map and the second feature map produced by the conventional convolutional-neural-network feature extractor, so as to obtain a first multi-scale perception feature map and a second multi-scale perception feature map. Compared with a conventional convolutional neural network model, the multi-branch perceptual domain module has the following advantages: 1) it replaces the conventional convolution kernel with dilated (hole) convolution, whose dilation rate parameter gives the kernel a larger receptive field at the same parameter count; because the receptive field is expanded by dilation rather than downsampling, information loss is avoided and the input and output scales of the feature map remain the same (see the sketch after this list);
2) it arranges dilated convolutions with different dilation rates in parallel, so the network can learn multi-scale feature information and the local-information loss caused by the gridding effect is overcome. This structure also increases the amount of small-target information available for detection, which solves the problem that small-target information cannot be reconstructed once a pooling layer has been applied in a conventional convolutional neural network.
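The following minimal sketch illustrates the receptive-field property claimed in point 1): a 3×3 convolution with dilation rate d and padding d keeps the feature-map size while its receptive field grows to (2d+1)×(2d+1). The channel count and the dilation rates tried here are illustrative assumptions.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)  # a feature map of assumed size
for d in (1, 2, 3):
    conv = nn.Conv2d(64, 64, kernel_size=3, dilation=d, padding=d)
    y = conv(x)
    # Input and output scales stay the same; no downsampling is needed.
    assert y.shape == x.shape
```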
Further, a difference feature map between the first multi-scale perception feature map and the second multi-scale perception feature map is calculated. In a specific example, the position-wise difference between the two multi-scale perception feature maps may be calculated to obtain the difference feature map. In this way, the waterproof performance of the rain shoe to be detected can be evaluated by classifying the difference between the acoustic features of the shoe before soaking and those of the shoe after soaking and wiping dry.
In particular, in the technical solution of the present application, since the difference feature map that serves as the classification feature map is obtained as a difference between the first multi-scale perception feature map and the second multi-scale perception feature map, the gradient of the loss function, when computed and back-propagated from the classifier into the model during training, passes separately through the two cascaded branches of convolutional neural network and multi-branch perceptual domain module that produce the two multi-scale perception feature maps. Abnormal gradient branching at this point may then dissolve (digest) the feature patterns extracted by those cascaded branches.
Therefore, in addition to the classification loss function, a classification-pattern digestion-inhibition loss function is further introduced to counteract the digestion of the extracted feature patterns, specifically expressed as:

$$\mathcal{L}_{\mathrm{inhibit}} = \frac{\left\| \exp\left( M_1 \ominus M_2 \right) \right\|_F}{\left\| \exp\left( V_1 \ominus V_2 \right) \right\|_2^{2}}$$

where $V_1$ and $V_2$ are the feature vectors obtained by unrolling the first multi-scale perception feature map and the second multi-scale perception feature map, $M_1$ and $M_2$ are the weight matrices the classifier applies to $V_1$ and $V_2$, $\ominus$ denotes position-wise subtraction, $\exp(\cdot)$ denotes the position-wise exponential of a vector or matrix, $\|\cdot\|_2^2$ denotes the square of the two-norm of a vector, and $\|\cdot\|_F$ denotes the Frobenius norm of a matrix.
That is, the classification-pattern digestion-inhibition loss function pushes the pseudo-difference encoded in the classifier weights toward the real feature-distribution difference between the first and second multi-scale perception feature maps, i.e., toward the feature distribution of the difference feature map. This ensures that the directional derivative is regularized near the gradient branch point during back-propagation, that is, the gradient is re-weighted between the two cascaded branches of convolutional neural network and multi-branch perceptual domain module. The digestion of classification patterns is thereby inhibited, the ability of those cascaded branches to extract classification features is improved, and the accuracy of the classification result of the difference feature map is improved accordingly. In this way, the waterproof performance of rain shoes can be evaluated from the acoustic characteristics of the shoe to be detected, avoiding the overlong detection time that long soaking causes in the traditional process while also improving the accuracy of waterproof-performance detection.
Based on this, the present application provides a shoe making machine for multi-angle waterproof monitoring of rain shoe production, which includes: a detection signal acquisition unit, configured to acquire a first sound signal of the rain shoe to be detected before soaking and a second sound signal of the rain shoe to be detected after it has been soaked and wiped dry; a time domain converting unit, configured to calculate time-domain enhancement maps of the first sound signal and the second sound signal to obtain a first time-domain enhancement map and a second time-domain enhancement map; a twin coding unit, configured to pass the first time-domain enhancement map and the second time-domain enhancement map through a twin network model including a first convolutional neural network and a second convolutional neural network to obtain a first feature map and a second feature map, where the first convolutional neural network and the second convolutional neural network have the same network structure; a multi-scale perception unit, configured to pass the first feature map and the second feature map respectively through a multi-branch perceptual domain module to obtain a first multi-scale perception feature map and a second multi-scale perception feature map; a difference evaluation unit, configured to calculate a difference feature map between the first multi-scale perception feature map and the second multi-scale perception feature map; and a waterproof monitoring result generating unit, configured to pass the difference feature map through a classifier to obtain a classification result, the classification result indicating whether the waterproof performance of the rain shoe to be detected meets a predetermined standard.
Fig. 1 illustrates an application scenario of the shoe making machine for multi-angle waterproof monitoring of rain shoe production according to an embodiment of the present application. As shown in fig. 1, in this application scenario, a first sound signal (e.g., D1 in fig. 1) of the rain shoe to be detected before soaking (e.g., F1 in fig. 1) and a second sound signal (e.g., D2 in fig. 1) of the rain shoe to be detected after soaking and wiping dry (e.g., F2 in fig. 1) are acquired. The two sound signals are then input to a server (e.g., S in fig. 1) on which a multi-angle waterproof monitoring algorithm for rain shoe production is deployed; the server processes the first and second sound signals with this algorithm to generate a classification result indicating whether the waterproof performance of the rain shoe to be detected meets a predetermined standard.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary shoe-making machine
FIG. 2 illustrates a block diagram schematic view of the shoe making machine for multi-angle waterproof monitoring of rain shoe production according to an embodiment of the present application. As shown in fig. 2, a shoe making machine 100 for multi-angle waterproof monitoring of rain shoe production according to an embodiment of the present application includes: a detection signal acquisition unit 110, configured to acquire a first sound signal of the rain shoe to be detected before soaking and a second sound signal of the rain shoe to be detected after it has been soaked and wiped dry; a time domain converting unit 120, configured to calculate time-domain enhancement maps of the first sound signal and the second sound signal to obtain a first time-domain enhancement map and a second time-domain enhancement map; a twin encoding unit 130, configured to pass the first time-domain enhancement map and the second time-domain enhancement map through a twin network model including a first convolutional neural network and a second convolutional neural network to obtain a first feature map and a second feature map, where the first convolutional neural network and the second convolutional neural network have the same network structure; a multi-scale perception unit 140, configured to pass the first feature map and the second feature map respectively through a multi-branch perceptual domain module to obtain a first multi-scale perception feature map and a second multi-scale perception feature map; a difference evaluation unit 150, configured to calculate a difference feature map between the first multi-scale perception feature map and the second multi-scale perception feature map; and a waterproof monitoring result generating unit 160, configured to pass the difference feature map through a classifier to obtain a classification result, the classification result indicating whether the waterproof performance of the rain shoe to be detected meets a predetermined standard.
More specifically, in the embodiment of the present application, the detection signal acquisition unit 110 is configured to acquire a first sound signal of the rain shoe to be detected before soaking and a second sound signal of the rain shoe to be detected after it has been soaked and wiped dry. If a rain shoe is of relatively poor quality, some water molecules will penetrate into it after it is soaked in a water tank, even if the water stains on its surface are subsequently wiped dry, causing a change in the acoustic characteristics of the shoe. Whether the waterproof performance of a rain shoe meets requirements can therefore be judged from the acoustic characteristics of the soaked and wiped-dry shoe. That is, the waterproof-performance detection scheme is built on a differential comparison between the acoustic features of the soaked shoe and those of the unsoaked shoe, which avoids the long soaking needed to observe the interior of the shoe in traditional waterproof detection.
More specifically, in this embodiment, the time domain converting unit 120 is configured to calculate time-domain enhancement maps of the first sound signal and the second sound signal to obtain a first time-domain enhancement map and a second time-domain enhancement map. When the waterproof performance of the shoe is detected through its acoustic features, the collected sound signals are weak and subject to interference from external environmental noise and other factors, so the sound signals need to be further enhanced in the time domain, which makes the differences more evident in subsequent feature mining and comparison.
More specifically, in this embodiment of the present application, the twin encoding unit 130 is configured to pass the first time-domain enhancement map and the second time-domain enhancement map through a twin network model including a first convolutional neural network and a second convolutional neural network to obtain a first feature map and a second feature map, where the first convolutional neural network and the second convolutional neural network have the same network structure. Convolutional neural network models perform excellently in mining features from images; therefore, in the technical solution of the present application, two convolutional neural networks with the same network structure are used to mine deep features from the first and second time-domain enhancement maps respectively.
Accordingly, as shown in fig. 3, in a specific example, the twin encoding unit 130 includes: a first convolutional encoding subunit 131, configured to perform convolution, pooling, and nonlinear activation on the input data in the forward pass of each layer of the first convolutional neural network, so that the last layer of the first convolutional neural network outputs the first feature map; and a second convolutional encoding subunit 132, configured to perform convolution, pooling, and nonlinear activation on the input data in the forward pass of each layer of the second convolutional neural network, so that the last layer of the second convolutional neural network outputs the second feature map.
It should be understood that convolution and pooling in a standard deep convolutional neural network are downsampling operations: while they enlarge the receptive field (perceptual domain), they also reduce the scale of the feature map, which causes information loss. Moreover, because a standard deep convolutional neural network uses convolution kernels of a fixed size, it cannot learn multi-scale feature information during feature extraction, and the gridding effect causes loss of local information. To solve the above problems, the technical solution of the present application uses a multi-branch perceptual domain module to perform feature enhancement on the first feature map and the second feature map produced by the conventional convolutional-neural-network feature extractor, so as to obtain a first multi-scale perception feature map and a second multi-scale perception feature map.
More specifically, in this embodiment of the present application, the multi-scale perception unit 140 is configured to pass the first feature map and the second feature map respectively through the multi-branch perceptual domain module to obtain the first multi-scale perception feature map and the second multi-scale perception feature map. Compared with a conventional convolutional neural network model, the multi-branch perceptual domain module has the following advantages: 1) it replaces the conventional convolution kernel with dilated (hole) convolution, whose dilation rate parameter gives the kernel a larger receptive field at the same parameter count; because the receptive field is expanded by dilation rather than downsampling, information loss is avoided and the input and output scales of the feature map remain the same; 2) it arranges dilated convolutions with different dilation rates in parallel, so the network can learn multi-scale feature information and the local-information loss caused by the gridding effect is overcome. This structure also increases the amount of small-target information available for detection, which solves the problem that small-target information cannot be reconstructed once a pooling layer has been applied in a conventional convolutional neural network.
Accordingly, as shown in fig. 4, in a specific example, the multi-scale perception unit 140 includes: a first point convolution subunit 141, configured to input the first feature map and the second feature map respectively into a first point convolution layer of the multi-branch perceptual domain module to obtain a first convolution feature map and a second convolution feature map; a multi-branch sensing subunit 142, configured to pass the first convolution feature map and the second convolution feature map respectively through a first branch perceptual domain unit, a second branch perceptual domain unit, and a third branch perceptual domain unit of the multi-branch perceptual domain module to obtain a first, second, and third branch perception feature map from the first convolution feature map and a fourth, fifth, and sixth branch perception feature map from the second convolution feature map, where the first, second, and third branch perceptual domain units are arranged in parallel; a merging subunit 143, configured to concatenate the first, second, and third branch perception feature maps to obtain a first fused perception feature map, and to concatenate the fourth, fifth, and sixth branch perception feature maps to obtain a second fused perception feature map; a second point convolution subunit 144, configured to input the first fused perception feature map and the second fused perception feature map respectively into a second point convolution layer of the multi-branch perceptual domain module to obtain a first channel-corrected fused perception feature map and a second channel-corrected fused perception feature map; and a residual concatenation subunit 145, configured to calculate the position-wise sum of the first channel-corrected fused perception feature map and the first convolution feature map to obtain the first multi-scale perception feature map, and the position-wise sum of the second channel-corrected fused perception feature map and the second convolution feature map to obtain the second multi-scale perception feature map.
Accordingly, as shown in fig. 5, in a specific example, the multi-branch sensing subunit 142 includes: a first one-dimensional convolution encoding secondary subunit 14201, configured to pass the first convolution feature map through a first one-dimensional convolution layer of the first branch perceptual domain unit to obtain a first one-dimensional convolution feature map; a first dilated convolution encoding secondary subunit 14202, configured to pass the first one-dimensional convolution feature map through a first two-dimensional convolution layer with a first dilation rate to obtain the first branch perception feature map; a second one-dimensional convolution encoding secondary subunit 14203, configured to pass the first convolution feature map through a second one-dimensional convolution layer of the second branch perceptual domain unit to obtain a second one-dimensional convolution feature map; a second dilated convolution encoding secondary subunit 14204, configured to pass the second one-dimensional convolution feature map through a second two-dimensional convolution layer with a second dilation rate to obtain the second branch perception feature map; a third one-dimensional convolution encoding secondary subunit 14205, configured to pass the first convolution feature map through a third one-dimensional convolution layer of the third branch perceptual domain unit to obtain a third one-dimensional convolution feature map; a third dilated convolution encoding secondary subunit 14206, configured to pass the third one-dimensional convolution feature map through a third two-dimensional convolution layer with a third dilation rate to obtain the third branch perception feature map; a fourth one-dimensional convolution encoding secondary subunit 14207, configured to pass the second convolution feature map through a fourth one-dimensional convolution layer of the fourth branch perceptual domain unit to obtain a fourth one-dimensional convolution feature map; a fourth dilated convolution encoding secondary subunit 14208, configured to pass the fourth one-dimensional convolution feature map through a fourth two-dimensional convolution layer with a fourth dilation rate to obtain the fourth branch perception feature map; a fifth one-dimensional convolution encoding secondary subunit 14209, configured to pass the second convolution feature map through a fifth one-dimensional convolution layer of the fifth branch perceptual domain unit to obtain a fifth one-dimensional convolution feature map; a fifth dilated convolution encoding secondary subunit 14210, configured to pass the fifth one-dimensional convolution feature map through a fifth two-dimensional convolution layer with a fifth dilation rate to obtain the fifth branch perception feature map; a sixth one-dimensional convolution encoding secondary subunit 14211, configured to pass the second convolution feature map through a sixth one-dimensional convolution layer of the sixth branch perceptual domain unit to obtain a sixth one-dimensional convolution feature map; and a sixth dilated convolution encoding secondary subunit 14212, configured to pass the sixth one-dimensional convolution feature map through a sixth two-dimensional convolution layer with a sixth dilation rate to obtain the sixth branch perception feature map.
Accordingly, in a specific example, the first, second, and third dilation rates are mutually unequal, and the fourth, fifth, and sixth dilation rates are mutually unequal.
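Putting the pieces of figs. 4 and 5 together, the sketch below shows one way the multi-branch perceptual domain module could be realized; it is only illustrative. The "one-dimensional convolution" layers are interpreted here as 1×1 point convolutions, and the channel count and dilation rates (1, 2, 3) are assumptions, since the filing fixes none of these values.

```python
import torch
import torch.nn as nn

class BranchPerceptualUnit(nn.Module):
    """One branch: a one-dimensional (1x1) convolution, then a 3x3
    two-dimensional convolution with a given dilation rate."""
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.conv1d = nn.Conv2d(channels, channels, kernel_size=1)
        self.dilated = nn.Conv2d(channels, channels, kernel_size=3,
                                 dilation=dilation, padding=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dilated(self.conv1d(x))

class MultiBranchPerceptualModule(nn.Module):
    """First point convolution -> three parallel dilated branches with
    mutually unequal dilation rates -> concatenation -> second point
    convolution (channel correction) -> residual position-wise sum."""
    def __init__(self, in_channels: int = 64, dilations=(1, 2, 3)):
        super().__init__()
        self.point1 = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.branches = nn.ModuleList(
            BranchPerceptualUnit(in_channels, d) for d in dilations)
        self.point2 = nn.Conv2d(in_channels * len(dilations), in_channels,
                                kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        conv = self.point1(x)                                   # point conv
        fused = torch.cat([b(conv) for b in self.branches], 1)  # concatenate
        corrected = self.point2(fused)                          # channel correction
        return corrected + conv                                 # residual sum
```

Each of the first and second feature maps would be passed through such a module (with its own branch units) to obtain the corresponding multi-scale perception feature map.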
More specifically, in this embodiment of the present application, the difference evaluation unit 150 is configured to calculate a difference feature map between the first multi-scale perception feature map and the second multi-scale perception feature map. The difference between the acoustic features of the rain shoe before soaking and after soaking and wiping dry is thereby classified to evaluate the waterproof performance of the shoe. In a specific example, the position-wise difference between the first multi-scale perception feature map and the second multi-scale perception feature map may be calculated to obtain the difference feature map.
Accordingly, in a specific example, the difference evaluation unit 150 is further configured to calculate the difference feature map between the first multi-scale perception feature map and the second multi-scale perception feature map with the following formula:

$$F_d = F_1 \ominus F_2$$

where $F_d$ denotes the difference feature map, $F_1$ denotes the first multi-scale perception feature map, $F_2$ denotes the second multi-scale perception feature map, and $\ominus$ denotes position-wise subtraction.
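In tensor terms this is a plain element-wise subtraction, as the short sketch below shows (the feature-map shape is an assumption):

```python
import torch

f1 = torch.randn(1, 64, 32, 32)  # first multi-scale perception feature map
f2 = torch.randn(1, 64, 32, 32)  # second multi-scale perception feature map
f_d = f1 - f2                    # difference feature map: F_d = F_1 ⊖ F_2
```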
More specifically, in this embodiment, the waterproof monitoring result generating unit 160 is configured to pass the difference feature map through a classifier to obtain a classification result, the classification result indicating whether the waterproof performance of the rain shoe to be detected meets a predetermined standard.
Accordingly, in a specific example, the waterproof monitoring result generating unit 160 is further configured to process the difference feature map with the classifier according to the following formula to generate the classification result:

$$\mathrm{softmax}\{(M_c, B_c) \mid \mathrm{Project}(F)\} = \mathrm{softmax}\big(M_c \cdot \mathrm{Project}(F) + B_c\big)$$

where $\mathrm{Project}(F)$ denotes projecting the difference feature map to a vector, $M_c$ is the weight matrix of the fully connected layer, and $B_c$ is the bias vector of the fully connected layer.
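A minimal sketch of such a classifier head follows; pooling before flattening is one assumed way to realize Project(F), and the channel count and the two-class output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Classifier(nn.Module):
    """Project the difference feature map to a vector, apply a fully
    connected layer (weights M_c, bias B_c), then softmax."""
    def __init__(self, channels: int = 64, n_classes: int = 2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # part of Project(F)
        self.fc = nn.Linear(channels, n_classes)   # M_c and B_c

    def forward(self, f_d: torch.Tensor) -> torch.Tensor:
        v = torch.flatten(self.pool(f_d), 1)       # Project(F): map -> vector
        return torch.softmax(self.fc(v), dim=1)    # class probabilities
```

The two output classes correspond to the waterproof performance of the rain shoe meeting, or failing to meet, the predetermined standard.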
Accordingly, in a specific example, the shoe making machine for multi-angle waterproof monitoring of rain shoe production further includes a training module for training the twin network model, the multi-branch perceptual domain module, and the classifier. As shown in fig. 6, the training module 200 includes: a training detection signal acquisition unit 210, configured to acquire training data, the training data including a first training sound signal of the rain shoe to be detected before soaking, a second training sound signal of the rain shoe to be detected after it has been soaked and wiped dry, and a ground-truth label of whether the waterproof performance of the rain shoe to be detected meets the predetermined standard; a training time domain converting unit 220, configured to calculate time-domain enhancement maps of the first training sound signal and the second training sound signal to obtain a first training time-domain enhancement map and a second training time-domain enhancement map; a training twin coding unit 230, configured to pass the first training time-domain enhancement map and the second training time-domain enhancement map through the twin network model including the first convolutional neural network and the second convolutional neural network to obtain a first training feature map and a second training feature map, where the first convolutional neural network and the second convolutional neural network have the same network structure; a training multi-scale perception unit 240, configured to pass the first training feature map and the second training feature map through the multi-branch perceptual domain module to obtain a first training multi-scale perception feature map and a second training multi-scale perception feature map; a training difference evaluation unit 250, configured to calculate a training difference feature map between the first training multi-scale perception feature map and the second training multi-scale perception feature map; a classification loss function value calculating unit 260, configured to pass the training difference feature map through the classifier to obtain a classification loss function value; a classification-pattern digestion-inhibition loss function value calculating unit 270, configured to calculate a classification-pattern digestion-inhibition loss function value of the first training multi-scale perception feature map and the second training multi-scale perception feature map; and a training unit 280, configured to train the twin network model, the multi-branch perceptual domain module, and the classifier with a weighted sum of the classification-pattern digestion-inhibition loss function value and the classification loss function value as the loss function value.
In particular, in the technical solution of the present application, since the difference feature map that serves as the classification feature map is obtained as a difference between the first multi-scale perception feature map and the second multi-scale perception feature map, the gradient of the loss function, when computed and back-propagated from the classifier into the model during training, passes separately through the two cascaded branches of convolutional neural network and multi-branch perceptual domain module that produce the two multi-scale perception feature maps. Abnormal gradient branching at this point may then dissolve (digest) the feature patterns extracted by those cascaded branches. Therefore, in addition to the classification loss function, a classification-pattern digestion-inhibition loss function is further introduced to counteract the digestion of the extracted feature patterns.
Accordingly, in a specific example, the classification-pattern digestion-inhibition loss function value calculating unit 270 is further configured to calculate the loss function value with the following formula:

$$\mathcal{L}_{\mathrm{inhibit}} = \frac{\left\| \exp\left( M_1 \ominus M_2 \right) \right\|_F}{\left\| \exp\left( V_1 \ominus V_2 \right) \right\|_2^{2}}$$

where $V_1$ and $V_2$ are the feature vectors obtained by unrolling the first training multi-scale perception feature map and the second training multi-scale perception feature map, $M_1$ and $M_2$ are the weight matrices the classifier applies to those feature vectors, $\|\cdot\|_2^2$ represents the square of the two-norm of a vector, $\|\cdot\|_F$ represents the Frobenius norm of a matrix, $\ominus$ represents position-wise subtraction, and $\exp(\cdot)$ represents the exponential operation that raises the natural base to the value at each position of a vector or matrix.
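The original filing renders this formula only as an image, so the exact way the two norms are combined above is a reconstruction from the operator definitions; the sketch below implements that assumed form and shows how the weighted sum with the classification loss could be taken.

```python
import torch

def digestion_inhibition_loss(v1: torch.Tensor, v2: torch.Tensor,
                              m1: torch.Tensor, m2: torch.Tensor) -> torch.Tensor:
    """Assumed form of the classification-pattern digestion-inhibition
    loss: ||exp(M1 - M2)||_F divided by ||exp(V1 - V2)||_2^2."""
    num = torch.linalg.norm(torch.exp(m1 - m2))       # Frobenius norm (matrix)
    den = torch.linalg.norm(torch.exp(v1 - v2)) ** 2  # squared two-norm (vector)
    return num / den

# Illustrative training objective: a weighted sum with the classification
# loss, the weight 0.5 being an arbitrary assumption.
# total_loss = classification_loss + 0.5 * digestion_inhibition_loss(v1, v2, m1, m2)
```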
That is, the classification-pattern digestion-inhibition loss function pushes the pseudo-difference encoded in the classifier weights toward the real feature-distribution difference between the first and second multi-scale perception feature maps, i.e., toward the feature distribution of the difference feature map. This ensures that the directional derivative is regularized near the gradient branch point during back-propagation, that is, the gradient is re-weighted between the two cascaded branches of convolutional neural network and multi-branch perceptual domain module. The digestion of classification patterns is thereby inhibited, the ability of those cascaded branches to extract classification features is improved, and the accuracy of the classification result of the difference feature map is improved accordingly. In this way, the waterproof performance of rain shoes can be evaluated and detected from the acoustic characteristics of the shoe to be detected, avoiding the overlong detection time that long soaking causes in the traditional process while also improving the accuracy of waterproof-performance detection.
In summary, the shoe making machine 100 for multi-angle waterproof monitoring of rain shoe production according to the embodiment of the present application has been illustrated. It first calculates time-domain enhancement maps of a first sound signal, collected from the rain shoe to be detected before soaking, and a second sound signal, collected after the shoe has been soaked and wiped dry, to obtain a first time-domain enhancement map and a second time-domain enhancement map. It then passes the two enhancement maps through a twin network model to obtain a first feature map and a second feature map, passes those respectively through a multi-branch perceptual domain module, and calculates the difference feature map between the resulting first and second multi-scale perception feature maps. Finally, it passes the difference feature map through a classifier to obtain a classification result indicating whether the waterproof performance of the rain shoe to be detected meets a predetermined standard. In this way, detection time can be shortened and the accuracy of rain shoe waterproof-performance detection improved.
As described above, the shoe making machine 100 for multi-angle waterproof monitoring of rain shoe production according to the embodiment of the present application may be implemented in various terminal devices, such as a server on which the multi-angle waterproof monitoring algorithm for rain shoe production is deployed. In one example, the shoe making machine 100 may be integrated into the terminal device as a software module and/or a hardware module. For example, it may be a software module in the operating system of the terminal device, or an application developed for the terminal device; of course, it may also be one of many hardware modules of the terminal device.
Alternatively, in another example, the shoe making machine 100 for multi-angle waterproof monitoring of rain shoe production and the terminal device may be separate devices, with the shoe making machine 100 connected to the terminal device through a wired and/or wireless network and exchanging interactive information in an agreed data format.
Exemplary method
FIG. 7 illustrates a flow chart of the multi-angle waterproof monitoring method for rain shoe production according to an embodiment of the present application. As shown in fig. 7, the method includes: S110, acquiring a first sound signal of the rain shoe to be detected before soaking and a second sound signal of the rain shoe to be detected after it has been soaked and wiped dry; S120, calculating time-domain enhancement maps of the first sound signal and the second sound signal to obtain a first time-domain enhancement map and a second time-domain enhancement map; S130, passing the first time-domain enhancement map and the second time-domain enhancement map through a twin network model including a first convolutional neural network and a second convolutional neural network to obtain a first feature map and a second feature map, where the first convolutional neural network and the second convolutional neural network have the same network structure; S140, passing the first feature map and the second feature map respectively through a multi-branch perceptual domain module to obtain a first multi-scale perception feature map and a second multi-scale perception feature map; S150, calculating a difference feature map between the first multi-scale perception feature map and the second multi-scale perception feature map; and S160, passing the difference feature map through a classifier to obtain a classification result, the classification result indicating whether the waterproof performance of the rain shoe to be detected meets a predetermined standard.
Fig. 8 illustrates a schematic diagram of the system architecture of the multi-angle waterproof monitoring method for rain shoe production according to an embodiment of the present application. As shown in fig. 8, in this system architecture, a first sound signal of the rain shoe to be detected before soaking and a second sound signal of the rain shoe to be detected after soaking and wiping dry are first acquired. Time-domain enhancement maps of the two sound signals are then calculated to obtain a first time-domain enhancement map and a second time-domain enhancement map, which are passed through a twin network model including a first convolutional neural network and a second convolutional neural network with the same network structure to obtain a first feature map and a second feature map. The two feature maps are respectively passed through a multi-branch perceptual domain module to obtain a first multi-scale perception feature map and a second multi-scale perception feature map, the difference feature map between them is calculated, and finally the difference feature map is passed through a classifier to obtain a classification result indicating whether the waterproof performance of the rain shoe to be detected meets a predetermined standard.
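Tying steps S110-S160 together, the sketch below strings the illustrative components defined earlier in this description (time_domain_enhancement_map, TwinEncoder, MultiBranchPerceptualModule, Classifier) into one inference pass; all shapes and module choices are assumptions carried over from those sketches.

```python
import torch

def monitor_waterproofing(sig_before, sig_after, twin, module1, module2, clf):
    """sig_before / sig_after: 1-D numpy sound signals (S110)."""
    e1 = torch.as_tensor(time_domain_enhancement_map(sig_before))[None, None].float()
    e2 = torch.as_tensor(time_domain_enhancement_map(sig_after))[None, None].float()  # S120
    f1, f2 = twin(e1, e2)                       # S130: twin network encoding
    p1, p2 = module1(f1), module2(f2)           # S140: multi-scale perception maps
    f_d = p1 - p2                               # S150: difference feature map
    probs = clf(f_d)                            # S160: classification
    return bool(probs.argmax(dim=1).item())     # True if the standard is met

# Usage with the sketched modules:
# result = monitor_waterproofing(sig1, sig2, TwinEncoder(),
#                                MultiBranchPerceptualModule(),
#                                MultiBranchPerceptualModule(), Classifier())
```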
In a specific example, in the above multi-angle waterproof monitoring method for rain shoe production, passing the first time domain enhancement map and the second time domain enhancement map through a twin network model comprising a first convolutional neural network and a second convolutional neural network to obtain a first feature map and a second feature map includes: performing convolution processing, pooling processing and nonlinear activation processing on the input data in the forward pass of each layer of the first convolutional neural network, so that the last layer of the first convolutional neural network outputs the first feature map; and performing convolution processing, pooling processing and nonlinear activation processing on the input data in the forward pass of each layer of the second convolutional neural network, so that the last layer of the second convolutional neural network outputs the second feature map.
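A minimal sketch of one such convolutional branch follows, assuming PyTorch; the depth, channel widths, kernel sizes and the choice of ReLU and max-pooling are illustrative assumptions, since the application only specifies that each layer applies convolution, pooling and nonlinear activation.

```python
import torch.nn as nn

class ConvBranch(nn.Module):
    """One branch of the twin network: each stage applies convolution,
    nonlinear activation and pooling in its forward pass."""
    def __init__(self, in_channels: int = 1, width: int = 32):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_channels, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
            nn.Conv2d(width, 2 * width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
        )

    def forward(self, x):
        # The last layer outputs the feature map for this branch.
        return self.layers(x)

# The two branches have the same network structure; sharing one instance
# (i.e., shared weights) is one natural reading of the twin model.
first_cnn = second_cnn = ConvBranch()
```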
In a specific example, in the above multi-angle waterproof monitoring method for rain shoe production, passing the first feature map and the second feature map respectively through a multi-branch perceptual domain module to obtain a first multi-scale perceptual feature map and a second multi-scale perceptual feature map includes: inputting the first feature map and the second feature map respectively into a first point convolution layer of the multi-branch perceptual domain module to obtain a first convolution feature map and a second convolution feature map; passing the first convolution feature map and the second convolution feature map respectively through a first branch perceptual domain unit, a second branch perceptual domain unit and a third branch perceptual domain unit of the multi-branch perceptual domain module to obtain a first branch perceptual feature map, a second branch perceptual feature map and a third branch perceptual feature map, as well as a fourth branch perceptual feature map, a fifth branch perceptual feature map and a sixth branch perceptual feature map, wherein the first branch perceptual domain unit, the second branch perceptual domain unit and the third branch perceptual domain unit have parallel structures; cascading the first branch perceptual feature map, the second branch perceptual feature map and the third branch perceptual feature map to obtain a first fused perceptual feature map, and cascading the fourth branch perceptual feature map, the fifth branch perceptual feature map and the sixth branch perceptual feature map to obtain a second fused perceptual feature map; inputting the first fused perceptual feature map and the second fused perceptual feature map respectively into a second point convolution layer of the multi-branch perceptual domain module to obtain a first channel-corrected fused perceptual feature map and a second channel-corrected fused perceptual feature map; and calculating the position-wise sum of the first channel-corrected fused perceptual feature map and the first convolution feature map to obtain the first multi-scale perceptual feature map, and calculating the position-wise sum of the second channel-corrected fused perceptual feature map and the second convolution feature map to obtain the second multi-scale perceptual feature map.
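A sketch of the multi-branch perceptual domain module under the same PyTorch assumptions follows; channel counts and the specific void (dilation) rates are illustrative. Each branch is reduced here to a single dilated convolution; a fuller branch unit including the one-dimensional convolution layer is sketched after the next paragraph.

```python
import torch
import torch.nn as nn

class MultiBranchPerceptualDomain(nn.Module):
    def __init__(self, channels: int = 64, void_rates=(1, 2, 3)):
        super().__init__()
        # First point (1x1) convolution layer.
        self.point_conv_1 = nn.Conv2d(channels, channels, kernel_size=1)
        # Three parallel branch perceptual domain units (simplified here
        # to one dilated convolution each).
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=r, dilation=r)
            for r in void_rates
        )
        # Second point convolution layer for channel correction.
        self.point_conv_2 = nn.Conv2d(len(void_rates) * channels,
                                      channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.point_conv_1(x)                   # convolution feature map
        fused = torch.cat([b(y) for b in self.branches], dim=1)  # cascade
        corrected = self.point_conv_2(fused)       # channel-corrected map
        return corrected + y                       # residual position-wise sum
```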
In a specific example, in the above multi-angle waterproof monitoring method for rain shoe production, passing the first convolution feature map and the second convolution feature map respectively through the first branch perceptual domain unit, the second branch perceptual domain unit and the third branch perceptual domain unit of the multi-branch perceptual domain module to obtain the first to third branch perceptual feature maps and the fourth to sixth branch perceptual feature maps includes: passing the first convolution feature map through a first one-dimensional convolution layer of the first branch perceptual domain unit to obtain a first one-dimensional convolution feature map; passing the first one-dimensional convolution feature map through a first two-dimensional convolution layer with a first void rate to obtain the first branch perceptual feature map; passing the first convolution feature map through a second one-dimensional convolution layer of the second branch perceptual domain unit to obtain a second one-dimensional convolution feature map; passing the second one-dimensional convolution feature map through a second two-dimensional convolution layer with a second void rate to obtain the second branch perceptual feature map; passing the first convolution feature map through a third one-dimensional convolution layer of the third branch perceptual domain unit to obtain a third one-dimensional convolution feature map; passing the third one-dimensional convolution feature map through a third two-dimensional convolution layer with a third void rate to obtain the third branch perceptual feature map; passing the second convolution feature map through the first one-dimensional convolution layer of the first branch perceptual domain unit to obtain a fourth one-dimensional convolution feature map; passing the fourth one-dimensional convolution feature map through the first two-dimensional convolution layer with the first void rate to obtain the fourth branch perceptual feature map; passing the second convolution feature map through the second one-dimensional convolution layer of the second branch perceptual domain unit to obtain a fifth one-dimensional convolution feature map; passing the fifth one-dimensional convolution feature map through the second two-dimensional convolution layer with the second void rate to obtain the fifth branch perceptual feature map; passing the second convolution feature map through the third one-dimensional convolution layer of the third branch perceptual domain unit to obtain a sixth one-dimensional convolution feature map; and passing the sixth one-dimensional convolution feature map through the third two-dimensional convolution layer with the third void rate to obtain the sixth branch perceptual feature map.
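One branch perceptual domain unit could be sketched as follows, again assuming PyTorch. The (1, k) kernel used for the "one-dimensional" convolution layer is an assumption, as the application does not fix the kernel shapes.

```python
import torch.nn as nn

class BranchPerceptualDomainUnit(nn.Module):
    def __init__(self, channels: int, void_rate: int, k: int = 3):
        super().__init__()
        # One-dimensional convolution layer (1 x k kernel, an assumption).
        self.conv_1d = nn.Conv2d(channels, channels,
                                 kernel_size=(1, k), padding=(0, k // 2))
        # Two-dimensional convolution layer with the given void rate.
        self.conv_2d = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=void_rate, dilation=void_rate)

    def forward(self, x):
        return self.conv_2d(self.conv_1d(x))

# Three units with mutually unequal void rates; both convolution feature
# maps pass through the same three units to yield the six branch
# perceptual feature maps.
units = nn.ModuleList(BranchPerceptualDomainUnit(64, r) for r in (1, 2, 3))
```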
In a specific example, in the above multi-angle waterproof monitoring method for rain shoe production, the first void rate, the second void rate and the third void rate are not equal to each other, so that the three parallel branches capture features at mutually different receptive-field scales.
In a specific example, in the above multi-angle waterproof monitoring method for rain shoe production, calculating the differential feature map between the first multi-scale perceptual feature map and the second multi-scale perceptual feature map further includes: calculating the differential feature map using the following formula:

$$F_d = F_1 \ominus F_2$$

wherein $F_d$ denotes the differential feature map, $F_1$ denotes the first multi-scale perceptual feature map, $F_2$ denotes the second multi-scale perceptual feature map, and $\ominus$ denotes position-wise subtraction.
In a specific example, in the above multi-angle waterproof monitoring method for rain shoe production, passing the differential feature map through a classifier to obtain a classification result further includes: processing the differential feature map with the classifier according to the following formula to generate the classification result, wherein the formula is:

$$\text{softmax}\{(M_c, B_c) \mid \text{Project}(F)\}$$

wherein $\text{Project}(F)$ denotes projecting the differential feature map into a vector, $M_c$ denotes the weight matrix of the fully connected layer, and $B_c$ denotes the bias vector of the fully connected layer.
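Read together with the difference formula above, the classifier stage could be sketched as follows; the feature dimension and the two-class output are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class DifferenceClassifier(nn.Module):
    def __init__(self, feature_dim: int, num_classes: int = 2):
        super().__init__()
        # Fully connected layer: weight matrix M_c and bias vector B_c.
        self.fc = nn.Linear(feature_dim, num_classes)

    def forward(self, p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
        f_d = p1 - p2                         # F_d: position-wise subtraction
        f = torch.flatten(f_d, start_dim=1)   # Project(F): map -> vector
        return torch.softmax(self.fc(f), dim=1)
```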
In a specific example, the above multi-angle waterproof monitoring method for rain shoe production further includes: training the twin network model, the multi-branch perceptual domain module and the classifier. The training includes: acquiring training data, the training data comprising a first training sound signal of the rain shoe to be detected before soaking, a second training sound signal of the same rain shoe after it has been soaked and wiped dry, and a ground-truth label indicating whether the waterproof performance of the rain shoe to be detected meets the preset standard; calculating time domain enhancement maps of the first training sound signal and the second training sound signal to obtain a first training time domain enhancement map and a second training time domain enhancement map; passing the first training time domain enhancement map and the second training time domain enhancement map through the twin network model comprising the first convolutional neural network and the second convolutional neural network to obtain a first training feature map and a second training feature map, wherein the first convolutional neural network and the second convolutional neural network have the same network structure; passing the first training feature map and the second training feature map respectively through the multi-branch perceptual domain module to obtain a first training multi-scale perceptual feature map and a second training multi-scale perceptual feature map; calculating a training differential feature map between the first training multi-scale perceptual feature map and the second training multi-scale perceptual feature map; passing the training differential feature map through the classifier to obtain a classification loss function value; calculating a classification mode digestion inhibition loss function value of the first training multi-scale perceptual feature map and the second training multi-scale perceptual feature map; and training the twin network model, the multi-branch perceptual domain module and the classifier with the weighted sum of the classification mode digestion inhibition loss function value and the classification loss function value as the loss function value.
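A sketch of one training iteration under these steps follows. The per-branch linear heads head_1 and head_2 (supplying the weight matrices M1 and M2), the weighting factor alpha, and the helper digestion_inhibition_loss (sketched after the loss formula below) are assumptions; the application specifies only a weighted sum of the two loss values.

```python
import torch
import torch.nn.functional as F

def training_step(backbone, perceptual, classifier_fc, head_1, head_2,
                  optimizer, enh_1, enh_2, label, alpha: float = 0.5):
    # Twin encoding and multi-scale perception of the two training maps.
    p1 = perceptual(backbone(enh_1))
    p2 = perceptual(backbone(enh_2))
    v1 = torch.flatten(p1, start_dim=1)   # unfolded feature vector V1
    v2 = torch.flatten(p2, start_dim=1)   # unfolded feature vector V2
    # Classification loss on the training differential feature map.
    cls_loss = F.cross_entropy(classifier_fc(v1 - v2), label)
    # Classification mode digestion inhibition loss (see sketch below).
    di_loss = digestion_inhibition_loss(v1, v2,
                                        head_1.weight, head_2.weight)
    loss = cls_loss + alpha * di_loss     # weighted sum as the loss value
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```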
In a specific example, in the above multi-angle waterproof monitoring method for rain shoe production, calculating the classification mode digestion inhibition loss function value of the first training multi-scale perceptual feature map and the second training multi-scale perceptual feature map further includes: calculating the classification mode digestion inhibition loss function value using the following formula:

$$\mathcal{L} = \frac{\left\| \exp(V_1) \ominus \exp(V_2) \right\|_2^2}{\left\| \exp(M_1) \ominus \exp(M_2) \right\|_F}$$

wherein $V_1$ and $V_2$ denote the feature vectors obtained by unfolding the first training multi-scale perceptual feature map and the second training multi-scale perceptual feature map respectively, $M_1$ and $M_2$ denote the weight matrices of the classifier acting on those feature vectors respectively, $\|\cdot\|_2^2$ denotes the square of the two-norm of a vector, $\|\cdot\|_F$ denotes the Frobenius norm of a matrix, $\ominus$ denotes position-wise subtraction, and $\exp(\cdot)$ denotes the exponential operation on a vector or matrix, that is, computing the natural exponent of the feature value at each position.
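A direct transcription of the reconstructed formula, assuming PyTorch; the grouping of the exponential and norm terms follows the reconstruction above and should be treated as an assumption.

```python
import torch

def digestion_inhibition_loss(v1: torch.Tensor, v2: torch.Tensor,
                              m1: torch.Tensor,
                              m2: torch.Tensor) -> torch.Tensor:
    # Squared two-norm of the position-wise difference of exp(V1), exp(V2).
    num = (torch.exp(v1) - torch.exp(v2)).pow(2).sum()
    # Frobenius norm of the position-wise difference of exp(M1), exp(M2).
    den = torch.linalg.norm(torch.exp(m1) - torch.exp(m2), ord='fro')
    return num / den
```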
Here, it can be understood by those skilled in the art that the detailed operations of the respective steps in the above-described multi-angle waterproof monitoring method for rain shoe production have been described in detail in the above description of the shoe maker for multi-angle waterproof monitoring for rain shoe production with reference to fig. 1 to 6, and thus, a repetitive description thereof will be omitted.

Claims (10)

1. A shoe making machine for multi-angle waterproof monitoring in rain shoe production, characterized by comprising:
a detection signal acquisition unit, configured to acquire a first sound signal of the rain shoe to be detected before soaking and a second sound signal of the same rain shoe after it has been soaked and wiped dry;
a time domain conversion unit, configured to calculate time domain enhancement maps of the first sound signal and the second sound signal to obtain a first time domain enhancement map and a second time domain enhancement map;
a twin coding unit, configured to pass the first time domain enhancement map and the second time domain enhancement map through a twin network model comprising a first convolutional neural network and a second convolutional neural network to obtain a first feature map and a second feature map, wherein the first convolutional neural network and the second convolutional neural network have the same network structure;
a multi-scale perception unit, configured to pass the first feature map and the second feature map respectively through a multi-branch perceptual domain module to obtain a first multi-scale perceptual feature map and a second multi-scale perceptual feature map;
a difference evaluation unit, configured to calculate a differential feature map between the first multi-scale perceptual feature map and the second multi-scale perceptual feature map; and
a waterproof monitoring result generating unit, configured to pass the differential feature map through a classifier to obtain a classification result, wherein the classification result indicates whether the waterproof performance of the rain shoe to be detected meets a preset standard.
2. The shoe making machine for multi-angle waterproof monitoring of rain shoe production according to claim 1, wherein the twin coding unit comprises:
a first convolutional coding subunit, configured to perform convolution processing, pooling processing and nonlinear activation processing on the input data in the forward pass of each layer of the first convolutional neural network, so that the last layer of the first convolutional neural network outputs the first feature map; and
a second convolutional coding subunit, configured to perform convolution processing, pooling processing and nonlinear activation processing on the input data in the forward pass of each layer of the second convolutional neural network, so that the last layer of the second convolutional neural network outputs the second feature map.
3. The shoe making machine for multi-angle waterproof monitoring of rain shoe production according to claim 2, wherein the multi-scale perception unit comprises:
a first point convolution subunit, configured to input the first feature map and the second feature map respectively into a first point convolution layer of the multi-branch perceptual domain module to obtain a first convolution feature map and a second convolution feature map;
a multi-branch perception subunit, configured to pass the first convolution feature map and the second convolution feature map respectively through a first branch perceptual domain unit, a second branch perceptual domain unit and a third branch perceptual domain unit of the multi-branch perceptual domain module to obtain a first branch perceptual feature map, a second branch perceptual feature map and a third branch perceptual feature map, as well as a fourth branch perceptual feature map, a fifth branch perceptual feature map and a sixth branch perceptual feature map, wherein the first branch perceptual domain unit, the second branch perceptual domain unit and the third branch perceptual domain unit have parallel structures;
a fusion subunit, configured to cascade the first branch perceptual feature map, the second branch perceptual feature map and the third branch perceptual feature map to obtain a first fused perceptual feature map, and cascade the fourth branch perceptual feature map, the fifth branch perceptual feature map and the sixth branch perceptual feature map to obtain a second fused perceptual feature map;
a second point convolution subunit, configured to input the first fused perceptual feature map and the second fused perceptual feature map respectively into a second point convolution layer of the multi-branch perceptual domain module to obtain a first channel-corrected fused perceptual feature map and a second channel-corrected fused perceptual feature map; and
a residual cascade subunit, configured to calculate the position-wise sum of the first channel-corrected fused perceptual feature map and the first convolution feature map to obtain the first multi-scale perceptual feature map, and calculate the position-wise sum of the second channel-corrected fused perceptual feature map and the second convolution feature map to obtain the second multi-scale perceptual feature map.
4. The shoe making machine for multi-angle waterproof monitoring of rain shoe production according to claim 3, wherein the multi-branch perception subunit comprises:
a first one-dimensional convolution coding secondary subunit, configured to pass the first convolution feature map through a first one-dimensional convolution layer of the first branch perceptual domain unit to obtain a first one-dimensional convolution feature map;
a first void convolution coding secondary subunit, configured to pass the first one-dimensional convolution feature map through a first two-dimensional convolution layer with a first void rate to obtain the first branch perceptual feature map;
a second one-dimensional convolution coding secondary subunit, configured to pass the first convolution feature map through a second one-dimensional convolution layer of the second branch perceptual domain unit to obtain a second one-dimensional convolution feature map;
a second void convolution coding secondary subunit, configured to pass the second one-dimensional convolution feature map through a second two-dimensional convolution layer with a second void rate to obtain the second branch perceptual feature map;
a third one-dimensional convolution coding secondary subunit, configured to pass the first convolution feature map through a third one-dimensional convolution layer of the third branch perceptual domain unit to obtain a third one-dimensional convolution feature map;
a third void convolution coding secondary subunit, configured to pass the third one-dimensional convolution feature map through a third two-dimensional convolution layer with a third void rate to obtain the third branch perceptual feature map;
a fourth one-dimensional convolution coding secondary subunit, configured to pass the second convolution feature map through the first one-dimensional convolution layer of the first branch perceptual domain unit to obtain a fourth one-dimensional convolution feature map;
a fourth void convolution coding secondary subunit, configured to pass the fourth one-dimensional convolution feature map through the first two-dimensional convolution layer with the first void rate to obtain the fourth branch perceptual feature map;
a fifth one-dimensional convolution coding secondary subunit, configured to pass the second convolution feature map through the second one-dimensional convolution layer of the second branch perceptual domain unit to obtain a fifth one-dimensional convolution feature map;
a fifth void convolution coding secondary subunit, configured to pass the fifth one-dimensional convolution feature map through the second two-dimensional convolution layer with the second void rate to obtain the fifth branch perceptual feature map;
a sixth one-dimensional convolution coding secondary subunit, configured to pass the second convolution feature map through the third one-dimensional convolution layer of the third branch perceptual domain unit to obtain a sixth one-dimensional convolution feature map; and
a sixth void convolution coding secondary subunit, configured to pass the sixth one-dimensional convolution feature map through the third two-dimensional convolution layer with the third void rate to obtain the sixth branch perceptual feature map.
5. The shoe making machine for multi-angle waterproof monitoring of rain shoe production according to claim 4, wherein the first void rate, the second void rate and the third void rate are not equal to each other.
6. The multi-angle waterproof monitoring shoe making machine for rain shoe production as claimed in claim 5, wherein said difference evaluation unit is further configured to:
calculate the differential feature map between the first multi-scale perceptual feature map and the second multi-scale perceptual feature map using the following formula:

$$F_d = F_1 \ominus F_2$$

wherein $F_d$ denotes the differential feature map, $F_1$ denotes the first multi-scale perceptual feature map, $F_2$ denotes the second multi-scale perceptual feature map, and $\ominus$ denotes position-wise subtraction.
7. The shoe making machine for multi-angle waterproof monitoring of rain shoe production of claim 6, wherein said waterproof monitoring result generating unit is further configured to:
process the differential feature map with the classifier according to the following formula to generate the classification result, wherein the formula is:

$$\text{softmax}\{(M_c, B_c) \mid \text{Project}(F)\}$$

wherein $\text{Project}(F)$ denotes projecting the differential feature map into a vector, $M_c$ denotes the weight matrix of the fully connected layer, and $B_c$ denotes the bias vector of the fully connected layer.
8. The shoe making machine for multi-angle waterproof monitoring of rain shoe production of claim 1, further comprising a training module for training said twin network, said multi-branch perceptual domain module and said classifier;
wherein, the training module includes:
a training detection signal acquisition unit, configured to acquire training data, the training data comprising a first training sound signal of the rain shoe to be detected before soaking, a second training sound signal of the same rain shoe after it has been soaked and wiped dry, and a ground-truth label indicating whether the waterproof performance of the rain shoe to be detected meets the preset standard;
a training time domain converting unit, configured to calculate time domain enhancement maps of the first training sound signal and the second training sound signal to obtain a first training time domain enhancement map and a second training time domain enhancement map;
a training twin coding unit, configured to pass the first training time domain enhancement map and the second training time domain enhancement map through the twin network model comprising a first convolutional neural network and a second convolutional neural network to obtain a first training feature map and a second training feature map, wherein the first convolutional neural network and the second convolutional neural network have the same network structure;
a training multi-scale perception unit, configured to pass the first training feature map and the second training feature map respectively through the multi-branch perceptual domain module to obtain a first training multi-scale perceptual feature map and a second training multi-scale perceptual feature map;
a training difference evaluation unit, configured to calculate a training differential feature map between the first training multi-scale perceptual feature map and the second training multi-scale perceptual feature map;
a classification loss function value calculation unit, configured to pass the training differential feature map through the classifier to obtain a classification loss function value;
a classification mode digestion inhibition loss function value calculation unit, configured to calculate a classification mode digestion inhibition loss function value of the first training multi-scale perceptual feature map and the second training multi-scale perceptual feature map; and
a training unit, configured to train the twin network, the multi-branch perceptual domain module and the classifier with the weighted sum of the classification mode digestion inhibition loss function value and the classification loss function value as the loss function value.
9. The shoe making machine for multi-angle waterproof monitoring of rain shoe production according to claim 8, wherein the classification mode digestion inhibition loss function value calculation unit is further configured to: calculate the classification mode digestion inhibition loss function value using the following formula:

$$\mathcal{L} = \frac{\left\| \exp(V_1) \ominus \exp(V_2) \right\|_2^2}{\left\| \exp(M_1) \ominus \exp(M_2) \right\|_F}$$

wherein $V_1$ and $V_2$ denote the feature vectors obtained by unfolding the first training multi-scale perceptual feature map and the second training multi-scale perceptual feature map respectively, $M_1$ and $M_2$ denote the weight matrices of the classifier acting on those feature vectors respectively, $\|\cdot\|_2^2$ denotes the square of the two-norm of a vector, $\|\cdot\|_F$ denotes the Frobenius norm of a matrix, $\ominus$ denotes position-wise subtraction, and $\exp(\cdot)$ denotes the exponential operation on a vector or matrix, that is, computing the natural exponent of the feature value at each position.
10. A multi-angle waterproof monitoring method for rain shoe production, characterized by comprising:
acquiring a first sound signal of the rain shoe to be detected before soaking and a second sound signal of the same rain shoe after it has been soaked and wiped dry;
calculating time domain enhancement maps of the first sound signal and the second sound signal to obtain a first time domain enhancement map and a second time domain enhancement map;
passing the first time domain enhancement map and the second time domain enhancement map through a twin network model comprising a first convolutional neural network and a second convolutional neural network to obtain a first feature map and a second feature map, wherein the first convolutional neural network and the second convolutional neural network have the same network structure;
passing the first feature map and the second feature map respectively through a multi-branch perceptual domain module to obtain a first multi-scale perceptual feature map and a second multi-scale perceptual feature map;
calculating a differential feature map between the first multi-scale perceptual feature map and the second multi-scale perceptual feature map; and
passing the differential feature map through a classifier to obtain a classification result, wherein the classification result indicates whether the waterproof performance of the rain shoe to be detected meets a preset standard.
CN202211236175.3A 2022-10-10 2022-10-10 Multi-angle waterproof monitoring shoe making machine and method for rain shoe production Pending CN115471498A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211236175.3A CN115471498A (en) 2022-10-10 2022-10-10 Multi-angle waterproof monitoring shoe making machine and method for rain shoe production

Publications (1)

Publication Number Publication Date
CN115471498A true CN115471498A (en) 2022-12-13

Family

ID=84337716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211236175.3A Pending CN115471498A (en) 2022-10-10 2022-10-10 Multi-angle waterproof monitoring shoe making machine and method for rain shoe production

Country Status (1)

Country Link
CN (1) CN115471498A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116858943A (en) * 2023-02-03 2023-10-10 台州五标机械股份有限公司 Hollow shaft intelligent preparation method and system for new energy automobile

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination