CN118052814B - AI technology-based full-automatic specimen pretreatment system and method - Google Patents
AI technology-based full-automatic specimen pretreatment system and method
- Publication number
- CN118052814B CN118052814B CN202410444012.7A CN202410444012A CN118052814B CN 118052814 B CN118052814 B CN 118052814B CN 202410444012 A CN202410444012 A CN 202410444012A CN 118052814 B CN118052814 B CN 118052814B
- Authority
- CN
- China
- Prior art keywords
- feature map
- specimen
- image
- representing
- deep
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N1/00—Sampling; Preparing specimens for investigation
- G01N1/02—Devices for withdrawing samples
- G01N1/04—Devices for withdrawing samples in the solid state, e.g. by cutting
- G01N1/06—Devices for withdrawing samples in the solid state, e.g. by cutting providing a thin slice, e.g. microtome
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N1/00—Sampling; Preparing specimens for investigation
- G01N1/28—Preparing specimens for investigation including physical details of (bio-)chemical methods covered elsewhere, e.g. G01N33/50, C12Q
- G01N1/286—Preparing specimens for investigation including physical details of (bio-)chemical methods covered elsewhere, e.g. G01N33/50, C12Q involving mechanical work, e.g. chopping, disintegrating, compacting, homogenising
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N1/00—Sampling; Preparing specimens for investigation
- G01N1/28—Preparing specimens for investigation including physical details of (bio-)chemical methods covered elsewhere, e.g. G01N33/50, C12Q
- G01N1/30—Staining; Impregnating ; Fixation; Dehydration; Multistep processes for preparing samples of tissue, cell or nucleic acid material and the like for analysis
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N1/00—Sampling; Preparing specimens for investigation
- G01N1/28—Preparing specimens for investigation including physical details of (bio-)chemical methods covered elsewhere, e.g. G01N33/50, C12Q
- G01N1/30—Staining; Impregnating ; Fixation; Dehydration; Multistep processes for preparing samples of tissue, cell or nucleic acid material and the like for analysis
- G01N1/31—Apparatus therefor
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N1/00—Sampling; Preparing specimens for investigation
- G01N1/28—Preparing specimens for investigation including physical details of (bio-)chemical methods covered elsewhere, e.g. G01N33/50, C12Q
- G01N1/36—Embedding or analogous mounting of samples
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/0099—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor comprising robots or similar manipulators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N1/00—Sampling; Preparing specimens for investigation
- G01N1/28—Preparing specimens for investigation including physical details of (bio-)chemical methods covered elsewhere, e.g. G01N33/50, C12Q
- G01N1/286—Preparing specimens for investigation including physical details of (bio-)chemical methods covered elsewhere, e.g. G01N33/50, C12Q involving mechanical work, e.g. chopping, disintegrating, compacting, homogenising
- G01N2001/2873—Cutting or cleaving
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Chemical & Material Sciences (AREA)
- Biochemistry (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Analytical Chemistry (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Robotics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Investigating Or Analysing Biological Materials (AREA)
Abstract
The application discloses a full-automatic specimen pretreatment system and method based on AI technology, relating to the field of AI. A deep-learning-based image processing technique and intelligent algorithms are used to perform multi-scale, multi-level feature extraction on a specimen image and to mine category information about the specimen, so that the specimen type is identified automatically from the implicit category feature information, providing stronger support and convenience for subsequent scientific research and experimental work.
Description
Technical Field
The application relates to the field of AI, and more particularly relates to a full-automatic specimen pretreatment system and method based on AI technology.
Background
Specimen pretreatment refers to subjecting a biological tissue or sample to a series of processing steps to ensure the quality and reliability of the sample prior to further analysis or detection of the sample. Specimen pretreatment typically includes steps of fixing, dehydrating, embedding, sectioning, staining, etc., which are intended to preserve the morphological structure, cellular structure, and chemical composition of the specimen so that the specimen may be used for subsequent microscopic observation, analysis, or other experimental manipulation.
In medical research and clinical diagnosis, the pretreatment of specimens is critical. Conventional specimen pretreatment often relies on manual operations, which are not only time-consuming and labor-intensive but can also lead to inconsistent specimen quality owing to differences in the skill and experience of operators.
With the development of Artificial Intelligence (AI) technology, AI has been widely applied in fields such as medical image analysis, disease prediction, and diagnosis. The development and application of AI technology provide new ideas for optimizing the specimen pretreatment process.
Disclosure of Invention
The present application has been made to solve the above-mentioned technical problems.
According to one aspect of the present application, there is provided a fully automated specimen pretreatment system based on AI technology, comprising:
The sample acquisition module is used for acquiring a sample by using the robot arm and the intelligent camera according to a preset acquisition scheme;
A specimen type identification module for identifying a type of the specimen;
The specimen preparation module is used for automatically carrying out fixing, dehydrating, embedding and slicing treatment on the specimen according to the type of the specimen and mounting the prepared specimen on a glass slide;
The specimen staining module is used for selecting a proper stain, spraying the stain on the glass slide through the ink-jet printing head and detecting the staining effect through the optical sensor;
the specimen scanning module is used for scanning the stained glass slide into a digital image;
The sample analysis module is used for analyzing the digital image obtained by scanning and giving a corresponding report;
The specimen type identification module includes:
a specimen image acquisition unit for acquiring a specimen image acquired by the camera;
The double-layer feature extraction unit is used for extracting shallow features and deep features of the sample image to obtain a sample image shallow feature map and a sample image deep feature map;
The multi-scale feature fusion unit is used for fusing the shallow feature map of the sample image and the deep feature map of the sample image to obtain a multi-scale feature map of the sample image;
A type identification unit for determining a sample type of the specimen based on the sample image multi-scale feature map;
wherein the multi-scale feature fusion unit comprises:
The feature interaction focusing enhancement subunit is used for inputting the specimen image deep feature map into a triple interaction focusing module to obtain an enhanced specimen image deep feature map;
the attention fusion subunit is used for inputting the enhanced specimen image deep feature map and the specimen image shallow feature map into a global average pooling attention fusion module so as to obtain the specimen image multi-scale feature map;
Wherein the feature interaction focus enhancement subunit comprises:
the triple interaction feature construction and extraction secondary subunit is used for constructing triple interaction features of the specimen image deep feature map to obtain a specimen image deep feature map after space dimension enhancement, a specimen image deep feature map after first interaction information enhancement and a specimen image deep feature map after second interaction information enhancement;
The fusion secondary subunit is used for fusing the sample image deep feature map after the space dimension enhancement, the sample image deep feature map after the first interaction information enhancement and the sample image deep feature map after the second interaction information enhancement to obtain the enhanced sample image deep feature map;
The triple interaction feature construction and extraction secondary subunit is used for:
Processing the specimen image deep feature map by using the following space dimension enhancement formula to obtain a specimen image deep feature map after the space dimension enhancement; wherein, the space dimension enhancement formula is:
$$W_s = \sigma\big(\mathrm{Conv}_{7\times 7}\big(\delta\big(\mathrm{Conv}_{1\times 1}(F)\big)\big)\big)$$
$$F_s = W_s \odot F$$
wherein $W_s$ is the spatial information weight matrix, $F$ is the specimen image deep feature map, $F_s$ is the specimen image deep feature map after spatial dimension enhancement, $\mathrm{Conv}_{1\times 1}(\cdot)$ represents a 1×1 convolution, $\mathrm{Conv}_{7\times 7}(\cdot)$ represents a 7×7 convolution, $\delta(\cdot)$ represents the ReLU function, $\sigma(\cdot)$ represents the Sigmoid function, and $\odot$ represents the Hadamard product;
Processing the specimen image deep feature map by using a first space and channel information interaction formula to obtain a first interaction information enhanced specimen image deep feature map; the first space and channel information interaction formula is:
$$W_{c1} = \sigma\big(\mathrm{Conv}_{7\times 7}\big(\delta\big(\mathrm{Conv}_{1\times 1}(\mathcal{T}_1(F))\big)\big)\big)$$
$$F_{c1} = \mathcal{T}_1^{-1}\big(W_{c1} \odot \mathcal{T}_1(F)\big)$$
wherein $W_{c1}$ is the first space and channel information interaction weight matrix, $F_{c1}$ is the first interaction information enhanced specimen image deep feature map, $\mathcal{T}_1(\cdot)$ and $\mathcal{T}_1^{-1}(\cdot)$ represent a transposition of the feature map and its inverse, and the remaining symbols are defined as above;
Processing the specimen image deep feature map by using a second space and channel information interaction formula to obtain a second interaction information enhanced specimen image deep feature map; the second space and channel information interaction formula is:
$$W_{c2} = \sigma\big(\mathrm{Conv}_{7\times 7}\big(\delta\big(\mathrm{Conv}_{1\times 1}(\mathcal{T}_2(F))\big)\big)\big)$$
$$F_{c2} = \mathcal{T}_2^{-1}\big(W_{c2} \odot \mathcal{T}_2(F)\big)$$
wherein $W_{c2}$ is the second space and channel information interaction weight matrix, $F_{c2}$ is the second interaction information enhanced specimen image deep feature map, and $\mathcal{T}_2(\cdot)$ and $\mathcal{T}_2^{-1}(\cdot)$ represent a second transposition of the feature map and its inverse.
In the above-mentioned full-automatic specimen pretreatment system based on AI technology, the double-layer feature extraction unit includes:
The image compensation subunit is used for carrying out brightness component compensation on the specimen image to obtain a brightness compensated specimen image;
And the image feature extraction subunit is used for extracting image features of the specimen image subjected to brightness compensation by using a deep learning network model so as to obtain a shallow feature map of the specimen image and a deep feature map of the specimen image.
In the above-mentioned fully automatic specimen pretreatment system based on AI technology, the image feature extraction subunit is configured to:
and passing the brightness compensated sample image through an image multi-scale feature extractor based on a pyramid network to obtain a shallow feature map of the sample image and a deep feature map of the sample image.
In the above-mentioned fully automatic specimen pretreatment system based on AI technology, the fusion secondary subunit is configured to:
processing the sample image deep feature map after the space dimension enhancement, the sample image deep feature map after the first interaction information enhancement and the sample image deep feature map after the second interaction information enhancement by using the following fusion formula to obtain the enhanced sample image deep feature map; wherein, the fusion formula is:
$$F_E = \mathrm{Concat}\big[F_s, F_{c1}, F_{c2}\big]$$
wherein $F_s$ is the specimen image deep feature map after spatial dimension enhancement, $F_{c1}$ is the first interaction information enhanced specimen image deep feature map, $F_{c2}$ is the second interaction information enhanced specimen image deep feature map, $F_E$ is the enhanced specimen image deep feature map, and $\mathrm{Concat}[\cdot]$ represents a cascade (concatenation) of feature maps.
In the above-mentioned fully automatic specimen pretreatment system based on AI technology, the attention fusion subunit is configured to:
Carrying out global average pooling on the enhanced specimen image deep feature map along the channel dimension to obtain an attention feature vector;
Passing the attention feature vector through a full connection layer to obtain an attention coding feature vector;
Using each feature value in the attention coding feature vector as a weight to multiply the specimen image shallow feature map, so as to obtain an attention adjustment feature map;
And carrying out position-by-position addition processing on the attention adjustment feature map and the enhanced specimen image deep feature map to obtain the sample image multi-scale feature map.
In the above-mentioned fully automatic specimen pretreatment system based on AI technology, the type identification unit includes:
The characteristic distribution correction subunit is used for carrying out characteristic distribution optimization on the sample image multi-scale characteristic vector obtained by expanding the sample image multi-scale characteristic map so as to obtain an optimized sample image multi-scale characteristic vector; and
And the sample type dividing and identifying subunit is used for enabling the multi-scale feature vector of the optimized sample image to pass through a classifier to obtain a classification result, wherein the classification result is used for representing a sample type label.
According to another aspect of the present application, there is provided a fully automatic specimen pretreatment method based on AI technology, comprising:
according to a preset acquisition scheme, acquiring a specimen by using a robot arm and an intelligent camera;
identifying a type of the specimen;
According to the type of the specimen, automatically fixing, dehydrating, embedding and slicing the specimen, and attaching the prepared specimen on a glass slide;
selecting a proper stain, spraying the stain onto the glass slide through an ink-jet printing head, and detecting the staining effect through an optical sensor;
scanning the stained glass slide into a digital image;
analyzing the digital image obtained by scanning and giving a corresponding report;
Wherein identifying the type of specimen comprises:
acquiring a specimen image acquired by a camera;
extracting shallow layer characteristics and deep layer characteristics of the sample image to obtain a sample image shallow layer characteristic map and a sample image deep layer characteristic map;
fusing the shallow feature map of the sample image and the deep feature map of the sample image to obtain a multi-scale feature map of the sample image;
determining a sample type of the specimen based on the sample image multi-scale feature map;
The method for obtaining the multi-scale feature map of the sample image by fusing the shallow feature map of the sample image and the deep feature map of the sample image comprises the following steps:
Inputting the specimen image deep feature map into a triple interaction focusing module to obtain an enhanced specimen image deep feature map;
inputting the enhanced specimen image deep feature map and the specimen image shallow feature map into a global average pooled attention fusion module to obtain the specimen image multi-scale feature map;
inputting the specimen image deep feature map into a triple interaction focusing module to obtain an enhanced specimen image deep feature map, wherein the method comprises the following steps of:
Constructing triple interaction features of the specimen image deep feature map to obtain a specimen image deep feature map after space dimension enhancement, a specimen image deep feature map after first interaction information enhancement and a specimen image deep feature map after second interaction information enhancement;
Fusing the sample image deep feature map after the space dimension enhancement, the sample image deep feature map after the first interaction information enhancement and the sample image deep feature map after the second interaction information enhancement to obtain the enhanced sample image deep feature map;
The method for constructing the triple interaction features of the specimen image deep feature map to obtain a space dimension enhanced specimen image deep feature map, a first interaction information enhanced specimen image deep feature map and a second interaction information enhanced specimen image deep feature map comprises the following steps:
Processing the specimen image deep feature map by using the following space dimension enhancement formula to obtain a specimen image deep feature map after the space dimension enhancement; wherein, the space dimension enhancement formula is:
$$W_s = \sigma\big(\mathrm{Conv}_{7\times 7}\big(\delta\big(\mathrm{Conv}_{1\times 1}(F)\big)\big)\big)$$
$$F_s = W_s \odot F$$
wherein $W_s$ is the spatial information weight matrix, $F$ is the specimen image deep feature map, $F_s$ is the specimen image deep feature map after spatial dimension enhancement, $\mathrm{Conv}_{1\times 1}(\cdot)$ represents a 1×1 convolution, $\mathrm{Conv}_{7\times 7}(\cdot)$ represents a 7×7 convolution, $\delta(\cdot)$ represents the ReLU function, $\sigma(\cdot)$ represents the Sigmoid function, and $\odot$ represents the Hadamard product;
Processing the specimen image deep feature map by using a first space and channel information interaction formula to obtain a first interaction information enhanced specimen image deep feature map; the first space and channel information interaction formula is:
$$W_{c1} = \sigma\big(\mathrm{Conv}_{7\times 7}\big(\delta\big(\mathrm{Conv}_{1\times 1}(\mathcal{T}_1(F))\big)\big)\big)$$
$$F_{c1} = \mathcal{T}_1^{-1}\big(W_{c1} \odot \mathcal{T}_1(F)\big)$$
wherein $W_{c1}$ is the first space and channel information interaction weight matrix, $F_{c1}$ is the first interaction information enhanced specimen image deep feature map, $\mathcal{T}_1(\cdot)$ and $\mathcal{T}_1^{-1}(\cdot)$ represent a transposition of the feature map and its inverse, and the remaining symbols are defined as above;
Processing the specimen image deep feature map by using a second space and channel information interaction formula to obtain a second interaction information enhanced specimen image deep feature map; the second space and channel information interaction formula is:
$$W_{c2} = \sigma\big(\mathrm{Conv}_{7\times 7}\big(\delta\big(\mathrm{Conv}_{1\times 1}(\mathcal{T}_2(F))\big)\big)\big)$$
$$F_{c2} = \mathcal{T}_2^{-1}\big(W_{c2} \odot \mathcal{T}_2(F)\big)$$
wherein $W_{c2}$ is the second space and channel information interaction weight matrix, $F_{c2}$ is the second interaction information enhanced specimen image deep feature map, and $\mathcal{T}_2(\cdot)$ and $\mathcal{T}_2^{-1}(\cdot)$ represent a second transposition of the feature map and its inverse.
Compared with the prior art, the full-automatic specimen pretreatment system and method based on AI technology provided by the application use a deep-learning-based image processing technique and intelligent algorithms to perform multi-scale, multi-level feature extraction on the specimen image and to mine category information about the specimen, so that the specimen type is identified automatically from the implicit category feature information, providing stronger support and convenience for subsequent scientific research and experimental work.
Drawings
The above and other objects, features and advantages of the present application will become more apparent from the following detailed description of embodiments of the present application with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application, are incorporated in and constitute a part of this specification, and serve, together with the embodiments of the application, to illustrate the application rather than to limit it. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a block diagram of a fully automated specimen pretreatment system based on AI technology in accordance with an embodiment of the application;
FIG. 2 is a system architecture diagram of a fully automated specimen pretreatment system based on AI technology in accordance with an embodiment of the application;
FIG. 3 is a block diagram of a specimen type identification module in a fully automated specimen pretreatment system based on AI technology in accordance with an embodiment of the application;
FIG. 4 is a block diagram of a dual-layer feature extraction unit in a fully automated specimen pretreatment system based on AI technology in accordance with an embodiment of the application;
FIG. 5 is a block diagram of a multi-scale feature fusion unit in a fully automated specimen pre-processing system based on AI technology in accordance with an embodiment of the application;
FIG. 6 is a block diagram of a feature interaction focus enhancement subunit in a fully automated specimen pretreatment system based on AI technology in accordance with an embodiment of the application;
FIG. 7 is a block diagram of a type identification unit in a fully automated specimen pretreatment system based on AI technology in accordance with an embodiment of the application;
Fig. 8 is a flowchart of a fully automatic specimen pretreatment method based on AI technology according to an embodiment of the present application.
Detailed Description
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
As used in the specification and in the claims, the terms "a," "an," "the," and/or "said" do not denote the singular only and may include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
A flowchart is used in the present application to describe the operations performed by a system according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in order precisely. Rather, the various steps may be processed in reverse order or simultaneously, as desired. Also, other operations may be added to or removed from these processes.
During specimen processing, different types of biological tissues and specimen samples have different structures and chemical compositions, so different processing modes must be adopted to better protect the morphological and cellular structure of the samples and to ensure their integrity and reliability. In addition, different types of specimens absorb and take up stains differently, and a clearer, more accurate staining result can be obtained by selecting a staining scheme suited to the specimen type. That is, when performing the fixing, dehydrating, embedding, slicing, and staining of a specimen, the type of the specimen must be accurately judged and identified. Existing specimen type identification generally relies on manual judgment, which has the following problems: 1. Manual identification is subjective and subject to individual differences; different operators may make different judgments, leading to inconsistent results. 2. Manual identification requires a lot of time and manpower and is inefficient for large-scale specimen processing. Thus, an optimized solution is desired.
In the technical scheme of the application, a full-automatic specimen pretreatment system based on an AI technology is provided. Fig. 1 is a block diagram of a fully automated specimen pretreatment system based on AI technology in accordance with an embodiment of the present application. Fig. 2 is a system architecture diagram of a fully automated specimen pretreatment system based on AI technology according to an embodiment of the present application. As shown in fig. 1 and 2, a fully automated specimen pretreatment system 300 based on AI technology according to an embodiment of the present application includes: the specimen collection module 310 is configured to collect a specimen using the robot arm and the intelligent camera according to a preset collection scheme; a specimen-type identifying module 320 for identifying a type of the specimen; a specimen preparation module 330 for automatically performing fixing, dehydrating, embedding and slicing processes of the specimen according to the type of the specimen, and attaching the prepared specimen to a slide glass; a specimen staining module 340 for selecting a suitable stain, spraying the stain onto the slide by an inkjet printhead, and detecting a staining effect by an optical sensor; a specimen scanning module 350 for scanning the stained slide into a digital image; the specimen analysis module 360 is configured to analyze the scanned digital image and give a corresponding report.
In particular, the specimen collection module 310 is configured to collect a specimen using a robotic arm and a smart camera according to a preset collection scheme. It should be understood that accurate location and control can be realized by utilizing the robot arm and the intelligent camera to collect the sample, so that the accurate position and angle of the sample in the collection process are ensured, the working efficiency is improved, and the time and labor intensity of manual operation are reduced.
In particular, the specimen type identification module 320 is configured to identify the type of the specimen. In particular, in one specific example of the present application, as shown in fig. 3, the specimen type recognition module 320 includes: a specimen image acquisition unit 321 for acquiring a specimen image acquired by the camera; a dual-layer feature extraction unit 322, configured to extract shallow features and deep features of the sample image to obtain a shallow feature map of the sample image and a deep feature map of the sample image; a multi-scale feature fusion unit 323, configured to fuse the shallow feature map of the sample image and the deep feature map of the sample image to obtain a multi-scale feature map of the sample image; a type recognition unit 324 for determining a sample type of the specimen based on the sample image multi-scale feature map.
Specifically, the specimen image acquisition unit 321 is configured to acquire a specimen image acquired by a camera. The specimen image refers to a visual representation of a specimen captured by a camera, and can be an image of biological tissue, cells, microstructure or other samples used in scientific experiments or medical researches.
Specifically, the dual-layer feature extraction unit 322 is configured to extract shallow features and deep features of the sample image to obtain a shallow feature map of the sample image and a deep feature map of the sample image. In particular, in one specific example of the present application, as shown in fig. 4, the dual-layer feature extraction unit 322 includes: an image compensation subunit 3221, configured to perform brightness component compensation on the specimen image to obtain a brightness-compensated specimen image; and an image feature extraction subunit 3222, configured to perform image feature extraction on the brightness-compensated sample image by using a deep learning network model to obtain the shallow feature map and the deep feature map of the sample image.
More specifically, the image compensation subunit 3221 is configured to perform brightness component compensation on the specimen image to obtain a brightness-compensated specimen image. Considering that the sample image may be affected by environmental factors in the process of acquisition, the brightness of different areas is different, so that partial areas are too dark or too bright, and subsequent observation and image analysis are affected. In particular, insufficient brightness may result in unclear details of the image, making certain features illegible. Accordingly, in the aspect of the present application, it is desirable to perform luminance component compensation on the sample image to adjust and correct the luminance component in the sample image, thereby obtaining a luminance-compensated sample image. Therefore, the brightness of the sample image after brightness compensation can be balanced through brightness component compensation, details in the sample image after brightness compensation are clearer, and the visual quality and the analyzability of the image are improved.
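By way of illustration, a minimal sketch of one possible brightness component compensation step is shown below. The patent does not prescribe a particular algorithm, so the choice of the LAB color space and CLAHE here is an assumption made only for this example.
```python
import cv2
import numpy as np

def compensate_brightness(specimen_bgr: np.ndarray) -> np.ndarray:
    """Equalize the luminance component of a specimen image.

    Assumption: the L channel of the LAB color space is treated as the
    "brightness component" and corrected with CLAHE; the patent does not
    specify a particular compensation method.
    """
    lab = cv2.cvtColor(specimen_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)  # balance over- and under-exposed regions
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
```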
More specifically, the image feature extraction subunit 3222 is configured to perform image feature extraction on the luminance compensated specimen image by using a deep learning network model to obtain the shallow feature map and the deep feature map of the specimen image. That is, in the technical scheme of the application, the specimen image after brightness compensation is passed through an image multi-scale feature extractor based on a pyramid network to obtain a specimen image shallow feature map and a specimen image deep feature map. The pyramid network is a deep learning network structure, and the design inspiration of the pyramid network is derived from a pyramid representation method in image processing. Pyramid representation refers to obtaining a series of images of different resolutions by downsampling or upsampling the image multiple times at different scales. Pyramid networks also reference this concept, introducing feature extraction processes at multiple scales into the network to learn and extract features at different scales. In an embodiment of the application, the pyramid network has a deep structure, and network layers of different levels can learn abstract features of different levels. By extracting features at different levels, the network can capture multi-level image features from low-level to high-level, enabling a more comprehensive and thorough understanding of image content. Thus, the depth structure of the pyramid network enables it to extract abstract features layer by layer, shallow features characterize the underlying information of the image, such as edge and texture information, while deep features contain higher-level semantic information, such as specimen shape and class, etc.
It should be noted that, in other specific examples of the present application, the shallow features and the deep features of the sample image may be extracted by other ways to obtain a shallow feature map of the sample image and a deep feature map of the sample image, for example: inputting the specimen image; selecting a pre-trained convolutional neural network model; extracting shallow features by selecting a convolution layer and a pooling layer of a model, wherein the features generally capture low-level features such as edges, textures and the like of an image so as to obtain a shallow feature map of the sample image; continuing to use the same or another pre-trained deep CNN model; deep features are extracted by selecting a model deeper convolutional layer, which typically captures higher level semantic information to arrive at the specimen image deep feature map.
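A minimal PyTorch-style sketch of a two-stage, pyramid-style extractor that returns both a shallow and a deep feature map is shown below; the layer widths, strides and depths are illustrative assumptions rather than the patent's actual backbone.
```python
import torch
import torch.nn as nn

class PyramidFeatureExtractor(nn.Module):
    """Illustrative two-output backbone: the early stage yields the shallow
    feature map (edges, texture), the later stage yields the deep feature map
    (higher-level semantics). Channel counts and strides are assumptions."""
    def __init__(self, in_ch: int = 3):
        super().__init__()
        self.stage1 = nn.Sequential(            # shallow stage
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.stage2 = nn.Sequential(            # deeper stages
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor):
        shallow = self.stage1(x)     # e.g. (B, 64, H/2, W/2)
        deep = self.stage2(shallow)  # e.g. (B, 256, H/8, W/8)
        return shallow, deep
```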
Specifically, the multi-scale feature fusion unit 323 is configured to fuse the shallow feature map of the sample image and the deep feature map of the sample image to obtain a multi-scale feature map of the sample image. In particular, in one specific example of the present application, as shown in fig. 5, the multi-scale feature fusion unit 323 includes: the feature interaction attention enhancing subunit 3231 is configured to input the specimen image deep feature map into a triple interaction attention module to obtain an enhanced specimen image deep feature map; the attention fusion subunit 3232 is configured to input the enhanced specimen image deep feature map and the specimen image shallow feature map into a global average pooled attention fusion module to obtain the specimen image multi-scale feature map.
More specifically, the feature interaction focus enhancer unit 3231 is configured to input the specimen image deep feature map into a triple interaction focus module to obtain an enhanced specimen image deep feature map. Considering that the sample image deep feature map is extracted by subjecting the sample image after brightness compensation to deep convolution encoding of the pyramid network-based image multi-scale feature extractor, wherein implicit associated features of the contained sample image are limited by the size of a convolution kernel. That is, the feature distribution of the deep feature map of the specimen image can only characterize and characterize the spatial local neighborhood associated features in the brightness compensated specimen image, and lacks more global and multidimensional feature interaction information. Therefore, in the technical scheme of the application, the specimen image deep feature map is further input into a triple interaction focusing module to obtain the enhanced specimen image deep feature map. The triple interaction focusing module can strengthen the representation of the specimen image deep feature map in the space dimension and simultaneously can also consider interaction between information in the space dimension and the channel dimension, so that the network can focus on key areas and structures in the image better by adjusting importance weights of different positions in the specimen image deep feature map, the specimen image deep feature map can carry out more effective information transfer and interaction in the space dimension and the channel dimension, and the characteristic representation capability and diversity of features are improved. In particular, in one specific example of the present application, as shown in fig. 6, the feature interactive attention enhancer unit 3231 includes: the triple interaction feature construction and extraction secondary subunit 32311 is used for constructing triple interaction features of the specimen image deep feature map to obtain a specimen image deep feature map after space dimension enhancement, a specimen image deep feature map after first interaction information enhancement and a specimen image deep feature map after second interaction information enhancement; and a fusion secondary sub-unit 32312, configured to fuse the sample image deep feature map after spatial dimension enhancement, the sample image deep feature map after first interaction information enhancement, and the sample image deep feature map after second interaction information enhancement to obtain the enhanced sample image deep feature map.
The triple interaction feature construction and extraction secondary subunit 32311 is configured to construct triple interaction features of the specimen image deep feature map to obtain a space-dimension enhanced specimen image deep feature map, a first interaction information enhanced specimen image deep feature map, and a second interaction information enhanced specimen image deep feature map. Specifically, in one specific example of the present application, the specimen image deep feature map is processed with the following spatial dimension enhancement formula to obtain the spatial dimension enhanced specimen image deep feature map; wherein, the space dimension enhancement formula is:
$$W_s = \sigma\big(\mathrm{Conv}_{7\times 7}\big(\delta\big(\mathrm{Conv}_{1\times 1}(F)\big)\big)\big)$$
$$F_s = W_s \odot F$$
wherein $W_s$ is the spatial information weight matrix, $F$ is the specimen image deep feature map, $F_s$ is the specimen image deep feature map after spatial dimension enhancement, $\mathrm{Conv}_{1\times 1}(\cdot)$ represents a 1×1 convolution, $\mathrm{Conv}_{7\times 7}(\cdot)$ represents a 7×7 convolution, $\delta(\cdot)$ represents the ReLU function, $\sigma(\cdot)$ represents the Sigmoid function, and $\odot$ represents the Hadamard product;
Processing the specimen image deep feature map by using a first space and channel information interaction formula to obtain a first interaction information enhanced specimen image deep feature map; the first space and channel information interaction formula is:
$$W_{c1} = \sigma\big(\mathrm{Conv}_{7\times 7}\big(\delta\big(\mathrm{Conv}_{1\times 1}(\mathcal{T}_1(F))\big)\big)\big)$$
$$F_{c1} = \mathcal{T}_1^{-1}\big(W_{c1} \odot \mathcal{T}_1(F)\big)$$
wherein $W_{c1}$ is the first space and channel information interaction weight matrix, $F_{c1}$ is the first interaction information enhanced specimen image deep feature map, $\mathcal{T}_1(\cdot)$ and $\mathcal{T}_1^{-1}(\cdot)$ represent a transposition of the feature map and its inverse, and the remaining symbols are defined as above;
Processing the specimen image deep feature map by using a second space and channel information interaction formula to obtain a second interaction information enhanced specimen image deep feature map; the second space and channel information interaction formula is:
$$W_{c2} = \sigma\big(\mathrm{Conv}_{7\times 7}\big(\delta\big(\mathrm{Conv}_{1\times 1}(\mathcal{T}_2(F))\big)\big)\big)$$
$$F_{c2} = \mathcal{T}_2^{-1}\big(W_{c2} \odot \mathcal{T}_2(F)\big)$$
wherein $W_{c2}$ is the second space and channel information interaction weight matrix, $F_{c2}$ is the second interaction information enhanced specimen image deep feature map, and $\mathcal{T}_2(\cdot)$ and $\mathcal{T}_2^{-1}(\cdot)$ represent a second transposition of the feature map and its inverse.
The fusion secondary sub-unit 32312 is configured to fuse the deep feature map of the specimen image after spatial dimension enhancement, the deep feature map of the specimen image after first interaction information enhancement, and the deep feature map of the specimen image after second interaction information enhancement to obtain the deep feature map of the enhanced specimen image. Specifically, in one specific example of the present application, the specimen image deep feature map after spatial dimension enhancement, the first specimen image deep feature map after interaction information enhancement, and the second specimen image deep feature map after interaction information enhancement are processed in the following fusion formula to obtain the enhanced specimen image deep feature map; wherein, the fusion formula is:
$$F_E = \mathrm{Concat}\big[F_s, F_{c1}, F_{c2}\big]$$
wherein $F_s$ is the specimen image deep feature map after spatial dimension enhancement, $F_{c1}$ is the first interaction information enhanced specimen image deep feature map, $F_{c2}$ is the second interaction information enhanced specimen image deep feature map, $F_E$ is the enhanced specimen image deep feature map, and $\mathrm{Concat}[\cdot]$ represents a cascade (concatenation) of feature maps.
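Assuming the reconstruction of the formulas above, a PyTorch sketch of the triple interaction focusing module might look as follows. The weighting branch Sigmoid(Conv7×7(ReLU(Conv1×1(·)))) follows the described structure, while the two concrete permutations used for the space-channel transpositions are assumptions made for this sketch.
```python
import torch
import torch.nn as nn

class TripleInteractionFocus(nn.Module):
    """Sketch of the triple interaction focusing module as reconstructed above.
    The exact permutations used for the two space-channel interactions are
    assumptions; here the (C, H) and (C, W) dimensions are swapped respectively."""
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        self.spatial_branch = self._weight_net(channels)
        self.inter1_branch = self._weight_net(height)   # operates after swapping C and H
        self.inter2_branch = self._weight_net(width)    # operates after swapping C and W

    @staticmethod
    def _weight_net(ch: int) -> nn.Sequential:
        # Sigmoid(Conv7x7(ReLU(Conv1x1(x)))) -> weight matrix in [0, 1]
        return nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, f: torch.Tensor) -> torch.Tensor:      # f: (B, C, H, W)
        f_s = self.spatial_branch(f) * f                      # spatial dimension enhancement
        t1 = f.permute(0, 2, 1, 3)                            # (B, H, C, W)
        f_c1 = (self.inter1_branch(t1) * t1).permute(0, 2, 1, 3)
        t2 = f.permute(0, 3, 2, 1)                            # (B, W, H, C)
        f_c2 = (self.inter2_branch(t2) * t2).permute(0, 3, 2, 1)
        return torch.cat([f_s, f_c1, f_c2], dim=1)            # channel-wise cascade
```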
It should be noted that, in other specific examples of the present application, the specimen image deep feature map may also be passed through a triple interaction focusing module in other manners to obtain an enhanced specimen image deep feature map, for example: input the specimen image deep feature map; such a module typically includes two key components, namely query/key/value computation and interactive attention weight computation; apply three linear transformations to the deep feature map to obtain query, key and value matrices; compute a similarity matrix between the query matrix and the key matrix; apply a Softmax operation to the similarity matrix to obtain an attention weight matrix; multiply the attention weight matrix by the value matrix to obtain a weighted feature matrix; and thereby obtain the enhanced specimen image deep feature map.
More specifically, the attention fusion subunit 3232 is configured to input the enhanced specimen image deep feature map and the specimen image shallow feature map into a global average pooled attention fusion module to obtain the specimen image multi-scale feature map. That is, in one specific example of the present application, the enhanced specimen image deep feature map and the specimen image shallow feature map are input into a global average pooled attention fusion module to obtain a specimen image multi-scale feature map. That is, feature information with different emphasis points and different depths is fused by the global average pooled attention fusion module, and attention mechanisms are introduced to enhance attention to important features. Specifically, the implementation process of the global average pooling attention fusion module is to perform global average pooling on the enhanced specimen image deep feature map serving as high-level features in a channel dimension, and at this time, the high-dimensional feature map is compressed into a vector representation, and the vector representation has a global receptive field of the enhanced specimen image deep feature map. And fusing the vector representation as attention information with the sample image shallow feature map to guide shallow information in the sample image shallow feature map to restore semantic category information, so as to obtain the sample image multi-scale feature map. In this way, the sample image multi-scale feature map will have a richer representation of the features. More specifically, inputting the enhanced specimen image deep feature map and the specimen image shallow feature map into a global average pooled attention fusion module to obtain the specimen image multi-scale feature map, comprising: carrying out global average pooling on the enhanced specimen image deep feature map along the channel dimension to obtain an attention feature vector; passing the attention feature vector through a full connection layer to obtain an attention coding feature vector; taking each characteristic value in the attention coding characteristic vector as a weight to weight and multiply the shallow characteristic image of the sample image to obtain an attention adjustment characteristic image; and carrying out position-by-position addition processing on the attention adjustment feature map and the enhanced specimen image deep feature map to obtain the sample image multi-scale feature map.
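A minimal PyTorch sketch of the global average pooling attention fusion described above is shown below (GAP over the enhanced deep feature map, a fully connected layer, channel-wise weighting of the shallow map, then position-wise addition). It assumes the two feature maps have already been brought to the same shape, which in practice would require an extra projection or upsampling step.
```python
import torch
import torch.nn as nn

class GAPAttentionFusion(nn.Module):
    """Sketch of the global average pooling attention fusion module.
    Assumes deep_enh and shallow both have shape (B, C, H, W)."""
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Linear(channels, channels)

    def forward(self, deep_enh: torch.Tensor, shallow: torch.Tensor) -> torch.Tensor:
        attn = deep_enh.mean(dim=(2, 3))              # global average pooling -> (B, C)
        attn = self.fc(attn)                          # attention coding feature vector
        weighted = shallow * attn[:, :, None, None]   # weight the shallow feature map
        return weighted + deep_enh                    # position-by-position addition
```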
It should be noted that, in other specific examples of the present application, the shallow feature map of the specimen image and the deep feature map of the specimen image may be fused in other manners to obtain a multi-scale feature map of the specimen image, for example: inputting the shallow feature map of the specimen image and the deep feature map of the specimen image; ensuring that the shallow feature map and the deep feature map have the same spatial scale; in a specific example, the shallow feature map and the deep feature map may be connected by a cascade fusion manner according to channel levels; the weight can be learned or manually set according to task requirements by carrying out weighted summation on the shallow feature map and the deep feature map; dynamically adjusting the importance of the shallow feature map and the deep feature map by using an attention mechanism; finally, the multi-scale characteristic diagram of the sample image is obtained.
Specifically, the type identifying unit 324 is configured to determine a sample type of the specimen based on the multi-scale feature map of the sample image. In particular, in one specific example of the present application, as shown in fig. 7, the type identifying unit 324 includes: the feature distribution corrector unit 3241 is configured to perform feature distribution optimization on a sample image multi-scale feature vector obtained by expanding the sample image multi-scale feature map to obtain an optimized sample image multi-scale feature vector; the sample type classification and identification subunit 3242 passes the optimized sample image multi-scale feature vector through a classifier to obtain a classification result, where the classification result is used to represent a sample type label.
More specifically, the feature distribution corrector unit 3241 is configured to perform feature distribution optimization on the sample image multi-scale feature vector obtained by expanding the sample image multi-scale feature map, so as to obtain an optimized sample image multi-scale feature vector. In the technical scheme of the application, the specimen image shallow feature map and the specimen image deep feature map express image semantic features of the brightness-compensated specimen image at different scales and depths of the pyramid network. After the specimen image deep feature map is input into the triple interaction focusing module, the resulting enhanced specimen image deep feature map undergoes space-channel interaction enhancement based on interaction attention over the spatial dimension within each feature matrix and the channel dimension across feature matrices, so that a significant difference in channel distribution pattern arises between the enhanced specimen image deep feature map and the specimen image shallow feature map. Consequently, after the enhanced specimen image deep feature map and the specimen image shallow feature map are input into the global average pooling attention fusion module, the obtained specimen image multi-scale feature map also shows a significant differential channel distribution along the channel dimension. This reduces the distribution integrity of the specimen image multi-scale feature map taken as a whole feature distribution and affects the class probability convergence of the classifier, that is, the accuracy of the classification result.
Therefore, the applicant optimizes the multi-scale feature vector of the sample image when the multi-scale feature vector of the sample image after the multi-scale feature map of the sample image is unfolded is subjected to classification iteration through a classifier, and the multi-scale feature vector is expressed as:
[The two optimization formulas appear as images in the original publication and are not reproduced here.]

wherein $V$ represents the sample image multi-scale feature vector; $v_{i}$ and $v_{j}$ represent the feature values at the $i$-th and $j$-th positions of the sample image multi-scale feature vector, respectively; $M_{1}$ represents a first intermediate matrix and $M_{2}$ a second intermediate matrix; $m_{1,(i,j)}$ and $m_{2,(i,j)}$ represent the feature values at position $(i,j)$ of the first and second intermediate matrices, respectively; $\oplus$ represents position-wise addition; $\otimes$ represents matrix multiplication; and $V'$ represents the optimized sample image multi-scale feature vector.
That is, the local-statistics-based dense information structuring of the sample image multi-scale feature vector $V$ is introduced as an external information source for retrieval enhancement of the feature vector, so as to avoid the information loss of $V$ caused by a locally overflowing information distribution. An information-trusted response inference that preserves the local distribution group dimension is thus obtained for the optimized sample image multi-scale feature vector $V'$, which improves the reliable distribution response of $V'$ in the probability density space based on the discretized local feature distribution. This improves the convergence in the probability density space and, in turn, the training speed and the accuracy of the training result.
More specifically, the sample type classification and identification subunit 3242 is configured to pass the optimized sample image multi-scale feature vector through a classifier to obtain a classification result, where the classification result is used to represent a sample type label. A sample type label is an identifier used to describe the category or type of a sample. It distinguishes samples of different classes and helps the model identify and classify the input data. Sample type labels are typically drawn from a predefined set of categories, each category corresponding to a unique label. In this way, the semantic multi-scale features of the sample image expressed by the sample image multi-scale feature map are converted by the classifier into concrete category information, thereby realizing automatic classification and identification of samples. In a specific example, passing the optimized sample image multi-scale feature vector through the classifier to obtain the classification result includes: performing full-connection encoding on the optimized sample image multi-scale feature vector using a plurality of fully connected layers of the classifier to obtain an encoded classification feature vector; and passing the encoded classification feature vector through a Softmax classification function of the classifier to obtain the classification result. A minimal sketch of such a classifier head is given below.
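The sketch below shows the kind of classifier head described above: a few fully connected encoding layers followed by a Softmax output. The feature dimension of 1024 and the eight specimen classes are placeholder values assumed for the example.

```python
import torch
import torch.nn as nn

class SampleTypeClassifier(nn.Module):
    """Illustrative classifier head: full-connection encoding followed by a Softmax output."""
    def __init__(self, feature_dim: int = 1024, num_classes: int = 8):
        super().__init__()
        self.encode = nn.Sequential(          # "full-connection encoding" with several FC layers
            nn.Linear(feature_dim, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 128),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Linear(128, num_classes)

    def forward(self, multi_scale_vector: torch.Tensor) -> torch.Tensor:
        logits = self.head(self.encode(multi_scale_vector))
        return torch.softmax(logits, dim=-1)  # probability for each sample type label

# usage: the flattened multi-scale feature vector is mapped to a sample type label
probs = SampleTypeClassifier()(torch.randn(1, 1024))
predicted_label = probs.argmax(dim=-1)
```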
A classifier is a machine learning model or algorithm used to assign input data to different categories or labels. Classification is a supervised learning task: the classifier learns a mapping from input data to output categories.
Fully connected layers are one of the layer types commonly found in neural networks. In a fully connected layer, each neuron is connected to all neurons of the previous layer, and each connection carries a weight. Each neuron therefore receives the inputs from all neurons of the previous layer, computes a weighted sum of them, and passes the result to the next layer.
The Softmax classification function is a commonly used activation function for multi-class classification problems. It converts each element of the input vector into a probability value between 0 and 1, and the sum of these probability values equals 1. The Softmax function is typically used at the output layer of a neural network and is particularly suited to multi-class problems, because it maps the network output into a probability distribution over the individual classes. During training, the output of the Softmax function can be used to compute the loss function and to update the network parameters through the back propagation algorithm. Notably, the Softmax function does not change the relative ordering of the elements; it only normalizes them. In other words, it does not alter the characteristics of the input vector, but simply converts it into a probability distribution, as the short example below illustrates.
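A short, self-contained example of these properties (the input scores are arbitrary):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])  # [0.659, 0.242, 0.099]: order preserved, values in (0, 1)
print(round(sum(probs), 6))          # 1.0: the outputs form a probability distribution
```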
It should be noted that, in other specific examples of the present application, the sample type of the specimen may also be determined from the multi-scale feature map of the sample image in other manners, for example: take the multi-scale feature map of the sample image as input; apply a global pooling operation to the multi-scale feature map, summarizing the spatial dimensions to obtain a global feature representation for each channel; fuse this global representation with the original multi-scale feature map to preserve more spatial information; feed the fused features into fully connected layers for feature mapping and nonlinear transformation, with a final Softmax layer mapping the features to a probability distribution over the categories; run inference with the trained model on the multi-scale feature map of the sample to obtain this probability distribution; and finally determine the sample type from the distribution, typically by selecting the category with the highest probability as the predicted type.
In particular, the specimen preparation module 330 is configured to automatically perform the fixing, dehydrating, embedding and slicing processes of the specimen according to the type of the specimen, and to attach the prepared specimen to a glass slide. It should be appreciated that fixation maintains the integrity of the tissue structure and prevents biomolecules in the tissue from being destroyed or lost during processing; dehydration reduces the moisture content of the tissue, which facilitates the subsequent embedding and slicing treatments; embedding protects the tissue structure so that it can be cut accurately into sections in a microtome; slicing divides the embedded tissue specimen into sections suitable for microscopic examination and analysis; and finally, the sections are mounted and fixed on glass slides for microscopic examination.
In particular, the specimen staining module 340 is configured to select a suitable stain, spray the stain onto the slide with an inkjet printhead, and detect the staining effect with an optical sensor. It will be appreciated that selecting a suitable stain helps highlight particular cells or components in the tissue structure, making them easier to observe and analyze; the inkjet printhead can spray the stain onto the slide precisely, ensuring that the stain covers the slide surface uniformly while controlling the amount and location of the sprayed stain; and the optical sensor can detect the staining effect on the slide, including staining uniformity, color saturation and sharpness.
In particular, the specimen scanning module 350 is configured to scan the stained slide into a digital image. It should be appreciated that by scanning the stained slide into a digital image, digital preservation of the sample, high quality image acquisition, remote diagnosis and computer-aided analysis can be achieved, thereby improving the accuracy and efficiency of scientific and experimental work.
In particular, the specimen analysis module 360 is configured to analyze the scanned digital image and provide a corresponding report. It should be appreciated that by analyzing the scanned digital images and giving corresponding reports, important support can be provided for medical diagnosis, research and education, helping medical teams to better understand pathology information and make accurate diagnosis and treatment decisions.
As described above, the AI-technology-based fully automatic specimen pretreatment system 300 according to the embodiment of the present application can be implemented in various wireless terminals, such as a server having an AI-technology-based fully automatic specimen pretreatment algorithm. In one possible implementation, the AI-technology-based fully automatic specimen pretreatment system 300 according to the embodiment of the present application may be integrated into the wireless terminal as a software module and/or a hardware module. For example, the system 300 may be a software module in the operating system of the wireless terminal, or may be an application developed for the wireless terminal; of course, the system 300 may equally be one of many hardware modules of the wireless terminal.
Alternatively, in another example, the AI-technology-based fully automatic specimen pretreatment system 300 and the wireless terminal may be separate devices, in which case the system 300 is connected to the wireless terminal through a wired and/or wireless network and transmits the interaction information in an agreed data format.
Further, a full-automatic specimen pretreatment method based on AI technology is provided.
Fig. 8 is a flowchart of a fully automatic specimen pretreatment method based on AI technology according to an embodiment of the present application. As shown in fig. 8, the fully automatic specimen pretreatment method based on AI technology according to the embodiment of the application includes the steps of: S1, collecting a specimen by using a robot arm and an intelligent camera according to a preset collection scheme; S2, identifying the type of the specimen; S3, according to the type of the specimen, automatically fixing, dehydrating, embedding and slicing the specimen, and attaching the prepared specimen to a glass slide; S4, selecting a suitable stain, spraying the stain onto the glass slide through an inkjet printhead, and detecting the staining effect through an optical sensor; S5, scanning the stained glass slide into a digital image; S6, analyzing the scanned digital image and giving a corresponding report. A skeleton of this workflow is sketched below.
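The skeleton below arranges the six steps S1-S6 as a single pipeline. Every component interface (the robot arm, camera, type-identification model, stainer, scanner and analyzer, and the chained specimen-handling calls) is a hypothetical abstraction introduced only to show how the steps compose; it is not an API defined by the present application.

```python
from dataclasses import dataclass

@dataclass
class PretreatmentReport:
    specimen_type: str
    summary: str

class SpecimenPretreatmentPipeline:
    """Skeleton of the S1-S6 workflow with hardware and models injected as components."""
    def __init__(self, robot_arm, camera, type_model, stainer, scanner, analyzer):
        self.robot_arm, self.camera = robot_arm, camera
        self.type_model, self.stainer = type_model, stainer
        self.scanner, self.analyzer = scanner, analyzer

    def run(self, collection_plan) -> PretreatmentReport:
        specimen = self.robot_arm.collect(collection_plan)       # S1: collect per the preset scheme
        image = self.camera.capture(specimen)
        specimen_type = self.type_model.identify(image)          # S2: identify the specimen type
        slide = specimen.fix().dehydrate().embed().section()     # S3: fix, dehydrate, embed, slice
        self.stainer.stain(slide, specimen_type)                 # S4: inkjet staining + optical check
        digital_image = self.scanner.scan(slide)                 # S5: scan the stained slide
        summary = self.analyzer.analyze(digital_image)           # S6: analyze and report
        return PretreatmentReport(specimen_type, summary)
```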
In summary, the fully automatic specimen pretreatment method based on AI technology according to the embodiment of the application has been described. It extracts multi-scale and multi-level features of the specimen image using a deep-learning-based image processing technique and an intelligent algorithm, and extracts category information about the specimen from these features, thereby automatically identifying the specimen type from the implicit category feature information and providing stronger support and greater convenience for subsequent scientific research and experimental work.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (7)
1. A fully automated specimen pretreatment system based on AI technology, comprising: the sample acquisition module is used for acquiring a sample by using the robot arm and the intelligent camera according to a preset acquisition scheme; a specimen type identification module for identifying a type of the specimen; the sample preparation module is used for automatically carrying out fixing, dehydrating, embedding and slicing treatment on the sample according to the type of the sample and pasting the prepared sample on a glass slide; the specimen staining module is used for selecting a proper stain, spraying the stain on the glass slide through the ink-jet printing head and detecting the staining effect through the optical sensor; the specimen scanning module is used for scanning the stained glass slide into a digital image; the sample analysis module is used for analyzing the digital image obtained by scanning and giving a corresponding report;
characterized in that the specimen type identification module comprises:
a specimen image acquisition unit for acquiring a specimen image acquired by the camera;
The dual-layer feature extraction unit is used for extracting shallow features and deep features of the sample image to obtain a sample image shallow feature map and a sample image deep feature map;
The multi-scale feature fusion unit is used for fusing the shallow feature map of the sample image and the deep feature map of the sample image to obtain a multi-scale feature map of the sample image;
A type identification unit for determining a sample type of the specimen based on the sample image multi-scale feature map;
wherein the multi-scale feature fusion unit comprises:
The feature interaction focusing enhancement subunit is used for inputting the specimen image deep feature map into a triple interaction focusing module to obtain an enhanced specimen image deep feature map;
the attention fusion subunit is used for inputting the enhanced specimen image deep feature map and the specimen image shallow feature map into a global average pooling attention fusion module so as to obtain the specimen image multi-scale feature map;
Wherein the feature interaction focusing enhancement subunit comprises:
the triple interaction feature construction and extraction secondary subunit is used for constructing triple interaction features of the specimen image deep feature map to obtain a specimen image deep feature map after space dimension enhancement, a specimen image deep feature map after first interaction information enhancement and a specimen image deep feature map after second interaction information enhancement;
The fusion secondary subunit is used for fusing the sample image deep feature map after the space dimension enhancement, the sample image deep feature map after the first interaction information enhancement and the sample image deep feature map after the second interaction information enhancement to obtain the enhanced sample image deep feature map;
The triple interaction feature construction and extraction secondary subunit is used for:
Processing the specimen image deep feature map by using the following space dimension enhancement formula to obtain a specimen image deep feature map after the space dimension enhancement; wherein, the space dimension enhancement formula is:
$$W_{1}=\sigma\left(\mathrm{Conv}_{7\times 7}*\mathrm{ReLU}\left(\mathrm{Conv}_{1\times 1}*F\right)\right)$$

$$F_{1}=W_{1}\odot F$$

wherein $W_{1}$ is the spatial information weight matrix, $F$ is the specimen image deep feature map, $F_{1}$ is the specimen image deep feature map after enhancing the spatial dimension, $\mathrm{Conv}_{1\times 1}$ represents a $1\times 1$ convolution, $\mathrm{Conv}_{7\times 7}$ represents a $7\times 7$ convolution, $*$ represents the convolution operation, $\mathrm{ReLU}$ represents the ReLU function, $\sigma$ represents the Sigmoid function, and $\odot$ represents the Hadamard product;
Processing the specimen image deep feature map by using a first space and channel information interaction formula to obtain a first interaction information enhanced specimen image deep feature map; the first space and channel information interaction formula is as follows:
$$W_{2}=\sigma\left(\mathrm{Conv}_{7\times 7}*\mathrm{ReLU}\left(\mathrm{Conv}_{1\times 1}*P_{1}(F)\right)\right)$$

$$F_{2}=P_{1}^{-1}\left(W_{2}\odot P_{1}(F)\right)$$

wherein $W_{2}$ is the first space and channel information interaction weight matrix, $F$ is the specimen image deep feature map, $F_{2}$ is the first interaction information enhanced specimen image deep feature map, $\mathrm{Conv}_{1\times 1}$ represents a $1\times 1$ convolution, $\mathrm{Conv}_{7\times 7}$ represents a $7\times 7$ convolution, $*$ represents the convolution operation, $\mathrm{ReLU}$ represents the ReLU function, $\sigma$ represents the Sigmoid function, $\odot$ represents the Hadamard product, and $P_{1}$ and $P_{1}^{-1}$ represent a transposition process of the feature map and its inverse;
Processing the specimen image deep feature map by using a second space and channel information interaction formula to obtain a second interaction information enhanced specimen image deep feature map; the second space and channel information interaction formula is as follows:
$$W_{3}=\sigma\left(\mathrm{Conv}_{7\times 7}*\mathrm{ReLU}\left(\mathrm{Conv}_{1\times 1}*P_{2}(F)\right)\right)$$

$$F_{3}=P_{2}^{-1}\left(W_{3}\odot P_{2}(F)\right)$$

wherein $W_{3}$ is the second space and channel information interaction weight matrix, $F$ is the specimen image deep feature map, $F_{3}$ is the second interaction information enhanced specimen image deep feature map, $\mathrm{Conv}_{1\times 1}$ represents a $1\times 1$ convolution, $\mathrm{Conv}_{7\times 7}$ represents a $7\times 7$ convolution, $*$ represents the convolution operation, $\mathrm{ReLU}$ represents the ReLU function, $\sigma$ represents the Sigmoid function, $\odot$ represents the Hadamard product, and $P_{2}$ and $P_{2}^{-1}$ represent a transposition process of the feature map and its inverse.
2. The AI-technology-based fully automated specimen pretreatment system of claim 1, wherein the dual-layer feature extraction unit comprises:
The image compensation subunit is used for carrying out brightness component compensation on the specimen image to obtain a brightness compensated specimen image;
And the image feature extraction subunit is used for extracting image features of the specimen image subjected to brightness compensation by using a deep learning network model so as to obtain a shallow feature map of the specimen image and a deep feature map of the specimen image.
3. The AI-technology-based fully automated specimen pretreatment system of claim 2, wherein the image feature extraction subunit is configured to:
and passing the brightness compensated sample image through an image multi-scale feature extractor based on a pyramid network to obtain a shallow feature map of the sample image and a deep feature map of the sample image.
4. The AI-technology-based fully automated specimen pretreatment system of claim 3, wherein the fusion secondary subunit is configured to:
processing the sample image deep feature map after the space dimension enhancement, the sample image deep feature map after the first interaction information enhancement and the sample image deep feature map after the second interaction information enhancement by using the following fusion formula to obtain the enhanced sample image deep feature map; wherein, the fusion formula is:
$$F_{e}=\mathrm{Concat}\left[F_{1},F_{2},F_{3}\right]$$

wherein $F_{1}$ is the specimen image deep feature map after enhancing the spatial dimension, $F_{2}$ is the first interaction information enhanced specimen image deep feature map, $F_{3}$ is the second interaction information enhanced specimen image deep feature map, $F_{e}$ is the enhanced specimen image deep feature map, and $\mathrm{Concat}[\cdot]$ represents cascading of feature maps.
5. The AI-technology-based fully automated specimen pretreatment system of claim 4, wherein the attention fusion subunit is configured to:
Carrying out global average pooling on the enhanced specimen image deep feature map along the channel dimension to obtain an attention feature vector;
Passing the attention feature vector through a full connection layer to obtain an attention coding feature vector;
Taking each characteristic value in the attention coding characteristic vector as a weight to weight and multiply the shallow characteristic image of the sample image to obtain an attention adjustment characteristic image;
And carrying out position-by-position addition processing on the attention adjustment feature map and the enhanced specimen image deep feature map to obtain the sample image multi-scale feature map.
6. The AI-technology-based fully automated specimen pretreatment system of claim 5, wherein the type recognition unit comprises:
The feature distribution correction subunit is used for carrying out feature distribution optimization on the sample image multi-scale feature vector obtained by expanding the sample image multi-scale feature map so as to obtain an optimized sample image multi-scale feature vector; and
The sample type classification and identification subunit is used for passing the optimized sample image multi-scale feature vector through a classifier to obtain a classification result, wherein the classification result is used for representing a sample type label.
7. A full-automatic specimen pretreatment method based on AI technology comprises the following steps:
according to a preset acquisition scheme, acquiring a specimen by using a robot arm and an intelligent camera;
identifying a type of the specimen;
According to the type of the specimen, automatically fixing, dehydrating, embedding and slicing the specimen, and attaching the prepared specimen on a glass slide;
selecting a suitable stain, spraying the stain onto the glass slide through an inkjet printhead, and detecting the staining effect through an optical sensor;
scanning the stained glass slide into a digital image;
analyzing the digital image obtained by scanning and giving a corresponding report;
wherein identifying the type of specimen comprises:
acquiring a specimen image acquired by a camera;
extracting shallow layer characteristics and deep layer characteristics of the sample image to obtain a sample image shallow layer characteristic map and a sample image deep layer characteristic map;
fusing the shallow feature map of the sample image and the deep feature map of the sample image to obtain a multi-scale feature map of the sample image;
determining a sample type of the specimen based on the sample image multi-scale feature map;
The method for obtaining the multi-scale feature map of the sample image by fusing the shallow feature map of the sample image and the deep feature map of the sample image comprises the following steps:
Inputting the specimen image deep feature map into a triple interaction focusing module to obtain an enhanced specimen image deep feature map;
inputting the enhanced specimen image deep feature map and the specimen image shallow feature map into a global average pooled attention fusion module to obtain the specimen image multi-scale feature map;
inputting the specimen image deep feature map into a triple interaction focusing module to obtain an enhanced specimen image deep feature map, wherein the method comprises the following steps of:
Constructing triple interaction features of the specimen image deep feature map to obtain a specimen image deep feature map after space dimension enhancement, a specimen image deep feature map after first interaction information enhancement and a specimen image deep feature map after second interaction information enhancement;
Fusing the sample image deep feature map after the space dimension enhancement, the sample image deep feature map after the first interaction information enhancement and the sample image deep feature map after the second interaction information enhancement to obtain the enhanced sample image deep feature map;
The method for constructing the triple interaction features of the specimen image deep feature map to obtain a space dimension enhanced specimen image deep feature map, a first interaction information enhanced specimen image deep feature map and a second interaction information enhanced specimen image deep feature map comprises the following steps:
Processing the specimen image deep feature map by using the following space dimension enhancement formula to obtain a specimen image deep feature map after the space dimension enhancement; wherein, the space dimension enhancement formula is:
$$W_{1}=\sigma\left(\mathrm{Conv}_{7\times 7}*\mathrm{ReLU}\left(\mathrm{Conv}_{1\times 1}*F\right)\right)$$

$$F_{1}=W_{1}\odot F$$

wherein $W_{1}$ is the spatial information weight matrix, $F$ is the specimen image deep feature map, $F_{1}$ is the specimen image deep feature map after enhancing the spatial dimension, $\mathrm{Conv}_{1\times 1}$ represents a $1\times 1$ convolution, $\mathrm{Conv}_{7\times 7}$ represents a $7\times 7$ convolution, $*$ represents the convolution operation, $\mathrm{ReLU}$ represents the ReLU function, $\sigma$ represents the Sigmoid function, and $\odot$ represents the Hadamard product;
Processing the specimen image deep feature map by using a first space and channel information interaction formula to obtain a first interaction information enhanced specimen image deep feature map; the first space and channel information interaction formula is as follows:
$$W_{2}=\sigma\left(\mathrm{Conv}_{7\times 7}*\mathrm{ReLU}\left(\mathrm{Conv}_{1\times 1}*P_{1}(F)\right)\right)$$

$$F_{2}=P_{1}^{-1}\left(W_{2}\odot P_{1}(F)\right)$$

wherein $W_{2}$ is the first space and channel information interaction weight matrix, $F$ is the specimen image deep feature map, $F_{2}$ is the first interaction information enhanced specimen image deep feature map, $\mathrm{Conv}_{1\times 1}$ represents a $1\times 1$ convolution, $\mathrm{Conv}_{7\times 7}$ represents a $7\times 7$ convolution, $*$ represents the convolution operation, $\mathrm{ReLU}$ represents the ReLU function, $\sigma$ represents the Sigmoid function, $\odot$ represents the Hadamard product, and $P_{1}$ and $P_{1}^{-1}$ represent a transposition process of the feature map and its inverse;
Processing the specimen image deep feature map by using a second space and channel information interaction formula to obtain a second interaction information enhanced specimen image deep feature map; the second space and channel information interaction formula is as follows:
$$W_{3}=\sigma\left(\mathrm{Conv}_{7\times 7}*\mathrm{ReLU}\left(\mathrm{Conv}_{1\times 1}*P_{2}(F)\right)\right)$$

$$F_{3}=P_{2}^{-1}\left(W_{3}\odot P_{2}(F)\right)$$

wherein $W_{3}$ is the second space and channel information interaction weight matrix, $F$ is the specimen image deep feature map, $F_{3}$ is the second interaction information enhanced specimen image deep feature map, $\mathrm{Conv}_{1\times 1}$ represents a $1\times 1$ convolution, $\mathrm{Conv}_{7\times 7}$ represents a $7\times 7$ convolution, $*$ represents the convolution operation, $\mathrm{ReLU}$ represents the ReLU function, $\sigma$ represents the Sigmoid function, $\odot$ represents the Hadamard product, and $P_{2}$ and $P_{2}^{-1}$ represent a transposition process of the feature map and its inverse.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410444012.7A CN118052814B (en) | 2024-04-15 | 2024-04-15 | AI technology-based full-automatic specimen pretreatment system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118052814A (en) | 2024-05-17
CN118052814B (en) | 2024-06-14
Family
ID=91046787
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410444012.7A Active CN118052814B (en) | 2024-04-15 | 2024-04-15 | AI technology-based full-automatic specimen pretreatment system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118052814B (en) |
Also Published As
Publication number | Publication date |
---|---|
CN118052814A (en) | 2024-05-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||