CN117809030A - Breast cancer CT image identification and segmentation method based on artificial neural network


Info

Publication number
CN117809030A
Authority
CN
China
Prior art keywords
segmentation
breast cancer
image
neural network
artificial neural
Prior art date
Legal status
Pending
Application number
CN202311776787.6A
Other languages
Chinese (zh)
Inventor
张飒飒
赵峰榕
刘岩松
安逸飞
徐荣琪
李万湖
郑营营
郭克刚
郭庆泽
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University
Priority to CN202311776787.6A
Publication of CN117809030A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of medical image processing, and particularly relates to a method for identifying and segmenting lesion areas in breast cancer CT images. A breast cancer CT image identification and segmentation method based on an artificial neural network comprises the following steps: S1, acquiring CT images of breast cancer patients and preprocessing them; S2, inputting the preprocessed CT images into a pre-trained SAM model to obtain a pre-segmentation result; S3, concatenating the pre-segmentation result with the original images and inputting the concatenated images as a data set into a pre-trained U-Net segmentation model to obtain a mask map of the lesion areas. The method uses artificial neural network models to obtain the mask map of the lesion areas in CT images of breast cancer patients, and offers smooth region segmentation, high segmentation accuracy and high segmentation speed.

Description

Breast cancer CT image identification and segmentation method based on artificial neural network
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a method for identifying and segmenting lesion areas in breast cancer CT images.
Background
Breast cancer is one of the most common cancers today. Statistics show that about 2.3 million women worldwide were diagnosed with breast cancer in 2020, accounting for roughly one fifth of all newly diagnosed cancer cases, and about 685,000 of them died. By the end of 2020, a total of about 7.8 million women worldwide had been diagnosed with breast cancer within the previous five years. According to the latest national cancer report of China, published in 2019, breast cancer has the highest incidence among cancers in Chinese women and is one of the malignant tumors that most seriously affect women's health. Research on the diagnosis and treatment of breast cancer is therefore particularly important.
Over the past several decades, various medical imaging techniques have been widely used for the early detection, diagnosis and treatment of cancer, and computed tomography (CT) is one of the most important imaging modalities for breast cancer diagnosis. To diagnose whether a patient has cancer and to plan subsequent treatment, the lesion area in the medical image must be delineated accurately. Clinically, the interpretation and delineation of medical images is mainly performed by experts such as imaging physicians, but the process is very time consuming and highly dependent on the physician's experience.
In recent years, with the development of artificial intelligence, deep learning algorithms have played an increasingly important role in intelligent medicine, and experts at home and abroad have carried out extensive research on artificial-neural-network-based auxiliary diagnosis of breast cancer. Replacing the manual reading of medical images with computer-aided identification and marking of lesion areas can assist physicians in diagnosing and treating breast cancer, and is therefore of great significance.
Therefore, the deep integration of artificial neural networks with breast cancer CT imaging is a direction worth researching in the future.
Disclosure of Invention
To overcome the defects and shortcomings of the prior art, the invention provides a breast cancer CT image identification and segmentation method based on an artificial neural network, which identifies and segments the lesion areas (breast tumors and axillary lymph nodes) in breast cancer CT images. The method offers smooth region segmentation, high segmentation accuracy and high segmentation speed.
To solve the above technical problems, the invention adopts the following technical scheme. A breast cancer CT image identification and segmentation method based on an artificial neural network comprises the following steps:
S1, acquiring CT images of breast cancer patients and preprocessing them;
s2, inputting the preprocessed CT image into a pre-trained SAM model to obtain a pre-segmentation result;
S3, concatenating the pre-segmentation result with the original images and inputting the concatenated images as a data set into a pre-trained U-Net segmentation model to obtain a mask map of the lesion areas.
Further, in step S1, the specific preprocessing method is as follows: converting the patient's breast cancer CT images from NRRD format to PNG format, and then converting the PNG images to three-channel RGB NPY format.
Further, the specific method of S3 is as follows: concatenating the pre-segmentation result with the three-channel RGB NPY CT images as a fourth channel in addition to R, G and B.
Specifically, the original three-channel NPY CT images have size N×W×H×3 and the concatenated images have size N×W×H×4, where N is the number of images, W the image width and H the image height; 3 denotes the depth of each pixel, i.e., the R, G and B channels, and 4 denotes the depth of each pixel after the pre-segmentation result is appended as a fourth channel. The concatenated images therefore contain both the original image information and the pre-segmentation information. Finally, the concatenated data are input into the precise segmentation network to obtain a segmentation mask of the background and the lesion areas.
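By way of illustration, the following NumPy sketch performs the fourth-channel concatenation just described; the file names are hypothetical assumptions, and only the array shapes follow the text:

```python
import numpy as np

# A minimal sketch of the fourth-channel concatenation; file names and the
# presence of ready-made NPY files are assumptions for illustration.
images = np.load("ct_rgb.npy")        # original CT images, size N x W x H x 3
pre_seg = np.load("sam_pre_seg.npy")  # SAM pre-segmentation labels, size N x W x H

# Give the pre-segmentation map an explicit channel axis, then concatenate
# along the channel dimension to obtain an N x W x H x 4 array.
pre_seg = pre_seg[..., np.newaxis].astype(images.dtype)
stacked = np.concatenate([images, pre_seg], axis=-1)

assert stacked.shape == images.shape[:3] + (4,)
np.save("ct_rgb_plus_preseg.npy", stacked)
```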
The invention uses an artificial neural network to identify and segment breast tumors, axillary lymph nodes and background areas in breast cancer CT images, and offers smooth region segmentation, high segmentation accuracy and high segmentation speed. The network comprises a pre-segmentation part and a precise segmentation part: SAM-based pre-segmentation captures global features at a preliminary stage, and concatenating the pre-segmentation result with the original image provides rich auxiliary information for the subsequent precise segmentation model, improving its ability to capture target boundaries, details and context and thereby the accuracy of the final segmentation result. The mask map obtained by the final segmentation can serve as an important reference for assisting physicians in diagnosis and subsequent treatment, greatly reducing the workload and working time of imaging physicians; it can also reduce, to a certain extent, the marking deviations caused by the limited experience of some physicians, improving diagnostic accuracy.
Drawings
FIG. 1 is a flow chart of the breast cancer CT image identification and segmentation method based on an artificial neural network;
FIG. 2 is a schematic diagram of the algorithm structure for breast cancer CT image identification and segmentation.
Detailed Description
In order that the invention may be readily understood, a more particular description thereof will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this disclosure will be thorough and complete.
This embodiment provides a breast cancer CT image identification and segmentation method based on an artificial neural network; the flow is shown in FIG. 1, and the specific steps are as follows:
1. data preprocessing
1. Acquire breast cancer CT images in NRRD format, and delineate and mark the two lesion areas, breast tumor and axillary lymph node, using the Slicer software.
2. Convert the breast cancer CT images from NRRD format to PNG format, then convert the PNG images to three-channel RGB NPY format, and store each separately (a conversion sketch follows this list). The three-channel NPY file has size N×W×H×3, where N is the number of images, H the image height, W the image width, and 3 the image depth (number of channels).
3. Convert the NRRD files marked with the lesion areas into NPY files to serve as the targets (labels) for training the model.
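A minimal sketch of the conversion chain of step 2 is given below, assuming the pynrrd and Pillow packages; the file paths and the simple min-max intensity scaling are illustrative assumptions rather than the exact preprocessing of the embodiment:

```python
import nrrd                     # pip install pynrrd
import numpy as np
from PIL import Image

# A sketch of the NRRD -> PNG -> three-channel NPY chain with hypothetical paths.
volume, _header = nrrd.read("patient_ct.nrrd")     # e.g. shape (W, H, num_slices)

slices = []
for k in range(volume.shape[-1]):
    ct = volume[..., k].astype(np.float32)
    # Scale to 8-bit for PNG; clinical code would apply a proper CT window.
    ct = (ct - ct.min()) / (ct.max() - ct.min() + 1e-8) * 255.0
    img = Image.fromarray(ct.astype(np.uint8)).convert("RGB")  # grayscale -> RGB
    img.save(f"slice_{k:04d}.png")                 # PNG stage
    slices.append(np.asarray(img))                 # one (W, H, 3) uint8 slice

# Three-channel NPY stage: one array of size N x W x H x 3.
np.save("patient_ct_rgb.npy", np.stack(slices, axis=0))
```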
2. Construction and training of the breast cancer CT image identification and segmentation model
1. Input the PNG-format breast cancer CT images into a pre-trained SAM model for segmentation, and store the resulting segmentation as an NPY file to serve as the pre-segmentation result. The pre-segmentation result is a segmentation map with the same size as the original image, in which each pixel is assigned a predicted class label. It has size N×W×H×1, where 1 denotes the single predicted class label assigned to each pixel.
2. Concatenate the pre-segmentation result with the three-channel RGB NPY CT images as a fourth channel in addition to R, G and B. Specifically, the original three-channel NPY CT images of size N×W×H×3 are concatenated with the pre-segmentation result of size N×W×H×1, giving concatenated images of size N×W×H×4. The NPY file obtained after concatenation is used as the data set input to the U-Net.
3. The U-Net consists of two parts, an encoder and a decoder. The encoder is composed of four downsampling modules, each consisting of two 3×3 convolution layers and a 2×2 pooling layer; the decoder consists of four upsampling modules and a 1×1 convolution layer, each upsampling module consisting of bilinear interpolation, a feature concatenation and two 3×3 convolution layers. Each convolution layer in the downsampling and upsampling modules is followed by a ReLU activation function:
ReLU(x) = max(0, x)
where x is the data input to the activation function.
4. Input the NPY file obtained after concatenation into the U-Net for training, then validate and test. If the requirements are not met, adjust the training parameters and continue training, validating and testing until a breast cancer CT image identification and segmentation model meeting the requirements is obtained (a minimal code sketch of this step follows this list). The loss function L used during model training is the cross-entropy loss, calculated as

L = -(1/N) Σ_i Σ_{c=1}^{M} y_ic · log(p_ic)

where M is the number of categories (in this example, M equals 3: breast tumor, axillary lymph node, background area); y_ic is an indicator (0 or 1) that takes the value 1 if the true class of sample i is c and 0 otherwise; and p_ic is the predicted probability that sample i belongs to category c.
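The following PyTorch sketch illustrates steps 3 and 4 above: a U-Net with four downsampling and four upsampling modules, a four-channel input, a three-class output and cross-entropy training. The channel widths, optimizer, learning rate and dummy batch are assumptions for illustration, not the exact implementation of the embodiment:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def double_conv(c_in, c_out):
    # Two 3x3 convolutions, each followed by ReLU, as described in step 3.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    # 4-channel input (R, G, B + pre-segmentation) and 3-class output
    # (background, breast tumor, axillary lymph node); widths are assumed.
    def __init__(self, in_ch=4, n_classes=3, widths=(64, 128, 256, 512, 1024)):
        super().__init__()
        self.downs = nn.ModuleList()
        prev = in_ch
        for w in widths:
            self.downs.append(double_conv(prev, w))
            prev = w
        self.pool = nn.MaxPool2d(2)  # the 2x2 pooling of each downsampling module
        # Each upsampling module: bilinear interpolation + skip concatenation
        # + two 3x3 convolutions.
        self.ups = nn.ModuleList(
            double_conv(w_up + w_skip, w_skip)
            for w_up, w_skip in zip(widths[:0:-1], widths[-2::-1])
        )
        self.head = nn.Conv2d(widths[0], n_classes, kernel_size=1)  # final 1x1 conv

    def forward(self, x):
        skips = []
        for down in self.downs[:-1]:      # four downsampling modules
            x = down(x)
            skips.append(x)
            x = self.pool(x)
        x = self.downs[-1](x)             # bottleneck
        for up, skip in zip(self.ups, reversed(skips)):  # four upsampling modules
            x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear",
                              align_corners=False)       # bilinear interpolation
            x = up(torch.cat([skip, x], dim=1))          # feature concatenation
        return self.head(x)               # per-pixel logits over the 3 classes

# Training skeleton; optimizer, learning rate and the dummy batch are assumptions.
model = UNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()         # the cross-entropy loss L of step 4

x = torch.randn(2, 4, 128, 128)           # concatenated input, N x 4 x W x H
y = torch.randint(0, 3, (2, 128, 128))    # per-pixel labels: 0, 1 or 2
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```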
3. Identification and segmentation of breast cancer CT images
1. Convert the acquired CT images of the breast cancer patient to PNG format and to three-channel RGB NPY format.
2. Input the PNG-format CT images into the SAM model for segmentation, and store the resulting segmentation as an NPY file to serve as the pre-segmentation result.
3. Taking the pre-segmentation result as a fourth channel in addition to R, G and B, concatenate the pre-segmentation NPY file with the RGB NPY file and input the result into the pre-trained U-Net model to obtain the segmentation mask delineation of the breast tumor, axillary lymph node and background areas (see the sketch after these steps).
4. This completes the identification and segmentation of the lesion areas in the breast cancer CT images.
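A sketch of this inference pipeline for a single slice is given below. It assumes Meta's segment-anything package and the UNet class from the earlier sketch; the checkpoint paths are hypothetical, and collapsing SAM's class-agnostic region proposals into a one-channel index map is one assumed way to obtain the single-label pre-segmentation described above:

```python
import numpy as np
import torch
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load SAM and the trained U-Net; checkpoint paths are hypothetical, and
# UNet refers to the class defined in the training sketch above.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

unet = UNet()                                        # 4-channel in, 3-class out
unet.load_state_dict(torch.load("unet_breast_ct.pth"))
unet.eval()

def segment_slice(rgb_slice: np.ndarray) -> np.ndarray:
    """rgb_slice: (H, W, 3) uint8 CT slice. Returns an (H, W) label map with
    0 = background, 1 = breast tumor, 2 = axillary lymph node."""
    # Pre-segmentation: collapse SAM's region proposals into one index map.
    pre_seg = np.zeros(rgb_slice.shape[:2], dtype=np.float32)
    for idx, m in enumerate(mask_generator.generate(rgb_slice), start=1):
        pre_seg[m["segmentation"]] = idx

    # Append the pre-segmentation map as the fourth channel and run the U-Net.
    stacked = np.concatenate([rgb_slice.astype(np.float32),
                              pre_seg[..., None]], axis=-1)
    x = torch.from_numpy(stacked).permute(2, 0, 1).unsqueeze(0)  # (1, 4, H, W)
    with torch.no_grad():
        logits = unet(x)
    return logits.argmax(dim=1).squeeze(0).numpy()   # per-pixel class labels
```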

Claims (3)

1. A breast cancer CT image identification and segmentation method based on an artificial neural network, characterized by comprising the following steps:
S1, acquiring CT images of breast cancer patients and preprocessing them;
s2, inputting the preprocessed CT image into a pre-trained SAM model to obtain a pre-segmentation result;
S3, concatenating the pre-segmentation result with the original images, and inputting the concatenated images as a data set into a pre-trained U-Net segmentation model to obtain a mask map of the lesion areas.
2. The breast cancer CT image identification and segmentation method based on an artificial neural network according to claim 1, characterized in that in step S1 the specific preprocessing method is as follows: converting the patient's breast cancer CT images from NRRD format to PNG format, and then converting the PNG images to three-channel RGB NPY format.
3. The breast cancer CT image identification and segmentation method based on an artificial neural network according to claim 1 or 2, characterized in that the specific method of S3 is as follows: concatenating the pre-segmentation result with the three-channel RGB NPY CT images as a fourth channel in addition to R, G and B, and inputting the concatenated data into the precise segmentation network to obtain segmentation masks of the background and the lesion areas.
CN202311776787.6A 2023-12-22 2023-12-22 Breast cancer CT image identification and segmentation method based on artificial neural network Pending CN117809030A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311776787.6A 2023-12-22 2023-12-22 Breast cancer CT image identification and segmentation method based on artificial neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311776787.6A 2023-12-22 2023-12-22 Breast cancer CT image identification and segmentation method based on artificial neural network

Publications (1)

Publication Number Publication Date
CN117809030A 2024-04-02

Family

ID=90424454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311776787.6A Pending CN117809030A (en) 2023-12-22 2023-12-22 Breast cancer CT image identification and segmentation method based on artificial neural network

Country Status (1)

Country Link
CN (1) CN117809030A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118587443A (en) * 2024-08-07 2024-09-03 之江实验室 Image segmentation method and device based on self-training and priori guidance



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination