CN111223157A - Ultrasonic CT sound velocity imaging method based on depth residual error network - Google Patents

Ultrasonic CT sound velocity imaging method based on depth residual error network Download PDF

Info

Publication number
CN111223157A
Authority
CN
China
Prior art keywords
ultrasonic
sound velocity
model
residual network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201911372315.8A
Other languages
Chinese (zh)
Inventor
屈晓磊 (Qu Xiaolei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Erxiang Foil Technology Co Ltd
Original Assignee
Suzhou Erxiang Foil Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Erxiang Foil Technology Co Ltd filed Critical Suzhou Erxiang Foil Technology Co Ltd
Priority to CN201911372315.8A priority Critical patent/CN111223157A/en
Publication of CN111223157A publication Critical patent/CN111223157A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00 Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/04 Analysing solids
    • G01N29/06 Visualisation of the interior, e.g. acoustic microscopy
    • G01N29/0654 Imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00 Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/44 Processing the detected response signal, e.g. electronic circuits specially adapted therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Pathology (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention discloses an ultrasonic CT sound velocity imaging method based on a deep residual network, belonging to the technical field of ultrasonic tomography in biomedical ultrasonics, and comprising the following steps: S1: immersing the target object in a water tank equipped with an annular ultrasonic transducer array and acquiring two groups of raw ultrasonic data, with and without the object; S2: extracting the signal onset points from the two groups of raw data and subtracting them correspondingly to obtain a transit-time difference map; S3: constructing a deep residual network model; S4: preparing training data and labels, feeding them into the network, training the model, and saving it; S5: inputting the transit-time difference map into the trained model to obtain the sound velocity image. The method reconstructs the projection data acquired by the measurement equipment with a deep residual network, improves the accuracy of the reconstruction result, and avoids cumbersome steps such as multiple iterations and regularization-parameter tuning.

Description

Ultrasonic CT sound velocity imaging method based on depth residual error network
Technical Field
The invention relates to the technical field of ultrasonic tomography in biomedical ultrasound, and in particular to an ultrasonic CT sound velocity imaging method based on a deep residual network.
Background
Breast cancer has become the most common cancer among women worldwide, and its incidence continues to rise, so effective screening and diagnosis are very important. Ultrasonic CT sound velocity imaging is well suited to early diagnosis of breast cancer because it is radiation-free, inexpensive, and able to produce quantitative three-dimensional images. The technique uses an annular ultrasonic transducer array as the measurement device, probes the structure of the target object by transmitting and receiving ultrasonic waves between transducers, and reconstructs an ultrasonic CT sound velocity image from the information contained in the ultrasonic signals received by the transducers.
At present there are two main families of reconstruction methods in ultrasonic CT sound velocity imaging: methods based on the full wave equation and methods based on ray tracing. Full-wave-equation methods can provide better image quality, but their procedures are complex, unstable, and computationally expensive. Ray-tracing methods are simpler and more stable, and their results can serve as the initial condition for the iterations of a full-wave-equation method. Two representative ray-tracing methods are the simultaneous algebraic reconstruction technique (SART) and Tikhonov-regularized reconstruction. SART is simple in principle and cheap to compute, but it resists noise poorly and its reconstructions are inaccurate; Tikhonov-regularized reconstruction is sensitive to the choice of the regularization parameter, its reconstructions are often over-smoothed, key details are easily lost, and its overall accuracy is low.
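As a concrete illustration of the Tikhonov approach discussed above (not part of the patent): regularized least squares minimizes ||Ax - b||^2 + lam*||x||^2, which is solved via the normal equations (A^T A + lam*I) x = A^T b. The sketch below shows the mechanics on a toy 2x2 system in pure Python; the matrix, data, and lam are arbitrary illustrative values.

```python
def tikhonov_solve_2x2(A, b, lam):
    """Solve (A^T A + lam*I) x = A^T b for a 2x2 system by Cramer's rule."""
    # Form M = A^T A + lam*I (symmetric) and rhs = A^T b.
    m00 = A[0][0] * A[0][0] + A[1][0] * A[1][0] + lam
    m01 = A[0][0] * A[0][1] + A[1][0] * A[1][1]
    m11 = A[0][1] * A[0][1] + A[1][1] * A[1][1] + lam
    r0 = A[0][0] * b[0] + A[1][0] * b[1]
    r1 = A[0][1] * b[0] + A[1][1] * b[1]
    det = m00 * m11 - m01 * m01
    return [(r0 * m11 - m01 * r1) / det, (m00 * r1 - m01 * r0) / det]

A = [[1.0, 0.5],
     [0.2, 1.0]]   # toy ray-path matrix (illustrative)
b = [1.2, 0.9]     # toy transit-time data (illustrative)
x = tikhonov_solve_2x2(A, b, lam=0.1)
```

Increasing lam shrinks the solution toward zero, which is exactly the over-smoothing effect the background section complains about.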
Disclosure of Invention
The invention aims to provide an ultrasonic CT sound velocity imaging method based on a deep residual network that addresses the problems identified in the background: it realizes an end-to-end imaging mode, avoids cumbersome steps of some traditional methods such as tuning iteration counts and regularization parameters, resists noise more strongly, and achieves higher accuracy.
To this end, the invention provides the following technical scheme: an ultrasonic CT sound velocity imaging method based on a deep residual network, comprising the following steps:
S1: immersing the target object in a water tank equipped with an annular ultrasonic transducer array, controlling the transducers to emit ultrasonic waves in sequence while the remaining transducers receive, to obtain the first group of raw data; then removing the object and repeating the emission sequence to obtain the second group of raw data, i.e., the received data when the medium is pure water;
S2: extracting the signal onset points from the two groups of raw ultrasonic data and subtracting them correspondingly to obtain a transit-time difference map;
S3: constructing a deep residual network model;
S4: preparing training data and labels, feeding them into the network, training the model, and saving it;
S5: inputting the transit-time difference map into the trained model to obtain the sound velocity image output by the network model.
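Steps S1-S2 can be sketched as follows. The patent does not specify the onset-picking rule, so a simple amplitude-threshold picker is assumed here; function names, the threshold, and the sampling interval are all illustrative.

```python
def first_arrival(signal, dt, threshold):
    """Return the time of the first sample whose amplitude exceeds threshold.

    signal: list of samples, dt: sampling interval in seconds.
    Returns None if no sample crosses the threshold.
    """
    for i, s in enumerate(signal):
        if abs(s) > threshold:
            return i * dt
    return None

def transit_time_difference(obj_signals, water_signals, dt, threshold):
    """Per transmit-receive pair: t_object - t_water (step S2).

    obj_signals / water_signals: nested lists [emitter][receiver][sample].
    """
    diff = []
    for row_obj, row_wat in zip(obj_signals, water_signals):
        diff.append([
            first_arrival(so, dt, threshold) - first_arrival(sw, dt, threshold)
            for so, sw in zip(row_obj, row_wat)
        ])
    return diff
```

The resulting matrix of arrival-time differences is the "transit-time difference map" fed to the network in S5.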
Preferably, at least three groups of target objects are used in step S1, and each group is measured in at least three comparison experiments.
Preferably, the residual network in step S3 comprises nine residual units, one up-sampling layer, one down-sampling layer, and two plain convolutional layers.
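The residual units mentioned above follow the standard identity-skip pattern y = x + F(x) of residual networks. The patent does not spell out the layers inside F, so this pure-Python sketch stands in a linear-ReLU-linear transform for F; weight matrices and sizes are illustrative.

```python
def relu(v):
    """Elementwise rectified linear unit."""
    return [max(0.0, x) for x in v]

def matvec(W, v):
    """Matrix-vector product on plain nested lists."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def residual_unit(x, W1, W2):
    """Identity-skip residual unit: y = x + W2 * relu(W1 * x).

    The skip connection means the unit only has to learn a residual
    correction to x, which is what eases training of very deep networks.
    """
    fx = matvec(W2, relu(matvec(W1, x)))
    return [xi + fi for xi, fi in zip(x, fx)]
```

With all weights zero the unit reduces to the identity map, which is the key property that lets nine such units be stacked without degrading the signal.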
Preferably, the training data in step S4 are prepared as follows:
S41: acquiring a number of magnetic resonance images;
S42: segmenting the acquired magnetic resonance images and assigning sound velocity regions according to the segmentation result;
S43: solving the assigned sound velocity regions with the finite element method to obtain simulated signals;
S44: extracting the first-arrival point of each simulated signal to obtain the transit time, and subtracting the transit time corresponding to a pure-water sound velocity region to obtain the transit-time difference map of the simulated signals;
S45: after the four preceding steps, taking the transit-time difference maps of the simulated signals as the training inputs and the corresponding sound velocity regions as their labels, then feeding the training set into the model. During training, the model parameters are updated by back-propagation until the model converges, and the model is saved.
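The back-propagation update in S45 is ordinary gradient descent, w <- w - lr * dL/dw. A minimal sketch (illustrative only, not the patent's network) for a single scalar weight fitting (input, label) pairs under squared loss:

```python
def train_scalar(w, pairs, lr, epochs):
    """Gradient descent on L = 0.5 * (w*x - y)^2 summed over (x, y) pairs."""
    for _ in range(epochs):
        for x, y in pairs:
            grad = (w * x - y) * x   # dL/dw by the chain rule
            w -= lr * grad           # back-propagation update
    return w

# Toy data where the label is always twice the input, so w should approach 2.
w = train_scalar(0.0, [(1.0, 2.0), (2.0, 4.0)], lr=0.05, epochs=200)
```

The real network applies the same rule to every convolutional weight, with the gradient propagated backward through the residual units layer by layer.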
Compared with the prior art, the invention has the following beneficial effects:
1) once the model has been trained, the method realizes end-to-end imaging: regularization parameters and iteration counts need not be tuned for each image, and the strong fitting capacity of deep learning yields a distribution closer to the true sound velocity;
2) the method reconstructs the projection data acquired by the measurement equipment with a deep residual network, improving the accuracy of the reconstruction and avoiding cumbersome steps such as multiple iterations and regularization-parameter tuning; the network automatically learns the mapping from the transit-time difference map to the sound velocity image, so stronger noise resistance and results closer to the true sound velocity distribution can be expected.
Drawings
FIG. 1 is a schematic view of an annular transducer of the present invention;
FIG. 2 is a schematic diagram of the transit time difference of the present invention;
FIG. 3 is a schematic diagram of the residual network model of the present invention;
FIG. 4 is a schematic diagram of a residual unit of the present invention;
FIG. 5 is a graph of the sound velocity results obtained from segmentation of a magnetic resonance image according to the present invention;
FIG. 6 is a schematic diagram of a simulation signal obtained by a finite element method according to the present invention;
FIG. 7 is a sound velocity image output by the network model of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the invention, not all of them; all other embodiments obtained by a person skilled in the art without creative effort fall within the scope of protection of the invention.
In the description of the invention, terms such as "upper", "lower", "inner", "outer", and "top/bottom" indicate orientations or positional relationships based on those shown in the drawings. They are used only for convenience and simplicity of description; they do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, and they should not be construed as limiting the invention. Likewise, the terms "first" and "second" are used for description only and do not indicate or imply relative importance.
In the description of the invention, unless otherwise explicitly specified or limited, terms such as "mounted", "disposed", "sleeved", and "connected" are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct or through an intermediate medium; or an internal connection between two elements. The specific meanings of these terms can be understood by those skilled in the art according to the specific circumstances.
Referring to fig. 1-7, the present invention provides a technical solution: an ultrasonic CT sound velocity imaging method based on a depth residual error network comprises the following steps:
S1: immersing the target object in a water tank equipped with an annular ultrasonic transducer array, controlling the transducers to emit ultrasonic waves in sequence while the remaining transducers receive, to obtain the first group of raw data; then removing the object and repeating the emission sequence to obtain the second group of raw data, i.e., the received data when the medium is pure water. At least three groups of target objects are used, and each group is measured in at least three comparison experiments, so that repeated comparisons rule out the influence of chance variation on the accuracy of the results. A schematic of the annular transducer array is shown in fig. 1;
S2: extracting the signal onset points from the two groups of raw ultrasonic data and subtracting them correspondingly to obtain the transit-time difference map; the result is shown in fig. 2;
S3: constructing the deep residual network model; the network consists of nine residual units, one up-sampling layer, one down-sampling layer, and two plain convolutional layers, with the overall structure shown in fig. 3 and the residual unit structure shown in fig. 4;
S4: preparing the training data and labels, feeding them into the network, training the model, and saving it. The training data are prepared as follows:
S41: acquiring a number of magnetic resonance images;
S42: segmenting the acquired magnetic resonance images and assigning sound velocity regions according to the segmentation result; the resulting sound velocity regions are shown in fig. 5;
S43: solving the assigned sound velocity regions with the finite element method to obtain simulated signals, as shown in fig. 6;
S44: extracting the first-arrival point of each simulated signal to obtain the transit time, and subtracting the transit time corresponding to a pure-water sound velocity region to obtain the transit-time difference map of the simulated signals;
S45: taking the transit-time difference maps of the simulated signals as the training inputs and the corresponding sound velocity regions as their labels, then feeding the training set into the model; during training, the model parameters are updated by back-propagation until the model converges, and the model is saved;
S5: inputting the measured transit-time difference map into the trained model to obtain the sound velocity image output by the network, shown in fig. 7, where a is the input to the deep residual network, b is the ideal output, and c is the actual network output.
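For a straight ray through piecewise-constant sound velocity regions, the transit times used to build the labels in S43-S44 take the form t = sum_i(ds_i / c_i), with the water-only reference using the speed of sound in water throughout. The sketch below uses that straight-ray simplification (the patent actually obtains signals from a finite-element solver); segment lengths and velocities are illustrative.

```python
def transit_time(segment_lengths, velocities):
    """Straight-ray transit time: sum of segment length / sound speed."""
    return sum(l / c for l, c in zip(segment_lengths, velocities))

def time_difference(segment_lengths, velocities, c_water=1500.0):
    """Transit-time difference relative to a pure-water path.

    c_water ~ 1500 m/s, the speed of sound in water; lengths in metres.
    Negative values mean the object path is faster than water.
    """
    water_time = sum(segment_lengths) / c_water
    return transit_time(segment_lengths, velocities) - water_time
```

Computing this difference for every emitter-receiver pair yields exactly the kind of transit-time difference map the network is trained on.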
These results show that end-to-end sound velocity imaging can be realized with a deep learning method, avoiding the cumbersome per-image steps of tuning regularization parameters and iteration counts; the imaging results exhibit stronger noise resistance and lie closer to the true sound velocity distribution.
While the fundamental principles, essential features, and advantages of the invention have been shown and described, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing exemplary embodiments and may be embodied in other specific forms without departing from its spirit or essential characteristics. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description; all changes that come within the meaning and range of equivalency of the claims are intended to be embraced therein, and no reference sign in the claims shall be construed as limiting the claim concerned.
Although embodiments of the invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions, and alterations can be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (4)

1. An ultrasonic CT sound velocity imaging method based on a deep residual network, characterized in that the method comprises the following steps:
S1: immersing the target object in a water tank equipped with an annular ultrasonic transducer array, controlling the transducers to emit ultrasonic waves in sequence while the remaining transducers receive, to obtain the first group of raw data; then removing the object and repeating the emission sequence to obtain the second group of raw data, i.e., the received data when the medium is pure water;
S2: extracting the signal onset points from the two groups of raw ultrasonic data and subtracting them correspondingly to obtain a transit-time difference map;
S3: constructing a deep residual network model;
S4: preparing training data and labels, feeding them into the network, training the model, and saving it;
S5: inputting the transit-time difference map into the trained model to obtain the sound velocity image output by the network model.
2. The method according to claim 1, characterized in that at least three groups of target objects are used in step S1, and each group is measured in at least three comparison experiments.
3. The method according to claim 1, characterized in that in step S3 the residual network consists of nine residual units, one up-sampling layer, one down-sampling layer, and two plain convolutional layers.
4. The method according to claim 1, characterized in that the training data in step S4 are prepared as follows:
S41: acquiring a number of magnetic resonance images;
S42: segmenting the acquired magnetic resonance images and assigning sound velocity regions according to the segmentation result;
S43: solving the assigned sound velocity regions with the finite element method to obtain simulated signals;
S44: extracting the first-arrival point of each simulated signal to obtain the transit time, and subtracting the transit time corresponding to a pure-water sound velocity region to obtain the transit-time difference map of the simulated signals;
S45: taking the transit-time difference maps of the simulated signals as the training inputs and the corresponding sound velocity regions as their labels, then feeding the training set into the model; during training, the model parameters are updated by back-propagation until the model converges, and the model is saved.
CN201911372315.8A (priority 2019-12-27, filed 2019-12-27): Ultrasonic CT sound velocity imaging method based on depth residual error network; status Withdrawn; published as CN111223157A.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911372315.8A CN111223157A (en) 2019-12-27 2019-12-27 Ultrasonic CT sound velocity imaging method based on depth residual error network

Publications (1)

Publication Number: CN111223157A; Publication Date: 2020-06-02

Family

ID=70826675

Country Status (1)

Country Link
CN (1) CN111223157A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112674794A (en) * 2020-12-21 2021-04-20 苏州二向箔科技有限公司 Ultrasonic CT sound velocity reconstruction method combining deep learning and Gihonov regularization inversion
CN112674794B (en) * 2020-12-21 2023-02-10 苏州二向箔科技有限公司 Ultrasonic CT sound velocity reconstruction method combining deep learning and Gihonov regularization inversion
CN116858943A (en) * 2023-02-03 2023-10-10 台州五标机械股份有限公司 Hollow shaft intelligent preparation method and system for new energy automobile

Similar Documents

Publication Publication Date Title
US20240225607A1 (en) Ultrasound system with a neural network for producing images from undersampled ultrasound data
CN110074813B (en) Ultrasonic image reconstruction method and system
CN110772281B (en) Ultrasonic CT imaging system based on improved ray tracing method
Lafci et al. Deep learning for automatic segmentation of hybrid optoacoustic ultrasound (OPUS) images
CN106794007A (en) Network ultrasonic image-forming system
WO2020206755A1 (en) Ray theory-based method and system for ultrasound ct image reconstruction
US10345132B2 (en) Multi-plane method for three-dimensional particle image velocimetry
CN102423264A (en) Image-based biological tissue elasticity measuring method and device
CN109875606B (en) Ultrasonic CT sound velocity imaging method based on prior reflection imaging
EP3754558A1 (en) Method and system for generating a synthetic elastrography image
Perez-Liva et al. Speed of sound ultrasound transmission tomography image reconstruction based on Bézier curves
CN111956180B (en) Method for reconstructing photoacoustic endoscopic tomographic image
CN111223157A (en) Ultrasonic CT sound velocity imaging method based on depth residual error network
Yuan et al. Optimization of reconstruction time of ultrasound computed tomography with a piecewise homogeneous region-based refract-ray model
Li et al. Fast marching method to correct for refraction in ultrasound computed tomography
Pavlov et al. Towards in-vivo ultrasound-histology: Plane-waves and generative adversarial networks for pixel-wise speed of sound reconstruction
US20230186477A1 (en) System and methods for segmenting images
JP2023067357A (en) Inference device, medical image diagnostic apparatus, inference method, and trained neural network generation method
Jeong et al. Investigating the use of traveltime and reflection tomography for deep learning-based sound-speed estimation in ultrasound computed tomography
Amadou et al. Cardiac ultrasound simulation for autonomous ultrasound navigation
JP5959880B2 (en) Ultrasonic diagnostic equipment
CN114947938A (en) Ultrasound imaging system and method for low resolution background volume acquisition
Li et al. A learning-based method for compensating 3D-2D model mismatch in ring-array ultrasound computed tomography
Jush et al. AutoSpeed: A Linked Autoencoder Approach for Pulse-Echo Speed-of-Sound Imaging for Medical Ultrasound
CN112674794B (en) Ultrasonic CT sound velocity reconstruction method combining deep learning and Gihonov regularization inversion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 2020-06-02