CN112331049A - Ultrasonic simulation training method and device, storage medium and ultrasonic equipment
- Publication number: CN112331049A
- Application number: CN202011220774.7A
- Authority: CN (China)
- Prior art keywords: ultrasonic, ultrasonic probe, probe, training, ultrasound
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B23/00—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
- G09B23/28—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
- G09B23/286—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine for scanning or photography techniques, e.g. X-rays, ultrasonics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10076—4D tomography; Time-sequential 3D tomography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Abstract
The invention discloses an ultrasonic simulation training method and device, a storage medium and ultrasonic equipment. The method comprises the following steps: scanning a detection object with an ultrasonic probe; determining the part scanned by the ultrasonic probe according to external input information or the spatial position information of the probe relative to the detection object; determining and displaying an ultrasonic image of the scanned part according to the scanned part and a training model; and, based on the type of the ultrasonic probe, generating a moving path for the probe according to the training model and guiding the probe to perform moving scanning along that path. By acquiring the spatial information between the probe and the scanned part of the detection object, the pre-trained training model can generate and display the ultrasonic image corresponding to the scanned part, so that the user can intuitively understand the relationship between probe operation and the resulting ultrasonic image and can train more conveniently to acquire high-quality ultrasonic images.
Description
Technical Field
The invention relates to the technical field of medical imaging, in particular to an ultrasonic simulation training method, an ultrasonic simulation training device, a storage medium and ultrasonic equipment.
Background
Ultrasonic diagnosis applies ultrasonic detection technology to the human body: by measuring specific parameters, it characterizes human physiology or tissue structure, detects disease, and provides diagnostic prompts. Ultrasonic diagnosis is highly operator-dependent; an operator must master professional scanning technique and ultrasound image knowledge to obtain accurate examination results. Sound training in ultrasound operation is therefore the basis for the clinical application of ultrasonic diagnostic techniques.
Current ultrasound training is divided into classroom theory and clinical teaching. On the one hand, theoretical explanation often differs considerably from actual operation, so trainees cannot intuitively grasp the key points of scanning technique; on the other hand, clinical teaching is limited by the availability of patients and operating environments, cannot be carried out at scale, and does not allow trainees to perform ultrasonic operation on patients directly, so the ultrasonic appearance of typical diseases is difficult to observe. These shortcomings make medical ultrasound training unsatisfactory, and trainees consequently fail to master clinical ultrasound skills well.
Disclosure of Invention
In view of this, embodiments of the present invention provide an ultrasound simulation training method, an ultrasound simulation training device, a storage medium, and an ultrasound apparatus, so as to solve the technical problem that trainees cannot master clinical ultrasound skills well under existing ultrasound training.
The technical scheme provided by the invention is as follows:
the first aspect of the embodiments of the present invention provides an ultrasound simulation training method, including: scanning a detection object by adopting an ultrasonic probe, wherein the ultrasonic probe comprises a virtual ultrasonic probe or a real ultrasonic probe; determining a scanning part scanned by the ultrasonic probe according to external input information or the spatial position information of the ultrasonic probe relative to the detection object; determining an ultrasonic image of a scanned part according to the scanned part of the ultrasonic probe and a training model, displaying the ultrasonic image, and updating the training model according to an actual training condition; and generating a moving path of the ultrasonic probe according to the training model based on the type of the ultrasonic probe, and guiding the ultrasonic probe to perform moving scanning based on the moving path.
Optionally, the training model is pre-trained according to the following method: acquiring an ultrasonic image of a detection object and the relative spatial position information of the corresponding ultrasonic probe relative to the detection object, and inputting the ultrasonic image into a first convolutional neural network for feature extraction to obtain feature image data; querying a three-dimensional data model according to the feature image data and the relative spatial position information of the corresponding ultrasonic probe relative to the detection object, and judging whether existing feature image data are present at the corresponding position in the three-dimensional data model; when existing feature image data are present at the corresponding position in the three-dimensional data model, inputting the existing feature image data and the feature image data into a second convolutional neural network for fusion to obtain fused feature image data; and updating the three-dimensional data model according to the fused feature image data to obtain the training model.
Optionally, the ultrasound simulation training method further includes: when no existing feature image data are present at the corresponding position in the three-dimensional data model, updating the three-dimensional data model according to the feature image data and the relative spatial position information of the corresponding ultrasonic probe relative to the detection object to obtain the training model.
Optionally, the ultrasound simulation training method further includes: acquiring image data of the detection object by CT scanning or MRI scanning; matching the image data with the feature images in the training model according to a matching model, and judging whether any part of the detection object has been missed during scanning; and when a missed scan exists, sending out a missed-scan prompt.
Optionally, generating a moving path of the ultrasonic probe according to the training model based on the type of the ultrasonic probe, and guiding the ultrasonic probe to perform moving scanning based on the moving path, includes: when the ultrasonic probe is a virtual ultrasonic probe, determining the spatial position of a target plane according to the target plane and the training model; and generating a moving path of the ultrasonic probe according to the current position information of the ultrasonic probe and the spatial position of the target plane, and guiding the ultrasonic probe to perform moving scanning based on the moving path.
Optionally, generating a moving path of the ultrasonic probe according to the training model based on the type of the ultrasonic probe, and guiding the ultrasonic probe to perform moving scanning based on the moving path, further includes: when the ultrasonic probe is a real ultrasonic probe, inputting the current ultrasonic image of the scanned part scanned by the ultrasonic probe into a first convolutional neural network for processing to obtain a current ultrasonic feature image; inputting the current ultrasonic feature image into a third convolutional neural network for simplification to obtain a current simplified ultrasonic feature image; determining the existing ultrasonic image at the corresponding spatial position in the training model according to the spatial position information of the ultrasonic probe relative to the scanned part and the training model; inputting the existing ultrasonic image into the third convolutional neural network for simplification to obtain an existing simplified ultrasonic image; performing full-connection processing on the current simplified ultrasonic feature image and the existing simplified ultrasonic image to obtain a position difference value; and generating a moving path of the ultrasonic probe according to the position difference value and the target plane, and guiding the ultrasonic probe to perform moving scanning based on the moving path.
Optionally, the ultrasound simulation training method further includes: obtaining a standard plane corresponding to the scanned part according to the training model, at least based on the ultrasonic image of the scanned part of the detection object, and performing quality evaluation on the ultrasonic image based on the standard plane; and/or evaluating the actual moving path of the ultrasonic probe based on the generated moving path of the ultrasonic probe.
A second aspect of an embodiment of the present invention provides an ultrasound simulation training apparatus, including: a scanning module, used for scanning a detection object with an ultrasonic probe, the ultrasonic probe comprising a virtual ultrasonic probe or a real ultrasonic probe; a part determining module, used for determining the scanned part scanned by the ultrasonic probe according to external input information or the spatial position information of the ultrasonic probe relative to the detection object; an image determining module, used for determining an ultrasonic image of the scanned part according to the scanned part of the ultrasonic probe and the training model, displaying the ultrasonic image, and updating the training model according to the actual training condition; and a path determining module, used for generating a moving path of the ultrasonic probe according to the training model based on the type of the ultrasonic probe and guiding the ultrasonic probe to perform moving scanning based on the moving path.
A third aspect of embodiments of the present invention provides a computer-readable storage medium storing computer instructions for causing a computer to perform the ultrasound simulation training method according to the first aspect or any implementation manner of the first aspect of the embodiments of the present invention.
A fourth aspect of the embodiments of the present invention provides an ultrasound apparatus, including: a memory and a processor communicatively coupled to each other, the memory storing computer instructions, and the processor executing the computer instructions to perform the ultrasound simulation training method according to the first aspect or any implementation manner of the first aspect of the embodiments of the present invention.
The technical scheme provided by the invention has the following effects:
according to the ultrasonic simulation training method and device, the storage medium and the ultrasonic equipment provided by the embodiments of the invention, the spatial information between the probe and the scanned part of the detection object is acquired, so that the ultrasonic image corresponding to the scanned part can be generated and displayed by the pre-trained training model; the user can thus intuitively understand the relationship between probe operation and the resulting ultrasonic image, and can train more conveniently to acquire high-quality ultrasonic images. Meanwhile, the ultrasonic simulation training method supports training with either a real probe or a virtual probe, meeting different training requirements. In addition, the training model can be updated in real time according to the actual training situation, further improving the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of an ultrasound simulation training method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an ultrasound simulation training method according to another embodiment of the present invention;
FIG. 3 is a flow chart of generating a movement path according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a first convolutional neural network according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a third convolutional neural network according to an embodiment of the present invention;
FIG. 6 is a guidance diagram for generating a movement path on a display according to an embodiment of the invention;
FIG. 7 is a flow chart of an ultrasound simulation training method according to another embodiment of the present invention;
FIG. 8 is a schematic diagram of a second convolutional neural network according to an embodiment of the present invention;
FIG. 9 is a flow chart of an ultrasound simulation training method according to another embodiment of the present invention;
FIG. 10 is a block diagram of an ultrasound simulation training apparatus according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a computer-readable storage medium provided in accordance with an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an ultrasound apparatus provided according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an ultrasonic simulation training method, which comprises the following steps as shown in figure 1:
S100: scanning a detection object with an ultrasonic probe, the ultrasonic probe comprising a virtual ultrasonic probe or a real ultrasonic probe. Specifically, when the detection object is a phantom, it can be scanned either with a virtual ultrasonic probe or with a real ultrasonic probe. When the detection object is a human body or another real object, for example a live animal, a part of a live animal (such as a particular tissue or organ), a phantom of an organ or tissue, or a combined phantom of multiple tissues or organs, a real ultrasonic probe may be employed. For example, a phantom with the physical characteristics of a pregnant woman can be used for gynecological ultrasonic examination, and a phantom of an ordinary adult male can be used for ultrasonic examination of superficial organs; in such cases a real ultrasonic probe can be used, or, for pure training simulation, a virtual ultrasonic probe. Meanwhile, different parts can be matched with different types of ultrasonic probe, such as a linear array probe, a convex array probe, a phased array probe, or an area array probe.
S200: and determining the scanning part scanned by the ultrasonic probe according to external input information or the spatial position information of the ultrasonic probe relative to the detection object. Specifically, spatial position information of the ultrasound probe relative to the detection object may be acquired first, and the scanned part scanned by the ultrasound probe may be determined according to the spatial position information.
In an embodiment, position and/or angle information of the ultrasonic probe relative to the detection object may be identified using one or more sensors. Using multiple sensors can improve the accuracy of the calculation or allow more position or angle information to be measured. The sensors may be moving or stationary, and the sensor types include one of, or a combination of any of, a visual sensor, a position sensor, a pressure sensor, an infrared sensor, a speed sensor, an acceleration sensor, and a magnetic sensor. A sensor can be arranged at any position of the phantom according to the practical application, for example inside the phantom; or separated from the phantom and arranged on a part connected with the phantom; or attached to the phantom by means of a remote connection.
In an embodiment, a camera may be disposed outside the ultrasound probe for acquiring relative spatial position information of the ultrasound probe with respect to the object to be detected, and the camera may be a three-dimensional camera. The three-dimensional camera acquires the spatial position information of the ultrasonic probe and the spatial position information of the object to be detected, so that the relative spatial position information of the ultrasonic probe relative to the object to be detected is obtained.
In one embodiment, an inertial measurement unit (IMU) is provided in the ultrasonic probe and can acquire real-time spatial position information of the probe, for example real-time coordinate information on the X, Y and Z axes. Combined with the spatial position information of the object to be detected acquired by the camera, the relative spatial position information of the ultrasonic probe with respect to the object to be detected can then be obtained. Alternatively, the relative spatial position of the ultrasonic probe with respect to the scanned part of the object to be detected can be determined by combining a magnetic sensor with the camera.
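Where both poses are available, combining them reduces to a change of reference frame. The following minimal Python sketch assumes the camera and IMU readings have already been fused into homogeneous 4 × 4 poses of the probe and of the detection object in a common world frame; the frame naming is an illustrative assumption, not taken from the patent.

import numpy as np

def relative_pose(T_world_probe, T_world_object):
    """Return T_object_probe: the probe pose expressed in the object's frame."""
    return np.linalg.inv(T_world_object) @ T_world_probe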
In an embodiment, at least one infrared emitter can be arranged at each of the four corners of the housing of the ultrasonic probe for emitting infrared light in all directions, while infrared sensors are arranged on and around the phantom for receiving the infrared light from the emitters. The relative spatial position information of the ultrasonic probe with respect to the object to be detected can then be derived from the received infrared light.
In an embodiment, a flexible touch screen or a flexible touch layer may be disposed on the phantom, with a pressure sensor disposed on it, so as to identify the position of the ultrasonic probe relative to the flexible touch screen or touch layer and the pressure exerted on the detection object, thereby determining the relative spatial position information between the ultrasonic probe and the detection object.
S300: and determining an ultrasonic image of the scanned part according to the scanned part of the ultrasonic probe and the training model, displaying the ultrasonic image, and updating the training model according to the actual training condition.
In an embodiment, the training model may be a pre-trained deep learning network model. When the training model is trained, a real ultrasonic probe may be used to scan the detection object along a preset direction to obtain an ultrasonic image of each section of the detection object. The tissue scanned by the probe may be the heart, kidney, liver, blood vessels, gallbladder, uterus, breast, fetus, thyroid, and so on. The relative spatial position information of the ultrasonic probe with respect to the detection object is determined at the same time as each ultrasonic image is acquired; this relative spatial position information can be obtained through a magnetic field generator or a magnetic locator, or through a camera.
Meanwhile, in the training process, the ultrasonic image of each section and the corresponding relative spatial position information need to be acquired, and the acquired related information is input into the deep learning network for training, so that the required training model can be obtained.
The training model can also be updated according to the actual training situation. For example, for training in vessel puncture and inner-diameter measurement, blood vessels of various types and of people of different ages and genders can be acquired to build a dedicated vessel training model, which facilitates the development of vessel training.
In one embodiment, the ultrasound image comprises at least one of a pure ultrasound image, an ultrasound video, an organ model, measurement information, diagnosis information, organ information, and attributes of the object to be detected. The attribute information of the object to be detected may be attribute information of a real person or animal, or of a phantom used in medical simulation, for example whether the object is female or male, elderly or a child, together with height, weight, and so on. After the ultrasound image is acquired, it may also be displayed. The display includes simultaneous display of one of a two-dimensional ultrasound image, a three-dimensional ultrasound image, a two-dimensional ultrasound video, a three-dimensional ultrasound video, and an organ model; for a more intuitive display, the position of the probe relative to the organ model, the position of the ultrasound image relative to the organ model, and the time-sequence information of the ultrasound image within the ultrasound video can also be displayed.
S400: and based on the type of the ultrasonic probe, generating a moving path of the ultrasonic probe according to the training model, and guiding the ultrasonic probe to perform moving scanning based on the moving path.
In one embodiment, guiding the ultrasonic probe to perform moving scanning based on a moving path according to the training model includes:
when the ultrasonic probe is a virtual ultrasonic probe, determining the spatial position of a target plane according to the target plane and the training model; and generating a moving path of the ultrasonic probe according to the current position information of the ultrasonic probe and the spatial position of the target plane, and guiding the ultrasonic probe to perform moving scanning based on the moving path. Specifically, the target plane may be a standard plane recommended intelligently, or a target plane of a site input by medical staff. After the target plane is determined, its spatial position can be determined from the training model, and the moving path can then be determined from the current position of the ultrasonic probe and that spatial position.
In an embodiment, as shown in fig. 2 and 3, based on the type of the ultrasound probe, a moving path of the ultrasound probe is generated according to the training model, and the ultrasound probe is guided to perform a moving scan based on the moving path, further including:
S401: when the ultrasonic probe is a real ultrasonic probe, inputting the current ultrasonic image of the scanned part scanned by the ultrasonic probe into the first convolutional neural network for processing to obtain a current ultrasonic feature image. In an embodiment, as shown in fig. 4, the input ultrasound image first passes through two convolution + pooling modules of the first convolutional neural network, where the convolution kernel size is 3 × 3 with stride 1, the number of convolution kernels increases in multiples of 32, and the pooling layers use a 2 × 2 kernel with stride 2. The result is then fed, through two further convolution layers (3 × 3 kernels, stride 1), into a bilinear interpolation + convolution module. The number of bilinear interpolation + convolution modules and of two-layer convolution + pooling modules can be increased or decreased according to the training and test results, and the two-layer convolution connection links the two kinds of module to strengthen feature extraction. The output channels of the bilinear interpolation + convolution module carry the feature-enhanced and extracted image, and a ReLU activation function follows each convolution to alleviate the vanishing-gradient problem. A convolution layer with 1 × 1 kernels is attached after the preceding pooling layer; its purpose is to fuse the extracted features and add nonlinearity, increasing the fitting capacity of the network. Its output is added to the upsampled features and used as the input of the next upsampling stage, which improves the classification ability of the network. In the final bilinear interpolation + convolution module, the output channels are convolved so that the extracted feature image data have the same size as the input ultrasound image.
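The following PyTorch sketch illustrates one reading of this encoder-decoder layout: two convolution + pooling stages, a middle two-layer convolution, bilinear interpolation + convolution stages on the way up, ReLU after each convolution, and 1 × 1 convolutions whose outputs are added as the input of the next upsampling. The module count, the channel widths, and the single-channel input are assumptions rather than the patent's exact network.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvPool(nn.Module):
    """Two 3 x 3 convolutions (stride 1) followed by 2 x 2 max-pooling (stride 2)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2, stride=2)

    def forward(self, x):
        f = self.conv(x)
        return f, self.pool(f)

class UpConv(nn.Module):
    """Bilinear interpolation + convolution module with a 1 x 1 skip projection."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.skip = nn.Conv2d(skip_ch, out_ch, 1)  # 1x1 conv fuses the pooled features

    def forward(self, x, skip):
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        return self.conv(x) + self.skip(skip)      # sum feeds the next upsampling

class FirstCNN(nn.Module):
    def __init__(self, in_ch=1):
        super().__init__()
        self.down1 = ConvPool(in_ch, 32)   # kernel counts grow in multiples of 32
        self.down2 = ConvPool(32, 64)
        self.mid = nn.Sequential(
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True))
        self.up1 = UpConv(128, 64, 64)
        self.up2 = UpConv(64, 32, 32)
        self.out = nn.Conv2d(32, in_ch, 3, padding=1)  # same size as the input image

    def forward(self, x):
        s1, x = self.down1(x)
        s2, x = self.down2(x)
        x = self.mid(x)
        x = self.up1(x, s2)
        x = self.up2(x, s1)
        return self.out(x)

feat = FirstCNN()(torch.randn(1, 1, 256, 256))  # -> torch.Size([1, 1, 256, 256])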
S402: inputting the current ultrasonic feature image into a third convolutional neural network for simplification to obtain a current simplified ultrasonic feature image. Specifically, the third convolutional neural network processes the ultrasonic feature image and simplifies the feature distribution in the input image. As shown in fig. 5, the network performs convolution on the input feature image with three 3 × 3 convolution kernels using 'SAME' padding, so as to remove redundant features from the input feature image data.
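A corresponding sketch of the third convolutional neural network, under the same caveats: 'SAME' convolution with a 3 × 3 kernel maps to padding=1 in PyTorch, and the channel width is an assumption.

import torch.nn as nn

class ThirdCNN(nn.Module):
    """Three 3 x 3 'SAME' convolutions that simplify the feature distribution."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):  # x: (N, ch, H, W) ultrasonic feature image
        return self.net(x)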
S403: determining the existing ultrasonic image at the corresponding spatial position in the training model according to the spatial position information of the ultrasonic probe relative to the scanned part and the training model; specifically, the existing ultrasound image at the corresponding spatial position in the training model can be determined by querying in the training model according to the spatial position information of the ultrasound probe relative to the scanned part.
S404: inputting the existing ultrasonic image into the third convolutional neural network for simplification to obtain an existing simplified ultrasonic image. Specifically, the existing ultrasonic image is simplified by the same third convolutional neural network, so as to obtain the existing simplified ultrasonic image.
S405: performing full-connection processing on the current simplified ultrasonic feature image and the existing simplified ultrasonic image to obtain a position difference value. Specifically, the two simplified images are passed through fully connected layers, and the difference M between the spatial position of the ultrasonic probe relative to the scanned part of the detection object and the corresponding spatial position of the ultrasonic probe in the training model is obtained by regression; that is, M is the difference between the spatial position (x1, y1, z1, ax1, ay1, az1) of the ultrasonic probe at the scanned part of the detection object and the spatial position (x2, y2, z2, ax2, ay2, az2) of the ultrasonic probe in the training model.
S406: generating a moving path of the ultrasonic probe according to the position difference value and the target plane, and guiding the ultrasonic probe to perform moving scanning based on the moving path. Specifically, after the target plane is determined, its spatial position in the training model (x3, y3, z3, ax3, ay3, az3) can be determined, and the moving path (ΔX, ΔY, ΔZ, ΔAX, ΔAY, ΔAZ) of the ultrasonic probe can then be calculated from the difference M and this spatial position: the probe is first stepped by M and then stepped by (x3 − x2, y3 − y2, z3 − z2, ax3 − ax2, ay3 − ay2, az3 − az2).
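In code, the two steps compose into a single 6-DoF displacement. The sketch below assumes poses are plain 6-vectors (x, y, z, ax, ay, az) whose components can be subtracted directly; the function name is illustrative.

import numpy as np

def moving_path(pose_on_object, pose_in_model, target_pose_in_model):
    """Return the 6-DoF moving path (dX, dY, dZ, dAX, dAY, dAZ).

    pose_on_object:       (x1 .. az1), probe pose at the scanned part
    pose_in_model:        (x2 .. az2), matching probe pose in the training model
    target_pose_in_model: (x3 .. az3), pose of the target plane in the model
    """
    p1, p2, p3 = (np.asarray(p, dtype=float)
                  for p in (pose_on_object, pose_in_model, target_pose_in_model))
    m = p2 - p1               # difference M regressed in S405
    return m + (p3 - p2)      # step by M, then step on to the target plane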
In an embodiment, when the target position is the standard plane of the currently scanned part, the acquired ultrasound image may be matched, using the training model, with the standard plane of the corresponding scanned part, so as to generate the moving path along which the probe moves to the standard plane. When the target position is a standard plane of the scanned part input by medical staff, the moving path can be generated from the currently acquired ultrasound image and the target position. When the target position is the standard plane of the currently scanned part, the position can be recommended intelligently according to the part the ultrasonic probe is currently scanning.
In one embodiment, when the target position is a position input by medical staff, it can be input through an interactive device, where the interactive device comprises a keyboard, a mouse, a voice sensor, a light sensor, a touch screen, and the like; alternatively, the medical staff selects a site from the displayed sites; or the medical staff may input the site by voice, for example by saying "scan the biparietal diameter plane of the fetus". Optionally, after a scanned part of a detection object is scanned by the ultrasonic probe, m stored ultrasound image planes of that part are displayed, where m is a positive integer; the medical staff selects the ultrasound image plane of the required target organ or tissue from the m planes, and the selected plane is determined as the target position. In actual implementation, after scanning the scanned part of the detection object, the medical staff may input the target position by voice; for example, when scanning a blood vessel, the medical staff may input "scan the cross section of the blood vessel".
In an embodiment, when the target position is the intelligently recommended position, after the scanned part is determined, the position that medical staff scan with high probability at that part can be determined from big data and taken as the target position. Moreover, in actual implementation, there may be at least two candidate target positions, and the first position along the moving direction of the ultrasonic probe can be determined as the target position according to the moving path of the probe. For example, when medical staff examine the kidney, they usually operate at five positions A, B, C, D and E; if the current position of the ultrasonic probe is between A and B and the probe is moving towards position B, position B can be determined as the target position. Alternatively, the position closest to the current scanning position is determined as the target position, as in the sketch below.
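The following illustrative helper captures this selection rule; the direction test and the nearest-position fallback are assumptions about one reasonable implementation, not steps specified by the patent.

import numpy as np

def pick_target(current, direction, candidates):
    """Prefer the nearest candidate ahead of the probe; otherwise the nearest overall."""
    current = np.asarray(current, dtype=float)
    direction = np.asarray(direction, dtype=float)
    ahead = [c for c in candidates
             if np.dot(np.asarray(c) - current, direction) > 0]  # along the moving direction
    pool = ahead or list(candidates)
    return min(pool, key=lambda c: np.linalg.norm(np.asarray(c) - current))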
After the target position is determined, the moving path of the ultrasonic probe can be generated. Wherein the movement path comprises a movement in position and/or angle. For example, the moving path is 30 degrees of clockwise deflection or 30 degrees of counterclockwise deflection of the ultrasonic probe; a translation of 1cm to the left or 1cm to the right, etc.
In some embodiments, guiding the ultrasonic probe to move based on the moving path includes guiding the probe by methods based on visual, auditory or force feedback. Specifically, visual guidance may use one or more of image guidance, video guidance, logo guidance, text guidance, light guidance, and projection guidance. Auditory guidance may use voice: for example, when the user's current operation is correct and the probe will reach the target position, or when the user is being guided to the target position by the training model, a prompt such as a "beep" tone may be played. Further, the user may be guided by tactile means, such as one or more of haptic guidance, vibration guidance, and traction guidance.
The ultrasonic simulation method provided by the embodiment of the invention can guide the user to find the standard plane through different visual, auditory and tactile means, so that the user is trained while being guided; the guidance mode can be selected according to the actual application, which further improves the user experience and the training effect.
In one embodiment, to improve the efficiency of virtual training, the moving path, the target position, and the ultrasonic probe may also be displayed in real time. As shown in fig. 6, the scanning guide area 1000 displayed on the display includes at least a first guide area 1600 and a second guide area 1700. The first guide area 1600 displays at least the position and angle information of the current ultrasonic probe, the position and angle information of the probe corresponding to the standard plane, and operation prompt information; the operation prompt information comprises at least the translation distance and the rotation angle, and may also include the pressure with which the ultrasonic probe is pressed down. The second guide area includes the object to be detected 1100, the target scanned object 1500 highlighted on the object to be detected 1100, the current ultrasonic probe 1200, the moving path 1400, and the target virtual probe 1300; it is understood that the highlighting may cover the entire target scanned object 1500 or only its outline. The current ultrasonic probe 1200 moves according to its real-time position, and the target virtual probe 1300 marks the position to which the probe needs to move to obtain the standard plane.
According to the ultrasonic simulation training method provided by the embodiment of the invention, the spatial information between the probe and the scanned part of the detection object is acquired, so that the ultrasonic image corresponding to the scanned part can be generated and displayed by the pre-trained training model; the user can thus intuitively understand the relationship between probe operation and the resulting ultrasonic image, and can train more conveniently to acquire high-quality ultrasonic images. Meanwhile, the method supports training with either a real probe or a virtual probe, meeting different training requirements. In addition, the training model can be updated in real time according to the actual training situation, further improving the user experience.
In an embodiment, the training model is obtained by pre-training according to the following method, as shown in fig. 7, which specifically includes the following steps:
S101: acquiring an ultrasonic image of a detection object and the relative spatial position information of the corresponding ultrasonic probe relative to the detection object, and inputting the ultrasonic image into the first convolutional neural network for feature extraction to obtain feature image data. Specifically, before training, a real ultrasonic probe may be used to scan the detection object along a preset direction, acquiring ultrasonic images of the detection object and the relative spatial position information of the corresponding probe with respect to the detection object. The first convolutional neural network may be the one employed in S401.
S102: querying the three-dimensional data model according to the feature image data and the relative spatial position information of the corresponding ultrasonic probe relative to the detection object, and judging whether existing feature image data are present at the corresponding position in the three-dimensional data model. The three-dimensional data model may be a data model formed from existing ultrasonic images and the relative spatial position information of the corresponding ultrasonic probe with respect to the detection object; however, its data sources may be limited, or its data may not be accurate enough to meet the requirements of the training model. Therefore, after the feature image data are obtained, the three-dimensional data model can be searched to judge whether existing feature image data are present at the corresponding position.
S103: when existing feature image data are present at the corresponding position in the three-dimensional data model, inputting the existing feature image data and the feature image data into the second convolutional neural network for fusion to obtain fused feature image data.
Specifically, as shown in fig. 8, the second convolutional neural network comprises two input loops: a current feature image data loop (upper loop), which receives the feature image data processed by the first convolutional neural network, and an existing feature image data loop (lower loop), which receives the existing feature image data found by querying the three-dimensional data model at the corresponding spatial position of the ultrasonic probe.
As shown in fig. 8, the second convolutional neural network copies and fuses the two paths of feature image data after the first convolution to form a fused data processing loop in the middle layer. The three data processing loops are processed in the same way, i.e. with the same structure as the first convolutional neural network. They differ in their inputs: the current feature image data loop processes the current feature image data output by the first convolutional neural network, the existing feature image data loop processes the existing feature image data in the three-dimensional data model, and the middle layer fuses the two. The model finally produces the fused image from the extracted features using bilinear interpolation and convolution layers. This multi-loop form strengthens feature extraction; multi-scale features are fused and added into the middle loop at different resolutions, finally forming a comprehensive multi-scale information-fusion feature image.
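A hedged PyTorch sketch of this multi-loop fusion network follows: two input branches, a middle branch formed by copying and fusing the branch features after the first convolution, additive multi-scale fusion into the middle loop, and a final convolution that emits the fused image. Depths and channel widths are assumptions.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(inplace=True))

class SecondCNN(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.cur_in = conv_block(1, ch)     # current feature image data loop
        self.old_in = conv_block(1, ch)     # existing feature image data loop
        self.cur = conv_block(ch, ch)
        self.old = conv_block(ch, ch)
        self.mid1 = conv_block(2 * ch, ch)  # middle loop: copy + fuse after first conv
        self.mid2 = conv_block(ch, ch)
        self.out = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, current, existing):
        c = self.cur_in(current)
        e = self.old_in(existing)
        m = self.mid1(torch.cat([c, e], dim=1))  # fuse the two paths
        c, e = self.cur(c), self.old(e)
        m = self.mid2(m) + c + e                 # additive fusion into the middle loop
        return self.out(m)

fused = SecondCNN()(torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128))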
S104: updating the three-dimensional data model according to the fused feature image data to obtain the training model. Specifically, the obtained fused feature image data may replace the existing feature image data at the original position in the three-dimensional data model, thereby updating the model.
S105: when no existing feature image data are present at the corresponding position in the three-dimensional data model, updating the three-dimensional data model according to the feature image data and the corresponding relative spatial position information of the ultrasonic probe relative to the detection object to obtain the training model. That is, when no existing feature image data are present at the corresponding position, the obtained feature image and the corresponding position information are stored into the three-dimensional data model to obtain the training model.
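Tying S101 to S105 together, the pre-training flow can be outlined as below. The dictionary-backed model and the pose quantization are stand-ins for the patent's three-dimensional data model; first_cnn and second_cnn are the networks described above.

def quantize(pose, step=5.0):
    """Map a 6-DoF pose to a coarse grid cell used as the model key (an assumption)."""
    return tuple(round(v / step) for v in pose)

def update_training_model(model3d, first_cnn, second_cnn, scans):
    """model3d: dict mapping quantized pose -> feature image data.
    scans: iterable of (ultrasound_image, probe_pose) pairs."""
    for image, pose in scans:
        features = first_cnn(image)            # S101: feature extraction
        key = quantize(pose)                   # S102: query the corresponding position
        existing = model3d.get(key)
        if existing is not None:               # S103/S104: fuse, then replace
            model3d[key] = second_cnn(features, existing)
        else:                                  # S105: store the new feature data
            model3d[key] = features
    return model3d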
According to the ultrasonic simulation training method provided by the embodiment of the invention, on the basis of the existing three-dimensional data model, the three-dimensional data model is updated through the first convolutional neural network and the second convolutional neural network to obtain the required training model, so that the accuracy of the training model can be improved, and meanwhile, the data source of the training model is expanded, so that the training model can meet the requirements of various types of training, and the use experience of a user is improved.
In one embodiment, since the training model is built from data scanned by the ultrasonic probe, some regions may have been missed, so whether the training model is complete can be determined with a matching model. As shown in fig. 9, this can be realized by the following steps:
S201: acquiring image data of the detection object according to CT scanning or MRI scanning. Specifically, the three-dimensional contour model may be constructed by acquiring image data of the entire detection object and the corresponding position information using Computed Tomography (CT) or Magnetic Resonance Imaging (MRI).
S202: matching the image data with the feature images in the training model according to the matching model, and judging whether any part of the detection object has been missed. Specifically, the matching model can be trained as follows: the feature images are input into a first three-dimensional CNN to obtain a three-dimensional ultrasonic model; the image data are input into a second three-dimensional CNN to obtain a three-dimensional contour model; the three-dimensional ultrasonic model and the three-dimensional contour model are input into a three-dimensional GCN for transformation to obtain a transformation matrix; and the transformation matrix is then applied to the three-dimensional ultrasonic model to obtain the matching model. During actual matching, the image data and the feature images in the training model are input into the matching model, and whether a missed scan exists is judged.
S203: when a missed scan exists, sending out a missed-scan prompt. The missed-scan prompt is one or more of a voice prompt, a vibration prompt, or an indicator light.
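Once the matching model has registered the ultrasound training model to the CT/MRI contour model, the missed-scan check itself can be as simple as comparing voxel coverage. The sketch below assumes boolean occupancy grids in an already registered frame; the occupancy representation and the tolerance are assumptions, not specified by the patent.

import numpy as np

def missed_scan_ratio(ultra_occ, contour_occ):
    """ultra_occ, contour_occ: boolean 3D occupancy grids in the same registered frame."""
    covered = np.logical_and(ultra_occ, contour_occ).sum()
    return 1.0 - covered / max(int(contour_occ.sum()), 1)

def check_missed_scan(ultra_occ, contour_occ, tolerance=0.05):
    ratio = missed_scan_ratio(ultra_occ, contour_occ)
    if ratio > tolerance:  # S203: emit a missed-scan prompt
        print(f"Missed-scan prompt: {ratio:.1%} of the detection object is not covered")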
According to the ultrasonic simulation training method provided by the embodiment of the invention, path planning is carried out according to the type of the ultrasonic probe, so a user can determine a moving path with this method whether a virtual ultrasonic probe or a real ultrasonic probe is used; the ultrasonic probe is thus guided to perform moving scanning based on the moving path, meeting the needs of different users.
In one embodiment, the ultrasound simulation training method further comprises: obtaining a standard plane corresponding to the scanned part according to the training model, at least based on the ultrasonic image of the scanned part of the detection object, and performing quality evaluation on the ultrasonic image based on the standard plane; and/or evaluating the actual moving path of the ultrasonic probe based on the generated moving path of the probe.
Specifically, when a real ultrasonic probe is used for training, the current ultrasonic image of the scanned part can be acquired and compared with the standard plane image of that part stored in the training model, so that the quality of the ultrasonic image can be evaluated and the user's actual operation corrected. Meanwhile, the user's actual moving path can be evaluated against the generated moving path, so that the user's operating ability can be assessed.
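As one concrete possibility for the path evaluation, the actual path can be scored against the generated reference path by their mean 6-DoF deviation; this scoring rule is an illustrative assumption, not part of the patent.

import numpy as np

def path_score(actual, reference):
    """actual, reference: (N, 6) arrays of sampled 6-DoF probe poses."""
    actual, reference = np.asarray(actual), np.asarray(reference)
    n = min(len(actual), len(reference))
    deviation = np.linalg.norm(actual[:n] - reference[:n], axis=1).mean()
    return 1.0 / (1.0 + deviation)  # 1.0 for a perfect match, toward 0 as deviation grows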
In one embodiment, the training model may also be updated based on the quality assessment value and/or the assessment value of the actual moving path of the ultrasonic probe. New models are thus continually generated, raising the difficulty of the training simulation or correcting the user's handling of the probe (its position, angle, applied force, and so on). For example, when an improvement in the user's ability is assessed, more difficult training tasks can be issued, such as moving from a vessel scan on the arm to a carotid vessel scan, or from scanning the vessels of a lean subject to those of an obese subject.
In some embodiments, a new three-dimensional data model may be generated according to human-computer interaction information. For example, the user first detects a certain part or tissue of the object to be detected, such as a blood vessel in the arm, with the ultrasonic probe, obtains an ultrasonic image of the vessel, and performs a measurement operation on it. If the current ultrasonic image or the measurement result does not meet clinical requirements, the probe is moved to generate a new ultrasonic image. The training model generates a related new three-dimensional data model according to the user's measurement and movement operations, which is used to raise the training difficulty, correct wrong operating actions, and adjust the user's operating method so as to improve the training effect.
The ultrasonic simulation training method provided by the embodiment of the invention can evaluate the capability of the user by acquiring the ultrasonic image of the current training of the user, so that the training content is adjusted according to the actual condition of the user, and the method is more targeted and improves the training effect.
An embodiment of the present invention further discloses an ultrasound simulation training apparatus, as shown in fig. 10, the apparatus includes:
the scanning module 10 is configured to scan a detection object by using an ultrasonic probe, where the ultrasonic probe includes a virtual ultrasonic probe or a real ultrasonic probe; for details, refer to the related description of step S100 in the above method embodiment.
The part determining module 20 is configured to determine a scanning part scanned by the ultrasonic probe according to external input information or spatial position information of the ultrasonic probe relative to the detection object; for details, refer to the related description of step S200 in the above method embodiment.
The image determining module 30 is used for determining an ultrasonic image of the scanned part according to the scanned part of the ultrasonic probe and the training model, displaying the ultrasonic image, and updating the training model according to the actual training condition; for details, refer to the related description of step S300 in the above method embodiment.
And the path determining module 40 is used for generating a moving path of the ultrasonic probe according to the training model based on the type of the ultrasonic probe and guiding the ultrasonic probe to perform moving scanning based on the moving path. For details, refer to the related description of step S400 in the above method embodiment.
According to the ultrasonic simulation training device provided by the embodiment of the invention, the spatial information between the probe and the scanned part of the detection object is acquired, so that the ultrasonic image corresponding to the scanned part can be generated and displayed by the pre-trained training model, a user can intuitively know the association between the operation of the probe and the ultrasonic image, and the user can more conveniently train to acquire a high-quality ultrasonic image. Meanwhile, the ultrasonic simulation training device can be used for training by adopting a real probe and can also be used for training by adopting a virtual probe, so that different training requirements are met. In addition, the training model can be updated in real time according to the actual training situation, and the use experience of the user is further improved.
The function of the ultrasound simulation training device provided by the embodiment of the invention is described in detail with reference to the ultrasound simulation training method in the above embodiment.
An embodiment of the present invention further provides a storage medium, as shown in fig. 11, on which a computer program 601 is stored; when executed by a processor, the program implements the steps of the ultrasound simulation training method in the foregoing embodiments. The storage medium also stores audio and video stream data, feature frame data, interaction request signaling, encrypted data, preset data sizes, and the like. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory, a Hard Disk Drive (abbreviated as HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of memories of the kinds described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and which, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory, a Hard Disk Drive (abbreviated as HDD), a Solid-State Drive (SSD), or the like; the storage medium may also comprise a combination of memories of the kinds described above.
An ultrasound apparatus is further provided in an embodiment of the present invention, as shown in fig. 12, the ultrasound apparatus may include a processor 51 and a memory 52, where the processor 51 and the memory 52 may be connected by a bus or in another manner, and fig. 12 illustrates an example of connection by a bus.
The processor 51 may be a Central Processing Unit (CPU). The Processor 51 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, or combinations thereof.
The memory 52, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the methods in the embodiments of the present invention. The processor 51 implements the ultrasound simulation training method in the above method embodiment by running the non-transitory software programs, instructions, and modules stored in the memory 52, thereby executing various functional applications and performing data processing.
The memory 52 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created by the processor 51, and the like. Further, the memory 52 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 52 may optionally include memories located remotely from the processor 51, and these remote memories may be connected to the processor 51 via a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The one or more modules are stored in the memory 52 and, when executed by the processor 51, perform the ultrasound simulation training method of the embodiments shown in figs. 1 to 9.
The details of the ultrasonic device can be understood by referring to the corresponding descriptions and effects in the embodiments shown in fig. 1 to 9, and are not described herein again.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.
Claims (10)
1. An ultrasound simulation training method, comprising:
scanning a detection object by an ultrasonic probe, wherein the ultrasonic probe comprises a virtual ultrasonic probe or a real ultrasonic probe;
determining a scanning part scanned by the ultrasonic probe according to external input information or the spatial position information of the ultrasonic probe relative to the detection object;
determining an ultrasonic image of the scanned part according to the scanned part scanned by the ultrasonic probe and a training model, displaying the ultrasonic image, and updating the training model according to an actual training condition;
and generating a moving path of the ultrasonic probe according to the training model based on the type of the ultrasonic probe, and guiding the ultrasonic probe to perform moving scanning based on the moving path.
2. The ultrasound simulation training method according to claim 1, wherein the training model is pre-trained according to the following method:
acquiring an ultrasonic image of a detection object and relative spatial position information of a corresponding ultrasonic probe relative to the detection object, and inputting the ultrasonic image into a first convolutional neural network for feature extraction to obtain characteristic image data;
querying a three-dimensional data model according to the characteristic image data and the relative spatial position information of the corresponding ultrasonic probe relative to the detection object, and judging whether existing characteristic image data exists at the corresponding position in the three-dimensional data model;
when existing characteristic image data exists at the corresponding position in the three-dimensional data model, inputting the existing characteristic image data and the characteristic image data into a second convolutional neural network for fusion to obtain fused characteristic image data;
and updating the three-dimensional data model according to the fused characteristic image data to obtain the training model.
3. The ultrasound simulation training method according to claim 2, further comprising:
and when no existing characteristic image data exists at the corresponding position in the three-dimensional data model, updating the three-dimensional data model according to the characteristic image data and the relative spatial position information of the corresponding ultrasonic probe relative to the detection object to obtain the training model.
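For illustration, the pre-training flow of claims 2 and 3 could be sketched as follows in Python/PyTorch. The network shapes, the model3d interface, and all names are assumptions made for the example, not the patented architecture.

```python
import torch
import torch.nn as nn

# Hypothetical first and second CNNs; the claims fix the roles of the
# networks (feature extraction, fusion) but not their architectures.
first_cnn = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
second_cnn = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())

def pretrain_step(model3d, ultrasound_image, probe_position):
    """One pre-training step. `model3d` is an assumed three-dimensional
    data model with query/update methods keyed by probe position."""
    features = first_cnn(ultrasound_image)     # characteristic image data
    existing = model3d.query(probe_position)   # data at this position, if any
    if existing is not None:
        # Claim 2 branch: fuse existing and new characteristic image data.
        fused = second_cnn(torch.cat([existing, features], dim=1))
        model3d.update(probe_position, fused)
    else:
        # Claim 3 branch: nothing stored at this position yet, insert the
        # new characteristic image data directly.
        model3d.update(probe_position, features)
    return model3d  # the updated model serves as the training model
```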
4. The ultrasound simulation training method according to claim 2, further comprising:
acquiring image data of the detection object through CT scanning or MRI scanning;
matching the image data with the characteristic image data in the training model according to a matching model, and judging whether any part of the detection object has been missed during scanning;
and when a missed scan exists, sending out a missed-scan prompt.
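A minimal sketch of the missed-scan check of claim 4, assuming a matching model that reports per-region coverage; the interface and the prompt mechanism are illustrative only.

```python
def check_missed_scan(matching_model, ct_or_mri_data, training_model):
    """Sketch of claim 4. `matching_model` is assumed to report, for each
    region of the CT/MRI image data, whether matching characteristic image
    data exists in the training model."""
    coverage = matching_model.match(ct_or_mri_data, training_model)
    missed = [region for region, covered in coverage.items() if not covered]
    if missed:
        # Missed-scan prompt; a real device might display or sound an alert.
        print(f"Missed scan: the following regions were not covered: {missed}")
    return missed
```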
5. The ultrasound simulation training method according to claim 1, wherein generating the moving path of the ultrasonic probe according to the training model based on the type of the ultrasonic probe, and guiding the ultrasonic probe to perform moving scanning based on the moving path, comprises:
when the ultrasonic probe is a virtual ultrasonic probe, determining the spatial position of a target tangent plane according to the target tangent plane and the training model;
and generating a moving path of the ultrasonic probe according to the current position information of the ultrasonic probe and the spatial position of the target tangent plane, and guiding the ultrasonic probe to perform moving scanning based on the moving path.
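The virtual-probe branch of claim 5 might look like the following sketch, assuming poses are numpy arrays and the training model can resolve a target tangent plane to a spatial position; the straight-line interpolation is an illustrative choice only.

```python
import numpy as np

def plan_virtual_probe_path(training_model, target_section, current_pose,
                            steps=20):
    """Sketch of claim 5 for a virtual probe. `locate_section` is an
    assumed training-model method returning the target spatial position."""
    target_pose = training_model.locate_section(target_section)
    # Interpolate from the current probe pose to the target section's
    # spatial position; a real planner might route around anatomy.
    return [current_pose + t * (target_pose - current_pose)
            for t in np.linspace(0.0, 1.0, steps)]
```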
6. The ultrasound simulation training method according to claim 1, wherein generating the moving path of the ultrasonic probe according to the training model based on the type of the ultrasonic probe, and guiding the ultrasonic probe to perform moving scanning based on the moving path, further comprises:
when the ultrasonic probe is a real ultrasonic probe, inputting a current ultrasonic image of a scanned part scanned by the ultrasonic probe into a first convolutional neural network for processing to obtain a current ultrasonic characteristic image;
inputting the current ultrasonic characteristic image into a third convolutional neural network for simplification to obtain a current ultrasonic simplified characteristic image;
determining the existing ultrasonic image at the corresponding spatial position in the training model according to the spatial position information of the ultrasonic probe relative to the scanned part and the training model;
inputting the existing ultrasonic image into the third convolutional neural network for simplification to obtain an existing ultrasonic simplified image;
performing full-connection processing on the current ultrasonic simplified characteristic image and the existing ultrasonic simplified image to obtain a position difference;
and generating a moving path of the ultrasonic probe according to the position difference and the target tangent plane, and guiding the ultrasonic probe to perform moving scanning based on the moving path.
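The real-probe branch of claim 6 could be sketched as follows. The first CNN mirrors the hypothetical network of the earlier pre-training sketch, the "simplification" is modeled here as downsampling, and the 6-degree-of-freedom output of the full connection is an assumption; the training model is assumed to store characteristic images.

```python
import torch
import torch.nn as nn

# Hypothetical networks; third_cnn "simplifies" (downsamples) feature
# images, and fc_head is the full connection regressing the difference.
first_cnn = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
third_cnn = nn.Sequential(nn.Conv2d(32, 8, 3, stride=2, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten())
fc_head = nn.Linear(2 * 8 * 4 * 4, 6)  # 6-DoF position difference, assumed

def position_difference(current_image, probe_position, training_model):
    """Sketch of claim 6: compare the live scan with the stored data at the
    same spatial position and regress a position correction."""
    current_simple = third_cnn(first_cnn(current_image))
    existing_feat = training_model.query_features(probe_position)
    existing_simple = third_cnn(existing_feat)
    # Full connection over both simplified images -> position difference.
    return fc_head(torch.cat([current_simple, existing_simple], dim=1))
```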
7. The ultrasound simulation training method according to claim 1, further comprising:
obtaining a standard tangent plane corresponding to the scanned part according to the training model at least based on the ultrasonic image of the scanned part of the detection object, and performing quality evaluation on the ultrasonic image based on the standard tangent plane; and/or evaluating an actual moving path of the ultrasonic probe based on the generated moving path of the ultrasonic probe.
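A sketch of the quality evaluation of claim 7, with deliberately simple scoring functions; a real system might use SSIM or a learned metric for image quality, and every interface here is an assumption.

```python
import numpy as np

def similarity(image, standard):
    # Illustrative score: negative mean squared error against the image of
    # the standard tangent plane.
    return -float(np.mean((image - standard) ** 2))

def path_deviation(actual_path, planned_path):
    # Illustrative score: mean distance between corresponding waypoints.
    return float(np.mean([np.linalg.norm(p - q)
                          for p, q in zip(actual_path, planned_path)]))

def evaluate_training(training_model, scanned_part, image, actual_path=None):
    """Sketch of claim 7: grade the acquired image against the standard
    tangent plane, and optionally grade the actual probe path against the
    generated one."""
    standard = training_model.standard_section(scanned_part)
    image_score = similarity(image, standard)
    path_score = None
    if actual_path is not None:
        planned = training_model.plan_path(scanned_part)
        path_score = path_deviation(actual_path, planned)
    return image_score, path_score
```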
8. An ultrasound simulation training apparatus, comprising:
the scanning module is used for scanning the detection object by adopting an ultrasonic probe, and the ultrasonic probe comprises a virtual ultrasonic probe or a real ultrasonic probe;
the part determining module is used for determining a scanning part scanned by the ultrasonic probe according to external input information or the spatial position information of the ultrasonic probe relative to the detection object;
the image determining module is used for determining an ultrasonic image of the scanned part according to the scanned part scanned by the ultrasonic probe and the training model, displaying the ultrasonic image, and updating the training model according to an actual training condition;
and the path determining module is used for generating a moving path of the ultrasonic probe according to the training model based on the type of the ultrasonic probe and guiding the ultrasonic probe to perform moving scanning based on the moving path.
9. A computer-readable storage medium storing computer instructions for causing a computer to perform the ultrasound simulation training method according to any one of claims 1 to 7.
10. An ultrasound device, comprising: a memory and a processor communicatively coupled to each other, the memory storing computer instructions, the processor executing the computer instructions to perform the ultrasound simulation training method of any of claims 1-7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110696904.2A CN113470495A (en) | 2020-11-04 | 2020-11-04 | Ultrasonic simulation training method and device, storage medium and ultrasonic equipment |
CN202011220774.7A CN112331049B (en) | 2020-11-04 | 2020-11-04 | Ultrasonic simulation training method and device, storage medium and ultrasonic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011220774.7A CN112331049B (en) | 2020-11-04 | 2020-11-04 | Ultrasonic simulation training method and device, storage medium and ultrasonic equipment |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110696904.2A Division CN113470495A (en) | 2020-11-04 | 2020-11-04 | Ultrasonic simulation training method and device, storage medium and ultrasonic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112331049A (en) | 2021-02-05
CN112331049B CN112331049B (en) | 2021-07-02 |
Family
ID=74316112
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011220774.7A Active CN112331049B (en) | 2020-11-04 | 2020-11-04 | Ultrasonic simulation training method and device, storage medium and ultrasonic equipment |
CN202110696904.2A Pending CN113470495A (en) | 2020-11-04 | 2020-11-04 | Ultrasonic simulation training method and device, storage medium and ultrasonic equipment |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN112331049B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113951922A (en) * | 2021-10-26 | 2022-01-21 | 深圳迈瑞动物医疗科技有限公司 | Ultrasonic imaging equipment and scanning prompting method thereof |
CN113951923A (en) * | 2021-10-26 | 2022-01-21 | 深圳迈瑞动物医疗科技有限公司 | Ultrasonic imaging equipment for animals, ultrasonic imaging equipment and scanning method thereof |
CN114098818B (en) * | 2021-11-22 | 2024-03-26 | 邵靓 | Analog imaging method of ultrasonic original image data |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102016957A (en) * | 2008-02-25 | 2011-04-13 | 发明医药有限公司 | Medical training method and apparatus |
US20160328998A1 (en) * | 2008-03-17 | 2016-11-10 | Worcester Polytechnic Institute | Virtual interactive system for ultrasound training |
CN104303075A (en) * | 2012-04-01 | 2015-01-21 | 艾里尔大学研究与开发有限公司 | Device for training ultrasonic imaging device user |
CN107578662A (en) * | 2017-09-01 | 2018-01-12 | 北京大学第医院 | A kind of virtual obstetric Ultrasound training method and system |
CN109447940A (en) * | 2018-08-28 | 2019-03-08 | 天津医科大学肿瘤医院 | Convolutional neural networks training method, ultrasound image recognition positioning method and system |
CN110967730A (en) * | 2019-12-09 | 2020-04-07 | 深圳开立生物医疗科技股份有限公司 | Ultrasonic image processing method, system, equipment and computer storage medium |
CN110960262A (en) * | 2019-12-31 | 2020-04-07 | 上海杏脉信息科技有限公司 | Ultrasonic scanning system, method and medium |
CN111657997A (en) * | 2020-06-23 | 2020-09-15 | 无锡祥生医疗科技股份有限公司 | Ultrasonic auxiliary guiding method, device and storage medium |
CN111860636A (en) * | 2020-07-16 | 2020-10-30 | 无锡祥生医疗科技股份有限公司 | Measurement information prompting method and ultrasonic training method |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112807025A (en) * | 2021-02-08 | 2021-05-18 | 威朋(苏州)医疗器械有限公司 | Ultrasonic scanning guiding method, device, system, computer equipment and storage medium |
CN113274051A (en) * | 2021-04-30 | 2021-08-20 | 中国医学科学院北京协和医院 | Ultrasonic auxiliary scanning method and device, electronic equipment and storage medium |
CN113274051B (en) * | 2021-04-30 | 2023-02-21 | 中国医学科学院北京协和医院 | Ultrasonic auxiliary scanning method and device, electronic equipment and storage medium |
CN113257100A (en) * | 2021-05-27 | 2021-08-13 | 郭山鹰 | Remote ultrasonic teaching system |
Also Published As
Publication number | Publication date |
---|---|
CN112331049B (en) | 2021-07-02 |
CN113470495A (en) | 2021-10-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112331049B (en) | Ultrasonic simulation training method and device, storage medium and ultrasonic equipment | |
US20200402425A1 (en) | Device for training users of an ultrasound imaging device | |
CN111758137A (en) | Method and apparatus for telemedicine | |
JP2019514476A (en) | Positioning of ultrasound imaging probe | |
KR20160016467A (en) | Ultrasonic Diagnostic Apparatus | |
KR102545008B1 (en) | Ultrasound imaging apparatus and control method for the same | |
JP2020137974A (en) | Ultrasonic probe navigation system and navigation display device therefor | |
US20190188858A1 (en) | Image processing device and method thereof | |
US11896434B2 (en) | Systems and methods for frame indexing and image review | |
JP5390149B2 (en) | Ultrasonic diagnostic apparatus, ultrasonic diagnostic support program, and image processing apparatus | |
CN116912430B (en) | Device for constructing three-dimensional digital twin system of remote intervention operating room | |
CN113870636B (en) | Ultrasonic simulation training method, ultrasonic device and storage medium | |
JP2017515572A (en) | Acquisition orientation dependent features for model-based segmentation of ultrasound images | |
CN115953532A (en) | Method and device for displaying ultrasonic image for teaching and teaching system of ultrasonic image | |
CN114694442B (en) | Ultrasonic training method and device based on virtual reality, storage medium and ultrasonic equipment | |
KR20170031718A (en) | Remote apparatus with interworking between smart device and sonographer | |
CN111419272B (en) | Operation panel, doctor end controlling means and master-slave ultrasonic detection system | |
JP7183451B2 (en) | Systems, devices, and methods for assisting in neck ultrasound | |
JP7164423B2 (en) | MEDICAL IMAGE PROCESSING APPARATUS, X-RAY CT APPARATUS, AND MEDICAL IMAGE PROCESSING METHOD | |
CN113509201A (en) | Ultrasonic diagnostic apparatus and ultrasonic diagnostic system | |
CN111568469A (en) | Method and apparatus for displaying ultrasound image and computer program product | |
US12016731B2 (en) | Ultrasound credentialing system | |
KR102364490B1 (en) | Untrasound dianognosis apparatus, method and computer-readable storage medium | |
US12138117B2 (en) | One-dimensional position indicator | |
US20230255590A1 (en) | One-dimensional position indicator |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||