CN118236174B - Surgical assistance system, method, electronic device, and computer storage medium

Info

Publication number
CN118236174B
CN118236174B
Authority
CN
China
Prior art keywords
image
target
display
target object
detector
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410296468.3A
Other languages
Chinese (zh)
Other versions
CN118236174A
Inventor
任臻
夏毅
刘姣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Air Force Medical University of PLA
Original Assignee
Air Force Medical University of PLA
Application filed by Air Force Medical University of PLA
Priority to CN202410296468.3A
Publication of CN118236174A
Application granted
Publication of CN118236174B
Legal status: Active

Abstract

The disclosure provides a surgical assistance system, a surgical assistance method, an electronic device, and a computer storage medium, belonging to the technical field of medical devices. The system comprises a reference image detector, an auxiliary image detector, a data processing unit, and a display unit. The reference image detector is configured to acquire a reference image of the surgical site in real time during surgery; the auxiliary image detector is configured to acquire an auxiliary image of the surgical site in real time during surgery; the data processing unit is configured to display the target object with enhancement in the reference image according to the position and contour of the target object in the auxiliary image, obtaining a target fusion image; and the display unit is configured to display the target fusion image. The target object can thus be displayed with real-time enhancement during surgery.

Description

Surgical assistance system, method, electronic device, and computer storage medium
Technical Field
The present disclosure relates to the field of medical devices, and in particular, to a surgical assistance system, a surgical assistance method, an electronic device, and a computer storage medium.
Background
With the development of medical imaging techniques, various imaging techniques are applied both preoperatively and intraoperatively to assist doctors in performing surgery. Static images are typically acquired before surgery to help a physician roughly determine the location of a target object (e.g., a tumor), and during surgery the specific location of the target object may be determined by imaging techniques such as endoscopy.
However, the operative surface is non-rigid: during surgery, the human tissue, the patient's body position, and the surgical dissection plane all change, so the position of the target object shifts and the doctor must repeatedly re-determine it, which increases the difficulty and duration of the operation.
Disclosure of Invention
The present disclosure provides a surgical assistance system, method, electronic device, and computer storage medium that can display a target object with real-time enhancement during surgery.
The technical solution of the present disclosure is realized as follows:
in a first aspect, the present disclosure provides a surgical assistance system comprising: a reference image detector, an auxiliary image detector, a data processing unit, and a display unit;
a reference image detector configured to acquire a reference image of an intraoperative surgical site in real time;
An auxiliary image detector configured to acquire an auxiliary image of an intraoperative surgical site in real time;
The data processing unit is configured to enhance and display the target object in the reference image according to the position and the outline of the target object in the auxiliary image to obtain a target fusion image;
and a display unit configured to display the target fusion image.
In a second aspect, the present disclosure provides a surgical assistance method comprising: acquiring a reference image of the surgical site in real time during surgery; acquiring an auxiliary image of the surgical site in real time during surgery; displaying the target object with enhancement in the reference image according to the position and contour of the target object in the auxiliary image to obtain a target fusion image; and displaying the target fusion image.
In a third aspect, the present disclosure provides an electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which program or instruction when executed by the processor implements the steps of the surgical assistance method as described in the second aspect.
In a fourth aspect, the present disclosure provides a computer readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the surgical assistance method as described in the second aspect.
In a fifth aspect, the present disclosure provides a computer program product comprising a computer program or instructions which, when run on a processor, cause the processor to carry out the steps of the surgical assistance method as described in the second aspect.
In a sixth aspect, the present disclosure provides a chip comprising a processor and a communication interface coupled to the processor, the processor being configured to run a program or instructions implementing the surgical assistance method according to the second aspect.
The present disclosure provides a surgical assistance system comprising: a reference image detector, an auxiliary image detector, a data processing unit, and a display unit. The reference image detector is configured to acquire a reference image of the surgical site in real time during surgery; the auxiliary image detector is configured to acquire an auxiliary image of the surgical site in real time during surgery; the data processing unit is configured to display the target object with enhancement in the reference image according to the position and contour of the target object in the auxiliary image to obtain a target fusion image; and the display unit is configured to display the target fusion image. During surgery the operative surface is non-rigid, and the position of the target object may shift with changes in human tissue, the patient's body position, the surgical dissection plane, and the like, which increases the time a doctor spends locating the target object. Because the target fusion image in the present scheme is updated in real time, the position and contour of the target object remain accurately displayed, assisting the doctor in precise localization and improving surgical efficiency.
Drawings
FIG. 1 is a block diagram of a surgical assistance system provided by the present disclosure;
FIG. 2 is a first structural schematic of a surgical assistance system provided by the present disclosure;
FIG. 3 is a second structural schematic of a surgical assistance system provided by the present disclosure;
FIG. 4 is a schematic diagram of coordinate transformation between a reference image and an ultrasound image provided by the present disclosure;
FIG. 5 is a schematic flow chart of determining a target tumor from ultrasound data provided by the present disclosure;
FIG. 6 is a schematic illustration of enhanced display of a target tumor in a reference image provided by the present disclosure;
FIG. 7 is a schematic diagram of displaying distance information in a target fusion image provided by the present disclosure;
FIG. 8 is a third structural schematic of a surgical assistance system provided by the present disclosure;
FIG. 9 is a schematic diagram of coordinate transformation between a reference image and a thermal imaging image provided by the present disclosure;
FIG. 10 is a schematic illustration of enhanced display of a target vessel in a reference image provided by the present disclosure;
FIG. 11 is a schematic flow chart of determining a target blood vessel in a thermal imaging image by a target detection model provided by the present disclosure;
FIG. 12 is a fourth structural schematic of a surgical assistance system provided by the present disclosure;
FIG. 13 is a schematic comparison of fused images provided by the present disclosure;
FIG. 14 is a fifth structural schematic of a surgical assistance system provided by the present disclosure;
FIG. 15 is a schematic view of a remote device provided by the present disclosure interacting with the surgical assistance system;
FIG. 16 is a schematic illustration of an annotated image provided by the present disclosure;
FIG. 17 is a sixth structural schematic of a surgical assistance system provided by the present disclosure;
FIG. 18 is a flow chart of a surgical assistance method provided by the present disclosure;
FIG. 19 is a schematic diagram of the hardware structure of an electronic device provided by the present disclosure.
Detailed Description
The embodiments of the present application are described below with reference to the accompanying drawings. It is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application fall within the scope of protection of the present application.
Among medical imaging techniques, those commonly employed before surgery include computed tomography (Computed Tomography, CT), magnetic resonance imaging (Magnetic Resonance Imaging, MRI), and the like.
Wherein CT imaging generates cross-sectional images of the interior of the body by combining X-ray imaging techniques with computer analysis. The basic principle of CT imaging is as follows: the CT scanner has an X-ray source and an opposing X-ray detector inside. As the X-ray source rotates around the patient, it emits beamlets of X-rays from a plurality of angles through the body. These X-rays are captured by the detector after penetrating body tissue. The degree of absorption of X-rays by different types of tissue (e.g., bone, muscle, fat, and air) is different, and thus the intensity of X-rays received by the detector is also different. The X-ray signals captured by the detector are then converted into electrical signals, which are then converted into digital data for processing by a computer. A computer processes these data obtained from multiple angles using an algorithm called tomographic reconstruction. By analyzing the X-ray penetration and absorption at different angles, the computer can calculate the density of each small volume element (called voxel) within the scan area. The density of these voxels is then converted to a different gray scale for generating an image. Finally, these reconstructed cross-sectional images are displayed on a computer screen for analysis and interpretation by a physician.
The principle of MRI imaging is: MRI uses strong magnetic fields and radio waves to acquire detailed images of the internal structures of the human body. In particular, during an MRI scan, the patient is located in a strong magnetic field. This magnetic field aligns the spin direction of the hydrogen nuclei (mainly water molecules and hydrogen atoms in fat) inside the body. Once the hydrogen nuclei are aligned, the scanner emits a series of radio wave pulses of a particular frequency that temporarily disrupt the alignment of the hydrogen nuclei. When the radio wave pulse ceases, the hydrogen nuclei realign to the original state and release energy during the process. This energy is detected in the form of a radio signal by the receiver of the MRI machine. Different types of tissue (e.g., fat and water) will realign the hydrogen nuclei at different rates and thus the energy released will also be different. The detected signals are fed into a computer which processes the signals by complex mathematical and physical algorithms and converts them into images. The brightness of each point depends on the rate and amount of realignment and release of energy of hydrogen nuclei in the tissue at that point, which reflects the nature and state of the different tissues. The resulting images may show different body structures including brain tissue, muscles, joints, viscera, etc.
In summary, CT and MRI images are convenient for surgical planning and clearly represent the surface and outline of an organ, but they localize tumors and critical blood vessels within the organ imprecisely. Moreover, since CT and MRI imaging apparatuses are large, they cannot be integrated into an assistance system for real-time intraoperative imaging.
Endoscopic imaging, red Green Blue (RGB) imaging, thermal imaging techniques, ultrasonic imaging (multi-purpose phased array ultrasonic imaging) techniques, and the like are commonly employed in surgery.
Among them, endoscopic imaging is a medical imaging technique using an endoscopic apparatus, which allows a doctor to observe and manipulate internal organs of a human body without performing a large-scale operation. Endoscopes are elongated tubular instruments, typically equipped with a light source and a camera, capable of transmitting images of an internal organ back to an external display for viewing by a physician. In particular, one end of the endoscopic device is fitted with a light source, typically an LED lamp or other high intensity light source, to ensure that the internal organ is adequately illuminated for imaging purposes. The front end of an endoscope is typically equipped with one or more miniature cameras or optical lens systems for capturing images of internal organs. The captured image is transmitted to an external display through a fiber optic bundle or cable inside the endoscope. In conventional endoscopes, optical fibers are used to transmit images from one end of the endoscope to the other; in more modern digital endoscopes, the images are transmitted electronically through tiny cables. The transmitted back image is displayed on an external display, typically in real time, so that the physician can directly see the condition of the internal organ and perform a diagnostic or surgical procedure accordingly.
RGB imaging is a technique for producing images spanning a wide range of colors by combining red, green, and blue light. It is based on the working principle of the human visual system: the human eye contains three types of color-sensing cells, most sensitive to red, green, and blue light respectively. By adjusting the intensities of these three colors, almost all colors in the visible spectrum can be simulated. Specifically, in digital imaging devices (such as digital cameras or scanners), when light passes through a lens and strikes a sensor, tiny photosensors on the sensor (called pixels) capture the intensity of the light. Each pixel captures only one color of light (red, green, or blue) by means of color filters placed over the pixels, the most common arrangement being a Bayer filter array. In a Bayer filter array, each pixel is assigned red, green, or blue sensitivity, with twice as many green pixels as red or blue to mimic the human eye's higher sensitivity to green light. When light strikes the sensor, each pixel records only the light intensity of its assigned color. Since each pixel captures a single color, the complete color information must be reconstructed by an image processing algorithm (e.g., an interpolation algorithm). This is typically done in the camera's image processor, which uses the values of adjacent pixels to estimate the full RGB value at each pixel location. After this processing, each pixel has a complete set of RGB values representing the color at that location. These values can be mixed in different proportions to produce a wide range of colors, and the final RGB image can be displayed on an electronic display. RGB imaging thus enables digital images to exhibit complex, rich colors in a manner similar to human perception.
Phased array ultrasound imaging is a technique for imaging with multiple ultrasound transmitters and receivers (i.e., arrays). The imaging principle of the array ultrasonic probe is as follows: the array ultrasound probe contains a plurality of small piezoelectric crystal elements that are capable of generating ultrasound waves under the action of a voltage. By precisely controlling the emission timing of each of these elements, ultrasonic waves can be emitted in a specific shape and direction. This method is called beamforming. The probe can adjust the direction and focus of the ultrasound beam by changing the phase relationship (i.e., relative transmit time) between the different elements. This allows scanning of different areas inside the body without physically moving the probe, while the imaging quality at different depths can also be optimized by adjusting the depth of focus. Echoes occur when ultrasound encounters tissues of varying density and elasticity, such as organ interfaces. These echoes are detected by the same piezoelectric element on the probe and converted into electrical signals. The detected electrical signals are sent to a computer for processing. By analyzing the time delay and intensity of the echo, the computer can determine the depth and reflected intensity of the echo source, which information is used to generate an image. By combining echoes from different directions and depths, a two-dimensional or three-dimensional image representing the structure in the body can be reconstructed. In this process, each pixel of the image represents the reflective properties of the tissue at the corresponding location. By changing the direction and focus of the beam in rapid succession, the array ultrasound probe is able to provide real-time dynamic images.
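By way of illustration, the beam steering described above can be sketched as follows; this is a minimal sketch under stated assumptions (a one-dimensional linear array, a nominal 1540 m/s tissue sound speed, and the illustrative function name steering_delays), not an implementation specified by this disclosure.

```python
import numpy as np

def steering_delays(n_elements, pitch_m, angle_deg, c=1540.0):
    """Per-element transmit delays (s) that steer a linear-array beam.

    Delays are chosen so that wavefronts from all elements arrive in
    phase along the steering direction; c is the assumed speed of
    sound in tissue (m/s).
    """
    angle = np.deg2rad(angle_deg)
    # element positions centred on the array midpoint
    x = (np.arange(n_elements) - (n_elements - 1) / 2.0) * pitch_m
    delays = x * np.sin(angle) / c      # geometric path difference / c
    return delays - delays.min()        # shift so all delays are >= 0

# Example: 64-element array, 0.3 mm pitch, steered 15 degrees off-axis
print(steering_delays(64, 0.3e-3, 15.0)[:4])
```

Changing the per-element delay pattern in this way redirects or refocuses the beam without physically moving the probe, which is the property the phased array probe relies on.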
Thermal imaging is based on the principle of infrared radiation. The human body and other organisms generate heat due to their normal physiological activities, which is emitted from the skin surface in the form of infrared radiation. The thermal imaging detector is capable of capturing such infrared radiation and converting it into an image. In particular, infrared radiation of different intensities is emitted due to the fact that different parts of the human body are subject to subtle differences in temperature, such as caused by blood flow, inflammation or tumors, etc. The thermal imaging detector comprises one or more infrared sensors that are very sensitive to infrared radiation in a specific wavelength range. When infrared radiation is incident on these sensors, the sensors generate an electrical signal based on the intensity of the received radiation. These electrical signals are then converted into digital data for further processing. In this step, the detector typically amplifies, filters, and digitizes the signal to improve image quality. The digitized data is used to construct an image. The signal of each sensor element corresponds to a pixel on the image. The color or gray value of a pixel is determined from the intensity of the received infrared radiation, reflecting the temperature of the area being measured. In order to make a thermographic image more useful, a temperature calibration is usually required, which means that the color or gray values on the image are mapped to the actual temperature. In this way, the doctor can intuitively recognize the change of the body temperature according to the change of the color. By analyzing the temperature distribution pattern on the thermographic image, a physician may identify abnormal hot or cold spots, which may be indicative of inflammation, blood circulation problems, tumors or other medical conditions.
In summary, the endoscopic image and the RGB image serve as the doctor's eyes during surgery; they are visually intuitive and closest to the actual picture seen by the naked eye. Thermal imaging provides real-time imaging and makes blood vessels very conspicuous; ultrasound imaging provides real-time imaging and makes tumors very conspicuous. However, both thermal imaging and ultrasound imaging suffer from poor image quality and low signal-to-noise ratio, and are difficult to interpret.
Before and during surgery, various imaging techniques can be combined to help a physician locate a target object of interest. However, a static image acquired before surgery can usually only help a doctor roughly determine the position of the target object; the tissue structure of the human body is complex, and even an experienced doctor can hardly determine the position of the target object accurately during surgery from a static image alone. Images acquired in real time during surgery also fall short, because the target object is generally hidden within tissue and organs, so the doctor cannot quickly determine its position from experience and real-time images alone.
In view of the above problems, a common solution is image fusion: combining the advantages of each imaging technique by fusing a preoperative image with an intraoperative image, so that the fused image displays the structure, position, and surface information of each organ and tissue, internal blood vessels, tumors, and the like more intuitively.
However, because the preoperative CT/MRI image is static while the operative surface is non-rigid, the human tissue, the patient's body position, and the surgical dissection plane all change during surgery, and the position of the target object of interest changes with them, whereas its position in the preoperatively acquired image does not. Fusing a preoperative image with an intraoperative image therefore localizes the target object insufficiently accurately: the target object shown in the fused image is offset, and since surgery demands high precision, this offset can mislead the doctor and prolong the operation.
Accordingly, the present application aims to provide a surgical assistance system capable of accurately locating and displaying a target object in real time. Fig. 1 shows the surgical assistance system of the present disclosure. As shown in fig. 1, the surgical assistance system includes: a reference image detector 101, an auxiliary image detector 102, a data processing unit 103, and a display unit 104. The reference image detector 101 is configured to acquire reference images of the surgical site in real time during surgery. The auxiliary image detector 102 is configured to acquire an auxiliary image of the surgical site in real time during surgery. The data processing unit 103 is configured to display the target object with enhancement in the reference image according to the position and contour of the target object in the auxiliary image, obtaining a target fusion image. The display unit 104 is configured to display the target fusion image.
The image collected by the reference image detector 101 has the characteristic of being visually intuitive to the human eye; that is, the collected image of the surgical site differs little from the actual scene, for example differing from the picture seen by the naked eye only in color rendering, or being the actual picture magnified by a certain factor. Optionally, the reference image detector 101 may be any of an endoscope detector, an RGB image detector, or an exoscope detector, all of which can capture an image viewable as if by the naked eye.
The image acquired by the auxiliary image detector 102 displays the target object better; that is, although the auxiliary image is not intuitive to the human eye, it can highlight the target object. Optionally, the auxiliary image detector 102 may be at least one of an ultrasonic detector and a thermal imaging detector.
The target object may be any object of interest to the physician, such as a specific tissue structure, tumor, blood vessel, etc.
During surgery the operative surface is non-rigid and the position of the target object is likely to shift. Because the target fusion image in this scheme is updated in real time during the operation, the position of the target object is always displayed accurately and its contour is highlighted, which helps the doctor locate it precisely and improves surgical efficiency.
In some embodiments, fig. 2 illustrates a surgical assistance system including an ultrasound probe. The auxiliary image detector 102 includes: an ultrasonic detector 1021; accordingly, the target object includes: a target tumor; the auxiliary image includes: an ultrasound image.
The ultrasonic probe is preferably a phased array ultrasonic probe comprising a plurality of ultrasound transmitters and receivers; in array form, phased arrays are generally divided into linear, matrix, annular, and sector shapes. Phased array probes come in a variety of array arrangements, whose types include: one-dimensional linear array, two-dimensional matrix, circular array, sector array, concave array, convex array, double linear array, and so on. Different array arrangements produce different sound field characteristics, allowing the phased array to be applied to detection under different working conditions. A phased array ultrasonic probe with a suitable array arrangement can be selected according to actual needs.
The fusion of the ultrasonic image and the reference image (such as RGB image) can adopt the augmented reality (Augmented Reality, AR) technology, which is a technology based on computer real-time calculation and multi-sensor fusion and combines the real world and virtual information.
Specifically, to fuse the reference image with the ultrasonic image, the ultrasonic image must be converted into the same coordinate system as the reference image. In the surgical assistance system provided by the present application, the position of the reference image detector 101 is fixed, and the ultrasonic detector 1021 has a fixed storage position when not in use; as shown in fig. 3, the dotted outline 301 indicates the storage position of the ultrasonic detector 1021. When the ultrasonic detector 1021 is removed to start working, the positioning sensors carried on it (such as two gyroscope sensors) perform real-time inertial measurement, and the data processing unit 103 can sense, in real time, the current posture of the probe and its position relative to the storage position.
Illustratively, as shown in fig. 4, the coordinate system of the reference image detector 101 is taken as the reference coordinate system, with the first pixel at the upper left corner of the image collected by the reference image detector 101 as the origin. In this reference coordinate system, the coordinates of the first pixel at the upper left corner of the image collected by the ultrasonic detector 1021 at the initial position 301 (i.e., the storage position) are determined. During the operation the ultrasonic detector 1021 is moved from position 301 to position 402; the two gyroscope sensors determine the posture of the ultrasonic detector 1021 and its position relative to the initial position in real time, and the data processing unit 103 converts the ultrasonic image collected by the ultrasonic detector 1021 into an ultrasonic image in the reference coordinate system, based on the probe's posture, its position relative to the initial position, and the coordinates of the first pixel at the initial position. The ultrasonic image and the reference image can then be fused.
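As a concrete sketch of this conversion, the snippet below applies a rigid transform recovered from the probe's pose to ultrasound pixel coordinates. It is simplified to two dimensions, and the function name and its inputs (rotation R and translation t from the inertial measurements, plus the stored origin) are illustrative assumptions rather than the disclosure's exact procedure.

```python
import numpy as np

def ultrasound_to_reference(pts_probe, R, t, origin_ref):
    """Map ultrasound pixel coordinates into the reference image frame.

    pts_probe : (K, 2) pixel coordinates in the ultrasound image
    R         : (2, 2) probe rotation relative to the stowed pose
                (recovered from the gyroscope measurements)
    t         : (2,) probe translation from the storage position
    origin_ref: (2,) reference-frame coordinates of the ultrasound
                image's top-left pixel at the storage position
    """
    return pts_probe @ R.T + t + origin_ref

# Example: a 10-degree rotation and a small translation
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(ultrasound_to_reference(np.array([[0.0, 0.0], [64.0, 32.0]]),
                              R, t=np.array([5.0, 2.0]),
                              origin_ref=np.array([120.0, 80.0])))
```

A full implementation would use the three-dimensional pose and project into the reference camera, but the structure, compose the measured pose with the known storage-position offset, is the same.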
The process of determining the target tumor from the two-dimensional phased array ultrasound data acquired by the auxiliary image detector 102, specifically a two-dimensional matrix phased array ultrasound probe, is shown in fig. 5. Ultrasound data acquisition 501 is specifically: acquiring a real-time two-dimensional ultrasound data set with the two-dimensional phased array ultrasonic sensor by adjusting the transmit and receive time delays and scanning in different directions. Signal processing 502 is specifically: processing the acquired two-dimensional ultrasound data set using beam imaging (coherently superimposing the signals of each receiving element to enhance the echo signal of the target), then demodulating and filtering the beam-imaged signals to emphasize the target tumor and reduce noise; by analyzing the time delay and intensity of the processed ultrasound data, the depth and reflected intensity of its source are determined, and this information is used to generate an image. Image analysis 503 specifically uses image analysis techniques to detect the target tumor in the image according to density differences; a segmentation algorithm then divides the image into different tissue regions, with the focus on the target tumor. Positioning and contour extraction 504 is specifically: by analyzing the information of the target tumor in the image, background noise and irrelevant features are removed according to a preset response threshold, and the important feature responses are retained; the maximum response area in the image, which gives the approximate position of the target tumor, is then determined according to R(θ) = Σ_{i=1}^{M} Σ_{j=1}^{N} I(i, j), where R(θ) represents the maximum response in the θ direction (the dependence on θ entering through the directionally scanned image being summed), M represents the number of rows of pixels, N represents the number of columns of pixels, and I(i, j) represents the pixel value at pixel position (i, j). To determine the position and contour of the target tumor more precisely, after the approximate position is found, a gradient operator is used to compute the gradient intensity (the magnitude of the change in pixel intensity at that point) and gradient direction (the direction of strongest change) of each pixel. On the gradient intensity map, only pixels whose gradient value is a local maximum along the gradient direction are kept, and all other non-maximum points are set to 0, yielding thinned edges. Double thresholds (a high threshold and a low threshold) are then used to separate true edges from potential edges: pixels above the high threshold are treated as strong edges, pixels below the low threshold are excluded, and pixels in between are treated as weak edges, whose status as true edges depends on whether they are connected to a strong edge. Finally, starting from the strong edge points, an edge tracking technique connects weak edge points to them, so that a weak edge point is included in the final edge image only when it is connected to a strong edge point; the final connected contour is the accurate contour information of the target tumor.
Real-time update and display 505 is specifically: as the ultrasound scan proceeds, the image is updated in real time to reflect the new ultrasound data (for example, by frame-by-frame image processing); the continuously updated image is fused with the reference image and displayed on the display unit 104, so that the doctor can observe the position and contour of the target tumor in real time.
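The thinning, double-threshold, and hysteresis edge-tracking chain described in step 504 is the classic Canny edge-detection pipeline. The sketch below illustrates it under that reading, using OpenCV, whose Canny routine bundles gradient computation, non-maximum suppression, double thresholding, and edge tracking; the function name, threshold values, and the choice of the largest contour as the tumor boundary are illustrative assumptions, not values given in this disclosure.

```python
import cv2
import numpy as np

def tumor_contour(bmode, response_thresh=60, low=50, high=150):
    """Locate a tumor outline in a B-mode frame, following steps 503-504.

    cv2.Canny internally performs gradient computation, non-maximum
    suppression, double thresholding, and hysteresis edge tracking --
    the same chain described in the positioning/contour-extraction step.
    """
    # remove background noise below the preset response threshold
    masked = np.where(bmode >= response_thresh, bmode, 0).astype(np.uint8)
    edges = cv2.Canny(masked, low, high)
    # take the largest connected contour as the tumor boundary
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```

Running this per frame matches the frame-by-frame update in step 505: each new ultrasound frame yields a fresh contour that is fused into the reference image.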
Illustratively, fig. 6 is a schematic diagram of the enhanced display of the target tumor in the reference image. The image indicated by reference numeral 60 is an RGB image of the surgical site acquired by an RGB detector, the image indicated by reference numeral 61 is an ultrasonic image of the surgical site acquired by an ultrasonic detector, and the image indicated by reference numeral 62 shows the target tumor displayed with enhancement in the reference image, where 611, indicated by an arrow, is the target tumor. In this way, the doctor can be assisted during surgery in locating the target tumor more accurately, improving surgical efficiency.
Optionally, the target fusion image may be fused and displayed continuously in real time, or displayed only when needed according to a control instruction. For example, based on a control instruction indicating that the target tumor should be highlighted, the target fusion image including the target tumor is displayed for a fixed duration, after which display stops and only the reference image is shown; or the target fusion image including the target tumor is displayed a first preset number of times at a first preset interval (for example, flashed 5 times at 2 s intervals) and then display stops and only the reference image is shown. The doctor can thus flexibly choose whether to display it according to actual needs, determining the tumor position without interference to the surgical procedure.
To further assist the physician in tumor resection, the distance between the medical instrument and the tumor may be displayed in real time. Thus, in some embodiments of the present disclosure, the data processing unit 103 is further configured to determine distance information between the target tumor and the surgical instrument according to the ultrasonic image, and the display unit 104 is further configured to display the distance information.
Phased array ultrasonic positioning is very accurate. Taking intracranial glioma resection as an example, a phased array ultrasonic detector is inserted in advance into the patient's cerebral cortex gap, at a distance of 5 cm ± 1 cm from the edge of the glioma, to perform real-time area-array scanning of the surgical site. Because the densities of the tumor and the surgical instrument differ, their ultrasonic reflection characteristics also differ, so the distance between the surgical instrument and the tumor edge can be judged accurately, with a positioning accuracy of about 0.5 millimeters. Accurate distance information can thus be provided to assist the doctor's operation.
In this embodiment, the distance between the target tumor and the surgical instrument is determined as follows: based on the imaging capability and distance-measurement principle of phased array ultrasound, the propagation distance of each beam is determined from its propagation time, and the average of the per-beam propagation distances is taken as the relative distance between the surgical instrument and the tumor edge (the basic principle being to compute distance from the propagation speed of ultrasound and the echo time difference). The specific formula is D = (1/N) Σ_{i=1}^{N} D_i, where N represents the number of beams, D_i represents the propagation distance of the i-th beam (equal to half the product of the speed of sound and the echo round-trip time), and D represents the relative distance between the surgical instrument and the target tumor. In practice, factors such as beam angle, scan plane, and propagation velocity may need to be considered; the specific calculation depends on the performance and configuration of the ultrasonic imaging apparatus.
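A minimal sketch of this averaging, taking per-beam echo round-trip times as input; the nominal 1540 m/s tissue sound speed is a common convention assumed here, not a value stated in this disclosure.

```python
def instrument_tumor_distance(echo_times_s, c=1540.0):
    """D = (1/N) * sum(D_i), with D_i = c * t_i / 2 per beam."""
    distances = [c * t / 2.0 for t in echo_times_s]
    return sum(distances) / len(distances)

# Example: three beams with ~3.3 microsecond round trips
# -> about 2.58e-3 m, i.e. roughly the 2.58 mm shown in fig. 7
print(instrument_tumor_distance([3.30e-6, 3.36e-6, 3.42e-6]))  # metres
```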
Illustratively, as shown in fig. 7, reference numeral 701 indicates the surgical instrument displayed in the ultrasound image, reference numeral 702 indicates the target tumor displayed in the ultrasound image, and reference numeral 703 indicates the relative distance between the surgical instrument and the target tumor, 2.58 mm, displayed in the target fusion image.
Optionally, the distance information is output by voice broadcasting.
Specifically, the distance information can be updated and output continuously, or output only when needed according to a control instruction: for example, based on a control instruction indicating that the distance information should be output, it is output for a fixed duration and then stops; or it is output a second preset number of times at a second preset interval (for example, flashed 5 times at 2 s intervals before display stops) and then stops. The doctor is thus prompted without excessive displayed information interfering with the operation.
Optionally, when the distance between the surgical instrument and the target tumor is smaller than the distance threshold corresponding to the type of the target tumor, alarm information is output. The alarm information prompts the doctor that the distance between the surgical instrument and the target tumor is below the threshold. Each type of tumor corresponds to a distance threshold, and the thresholds for different tumor types may be the same or different. The output mode of the alarm information includes at least one of the following: voice, text, an alarm pattern, and an alarm indicator lamp.
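A sketch of the threshold check described above; the tumor-type keys and threshold values are hypothetical placeholders, since the disclosure states only that each tumor type has a corresponding distance threshold.

```python
ALARM_THRESHOLDS_MM = {        # hypothetical per-tumor-type thresholds
    "glioma": 5.0,
    "default": 3.0,
}

def check_proximity(distance_mm, tumor_type):
    """Return an alarm message when the instrument is too close, else None."""
    threshold = ALARM_THRESHOLDS_MM.get(tumor_type,
                                        ALARM_THRESHOLDS_MM["default"])
    if distance_mm < threshold:
        # the message could be routed to any of the output modes the
        # disclosure lists: voice, text, alarm pattern, indicator lamp
        return (f"ALARM: instrument {distance_mm:.1f} mm from tumor "
                f"(threshold {threshold:.1f} mm)")
    return None
```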
During surgery, cutting a large blood vessel can cause excessive bleeding, affecting the operation and increasing the patient's suffering. Blood vessels are therefore also objects of particular attention for the physician during surgery. Thus, in some embodiments of the present disclosure, fig. 8 illustrates a surgical assistance system further including a thermal imaging detector. The auxiliary image detector 102 further includes a thermal imaging detector 1022; accordingly, the target object further includes a target blood vessel, and the auxiliary image further includes a thermal imaging image.
During the operation, the operative surface is continuously covered by fresh blood, which seriously interferes with the procedure. However, since the heat of human tissue is mostly supplied by arterial blood, arterial blood is warmer than venous blood, with a temperature difference generally around 0.5 °C. The present disclosure uses a high-precision thermal imaging sensor to achieve high-precision temperature measurement, distinguishing human tissue, blood vessels, and blood that has already flowed out, and determining the distribution and course of the larger arteries and veins near the operative surface.
Specifically, as shown in fig. 9, taking an RGB image as the reference image, the positions of the RGB detector 1011 and the thermal imaging detector 1022 are fixed relative to each other, with a distance d between them. The image fusion process comprises the following steps. Camera calibration: the process of acquiring the camera intrinsic matrix and distortion parameters; through calibration, pixel coordinates can be converted into camera coordinates for coordinate system conversion. Calibration is usually performed with objects of known structure such as checkerboards, capturing images at multiple angles and obtaining the intrinsic matrix and distortion parameters with a camera calibration tool. Coordinate system conversion: the coordinates of each pixel in the thermal imaging detector's image are converted into the image coordinate system of the RGB detector, taking into account the separation between the two cameras, to ensure the two images share a consistent coordinate system. That is, each pixel (X_B, Y_B) on the thermal imaging image is mapped to coordinates (X_A, Y_A) on the RGB image via the camera calibration parameters and the coordinate conversion formula X_A = X_B + 2d, Y_A = Y_B. Image fusion: the coordinate-converted thermal imaging image is fused onto the RGB image by copying the pixel value of each point on the thermal imaging image to the corresponding position on the RGB image. To avoid the fused content altering the apparent size of objects in the RGB image, transparency control or weighted fusion is needed: the transparency of the thermal imaging image can be adjusted so that it overlays the RGB image without affecting the size of objects in it.
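A minimal sketch of the two steps just described, the X_A = X_B + 2d pixel mapping followed by transparency-controlled fusion; the horizontal-shift simplification, the red overlay color, and the alpha value are illustrative assumptions rather than parameters given in this disclosure.

```python
import numpy as np

def fuse_thermal_onto_rgb(rgb, thermal, d_px, alpha=0.4):
    """Shift the thermal image by X_A = X_B + 2d and alpha-blend it.

    rgb     : (H, W, 3) uint8 reference image
    thermal : (H, W) uint8 thermal image (0 = no signal)
    d_px    : camera separation d expressed in pixels
    """
    shift = 2 * d_px
    shifted = np.zeros_like(thermal)
    shifted[:, shift:] = thermal[:, :thermal.shape[1] - shift]
    fused = rgb.astype(np.float32)
    mask = shifted > 0                     # blend only where thermal has signal
    overlay = np.array([255.0, 0.0, 0.0])  # draw warm structures in red
    fused[mask] = (1.0 - alpha) * fused[mask] + alpha * overlay
    return fused.astype(np.uint8)
```

Weighted blending (rather than opaque copying) is what keeps the overlay from visually enlarging structures in the RGB image, as the paragraph above requires.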
Optionally, the target fusion image including the target blood vessel may be updated and displayed continuously, or displayed only when needed according to a control instruction: for example, based on a control instruction indicating that the target blood vessel should be displayed, the target fusion image including the target blood vessel is displayed for a fixed duration, after which display stops and only the reference image is shown; or the target fusion image including the target blood vessel is displayed a third preset number of times at a third preset interval (for example, flashed 5 times at 2 s intervals) and then display stops. The doctor is thus prompted about the position of the underlying blood vessel without interference to the operation.
It should be noted that, to distinguish the target blood vessel from the target tumor, they may be rendered in different colors in the target fusion image; for example, the target tumor may be displayed in black and the target blood vessel in red.
Illustratively, fig. 10 is a schematic diagram of the enhanced display of the target vessel in the reference image. The image indicated by reference numeral 10 is an RGB image of the surgical site acquired by an RGB detector; the image indicated by reference numeral 11 is a thermal imaging image of the surgical site acquired by a thermal imaging detector, in which the highlighted white line is the target blood vessel; and the image indicated by reference numeral 12 shows the target blood vessel displayed with enhancement in the reference image, where the highlighted white line is again the target vessel. During surgery, this assists the doctor in accurately determining the position of the target vessel and avoids increased bleeding from accidentally cutting a larger vessel.
Since freshly shed blood may interfere with locating the position and contour of a vessel, in order to locate the target vessel more accurately, in some embodiments of the present disclosure the data processing unit 103 is preloaded with a target detection model. Before the target object is displayed with enhancement in the reference image according to its position and contour in the auxiliary image, the data processing unit 103 is further configured to determine the position and contour of the blood vessel in the auxiliary image through the target detection model.
Specifically, the flow of determining a target blood vessel in a thermal imaging image by the target detection model is shown in fig. 11. Data preparation 1101 collects thermal imaging images during procedures and labels each image with the location and contour of the blood vessels in it. Data preprocessing 1102 preprocesses the thermal imaging images, including resizing, graying, normalizing, and denoising, to ensure the quality and consistency of the model's input data. Model construction 1103 builds an initial target detection model, which may be any model suitable for detecting objects in images. Data set partitioning 1104 divides the data set into a training set and a test set, ensuring that the model is evaluated on unseen data. Model training 1105 trains the initial target detection model on the training set to obtain the target detection model; during training, the model gradually learns to extract image features that correctly determine the position and contour of blood vessels, with weights updated by back-propagation and an optimizer. Model evaluation 1106 evaluates the model on the test set, examining metrics such as accuracy, precision, and recall to ensure the model performs well on unseen data. Model application 1107 determines the position and contour of the target vessel in a thermal imaging image using the trained target detection model.
The specific process of determining the position and contour of the target blood vessel in the thermal imaging image through the target detection model is as follows. Features are extracted from the thermal imaging image by convolution operations; for example, to distinguish shed blood from blood vessels, whose shapes differ greatly and whose temperatures also differ, the extracted features include at least shape features and temperature features. A maximum pooling operation is applied to the resulting feature map to retain the most salient features and obtain a target feature map, which improves the network's noise resistance and reduces computation. The position and size of candidate bounding boxes for the target vessel are predicted from the target feature map, determining possible candidate regions. Each candidate region is then analyzed in more detail to determine whether it contains the target vessel. Finally, the contour of the target vessel is determined from the candidate regions confirmed to contain it, and the position and contour of the target vessel are output together with a confidence score indicating the certainty of the prediction.
Therefore, by introducing the target detection model into the data processing unit, the target blood vessel can be located more accurately.
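This disclosure does not fix a network architecture for the target detection model, so the following is a hypothetical minimal stand-in that mirrors the shape of the computation just described: convolutional feature extraction, max pooling to keep the most salient responses, and a head that predicts a bounding box and a confidence score per feature-map cell.

```python
import torch
import torch.nn as nn

class VesselDetector(nn.Module):
    """Hypothetical minimal stand-in for the disclosure's detection model."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(     # convolutional feature extraction
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),               # keep the strongest responses
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # per-cell prediction: 4 box parameters + 1 confidence score
        self.head = nn.Conv2d(32, 5, 1)

    def forward(self, thermal):            # thermal: (B, 1, H, W)
        return self.head(self.features(thermal))

# Example: one 256x256 single-channel thermal frame
out = VesselDetector()(torch.zeros(1, 1, 256, 256))
print(out.shape)  # torch.Size([1, 5, 64, 64])
```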
Intraoperative display is usually near-eye display, but during long operations near-eye display can cause visual fatigue in doctors and affect the operation. Thus, in some embodiments of the present disclosure, as shown in fig. 12, the display unit is a light field display 1041, specifically configured to display a magnified virtual image 1042 of the target fusion image at a preset position.
Light field display technology can generate a three-dimensional virtual image in front of the user's eyes by precisely controlling the direction and position of light rays. In near-eye display applications such as head-mounted display devices or augmented reality glasses, light field technology can be used to generate a magnified virtual image: the light field display screen relays the image of a conventional display panel through an optical lens, presenting a distant, magnified virtual image to the physician. Compared with traditional near-eye display equipment, this avoids visual fatigue for the doctor as well as the fogging phenomenon of near-eye equipment, effectively ensuring the continuity of the operation and reducing the doctor's workload during surgery.
Conventional AR displays information on the uppermost layer of the fused image, which does not respect the near-far relationships of the actual physical scene. Accordingly, in some embodiments of the present disclosure, the data processing unit 103 is specifically configured to display the target object with enhancement in the reference image according to the position and contour of the target object in the auxiliary image and the above-below positional relationship between the target object and the surgical instrument, obtaining a target fusion image that reflects that positional relationship.
When the auxiliary image is simply superimposed on the reference image, the target object is drawn on top of the reference image. As shown in fig. 13, fig. 13a shows conventional AR fusion: in the region indicated by reference numeral 1300, the blood vessel is drawn above the surgical instrument, whereas in reality the surgical instrument is above the vessel. In the present disclosure, when the auxiliary image is superimposed on the reference image, it is fused according to the above-below positional relationship between the surgical instrument and the target object, in keeping with the rules of perspective. In the target fusion image shown in fig. 13b, because the blood vessel lies below the surgical instrument, the part of the vessel in the region indicated by reference numeral 1301 is occluded by the instrument above it. That is, the digital information superimposed on the display conforms to the rules of perspective, and because the target fusion image encodes the depth relationship between the surgical instrument and the target object, it better matches the actual surgical scene.
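A minimal sketch of depth-ordered compositing consistent with the behavior described for fig. 13b; the per-pixel depth inputs and the RGBA overlay representation are assumptions for illustration, since the disclosure describes the above-below relationship but not how it is encoded.

```python
import numpy as np

def composite_with_occlusion(reference, overlay_rgba, overlay_depth, scene_depth):
    """Draw overlay pixels only where the target is nearer than the scene.

    reference    : (H, W, 3) reference image
    overlay_rgba : (H, W, 4) rendered target object (alpha > 0 where present)
    overlay_depth: (H, W) per-pixel depth of the target object
    scene_depth  : (H, W) per-pixel depth of the scene (e.g., the instrument)
    """
    out = reference.copy()
    # hide the overlay wherever the scene (instrument) lies in front of it
    visible = (overlay_rgba[..., 3] > 0) & (overlay_depth < scene_depth)
    out[visible] = overlay_rgba[..., :3][visible]
    return out
```

The depth comparison is what makes the vessel disappear behind the instrument in region 1301 instead of being painted over it, as in conventional AR fusion.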
The surgical assistance system provided by the present disclosure may be used to assist a doctor during an operation, and may also serve as a teaching device to assist a trainee in performing an operation. Thus, in some embodiments of the present disclosure, as shown in fig. 14, the surgical assistance system further includes a data communication unit 105, configured to connect with a remote device, transmit the content displayed by the display unit 104 to the remote device, and receive an annotated image returned by the remote device; the display unit 104 is further configured to display the annotated image.
Specifically, as shown in fig. 15, the remote device establishes a network connection with the surgical assistance system, through which the practicing doctor and the instructing teacher can hold video and voice calls, and so on. Over this connection, the images shown by the display unit, such as the RGB image or the fusion image, can be transmitted to the remote device in real time; the instructing teacher can annotate the received images, and the annotated images are returned to the display unit for display. As shown in fig. 16, dashed box 1601 marks a blood vessel annotated by the instructing teacher, and dashed box 1602 marks a tumor annotated by the instructing teacher. An inexperienced trainee can thus quickly determine the position of the target object from the instructor's annotations.
In order to facilitate control of the various parts of the surgical assistance system during surgery, in some embodiments of the application, as shown in fig. 17, the surgical assistance system further comprises an instruction receiving unit 106, configured to receive a user's control instruction; the data processing unit 103 is configured to parse the control instruction and implement the corresponding control.
The instruction receiving unit 106 may be a microphone for receiving voice control instructions, a camera for receiving gesture control instructions, or the like. Since the doctor's hands are usually occupied during surgery, the instruction receiving unit is preferably a microphone.
Specifically, taking voice control instructions as an example, the surgical assistance system of the present disclosure may implement at least the following controls: when the system is powered on, switching the system on and off by voice; controlling what the display unit shows by voice, such as displaying the fusion image, displaying the reference image, or displaying the distance between the surgical instrument and the target object; simultaneously zooming the reference image detector and the thermal imaging detector in or out by voice; controlling the light field display screen; and controlling the output mode of the distance information, such as text display or voice broadcast. The doctor can thus control the system by instructions, simplifying operation of the system during surgery.
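As an illustration of such instruction handling, the sketch below maps recognized phrases to system actions; the command wording and the display/zoom interface are hypothetical, since the disclosure lists the controllable functions but not the exact phrases or APIs.

```python
# Hypothetical command phrases mapped to actions on a system object that
# exposes display(mode) and zoom(factor); neither is specified here.
COMMANDS = {
    "show fusion image":    lambda s: s.display("fusion"),
    "show reference image": lambda s: s.display("reference"),
    "show distance":        lambda s: s.display("distance"),
    "zoom in":              lambda s: s.zoom(1.25),
    "zoom out":             lambda s: s.zoom(0.8),
}

def handle_voice_command(system, transcript):
    """Dispatch a recognized voice transcript to the matching control."""
    action = COMMANDS.get(transcript.strip().lower())
    if action is not None:
        action(system)
```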
Optionally, as shown in fig. 17, the surgical assistance system may also include a surgical lamp 107, configured to provide an illumination source during the surgical procedure. Preferably, the surgical lamp 107 is a cold-light illumination source.
The present disclosure also provides a surgical assistance method, as shown in fig. 18, comprising the following steps 1801 to 1804.
In step 1801, a reference image of the intraoperative surgical site is acquired in real time.
In step 1802, auxiliary images of an intraoperative surgical site are acquired in real time.
In step 1803, the target object is enhanced and displayed in the reference image according to the position and contour of the target object in the auxiliary image, so as to obtain a target fusion image.
In step 1804, the target fusion image is displayed.
In some embodiments of the present disclosure, the target object includes: a target tumor; the auxiliary image includes: an ultrasound image.
In some embodiments of the present disclosure, after the auxiliary image of the surgical site is acquired in real time during surgery, distance information between the target tumor and the surgical instrument is determined from the ultrasound image, and the distance information is displayed.
In some embodiments of the present disclosure, the target object further comprises: a target blood vessel; the auxiliary image further includes: and thermally imaging the image.
In some embodiments of the present disclosure, before the target object is enhanced and displayed in the reference image according to the position and the contour of the target object in the auxiliary image to obtain the target fusion image, the position and the contour of the blood vessel in the auxiliary image are determined through the target detection model.
In some embodiments of the present disclosure, displaying the target fusion image includes: and displaying the enlarged virtual image of the target fusion image at a preset position.
In some embodiments of the present disclosure, displaying the target object with enhancement in the reference image according to its position and contour in the auxiliary image to obtain the target fusion image includes: displaying the target object with enhancement in the reference image according to the position and contour of the target object in the auxiliary image and the above-below positional relationship between the target object and the surgical instrument, to obtain a target fusion image that includes that positional relationship.
In some embodiments of the present disclosure, the method further comprises: connecting to a remote device; transmitting the content displayed by the display unit to the remote device; receiving an annotated image, to which an annotation has been added, returned by the remote device; and displaying the annotated image.
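As a sketch only, with the network transport abstracted into send and receive callables (hypothetical stand-ins, not the disclosed data communication unit), the annotation round trip could be expressed as:

```python
import numpy as np

def remote_annotation_round_trip(displayed_frame: np.ndarray,
                                 send, receive) -> np.ndarray:
    """Send the currently displayed content to the remote device and
    return the annotated image it sends back, ready for display."""
    send(displayed_frame)   # transmit the display unit's content
    return receive()        # annotated image returned by the remote expert

# In-memory stubs standing in for a real network link.
outbox = []
annotated = np.ones((480, 640, 3), dtype=np.uint8) * 255
result = remote_annotation_round_trip(
    np.zeros((480, 640, 3), dtype=np.uint8),
    send=outbox.append,
    receive=lambda: annotated,
)
print(result.shape)   # the annotated image is then displayed
```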
In some embodiments of the present disclosure, the method further comprises: receiving a control instruction input by a user; and implementing the corresponding control based on the indication carried in the control instruction.
It should be noted that, for the description and technical effects of the embodiments of the surgical assistance method, reference may be made to the description of the surgical assistance system above; the same technical effects can be achieved, so the details are not repeated here to avoid repetition.
Referring to fig. 19, a block diagram of an electronic device according to an exemplary embodiment of the present disclosure is shown. In some examples, the electronic device may be at least one of a smart phone, a smart watch, a desktop computer, a laptop computer, a virtual reality terminal, an augmented reality terminal, and a wireless terminal. The electronic device has a communication function and can access a wired or wireless network. The electronic device may broadly refer to any one of a plurality of terminals, and those skilled in the art will recognize that the number of terminals may be greater or smaller. It can be understood that the electronic device performs the computing and processing operations of the technical solutions of the present disclosure, and the present disclosure is not limited in this respect.
As shown in fig. 19, the electronic device in the present disclosure may include one or more of the following components: a processor 1910, and a memory 1920.
Optionally, the processor 1910 uses various interfaces and lines to connect various parts of the overall electronic device, and performs various functions of the electronic device and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1920 and by invoking data stored in the memory 1920. Alternatively, the processor 1910 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), or programmable logic array (Programmable Logic Array, PLA). The processor 1910 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a neural network processing unit (Neural-network Processing Unit, NPU), a baseband chip, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed on the touch display screen; the NPU is used to implement artificial intelligence (Artificial Intelligence, AI) functions; and the baseband chip is used to handle wireless communication. It can be understood that the baseband chip may alternatively not be integrated into the processor 1910 and may instead be implemented by a separate chip.
The memory 1920 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). Optionally, the memory 1920 includes a non-transitory computer-readable storage medium. The memory 1920 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1920 may include a program storage area and a data storage area; the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the above method embodiments, and the like; the data storage area may store data created during use of the electronic device, and the like.
In addition, those skilled in the art will appreciate that the structure of the electronic device shown in the above figure does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than illustrated, may combine certain components, or may adopt a different arrangement of components. For example, the electronic device may further include a display screen, a camera assembly, a microphone, a speaker, a radio frequency circuit, an input unit, sensors (such as an acceleration sensor, an angular velocity sensor, or a light sensor), an audio circuit, a WiFi module, a power supply, a Bluetooth module, and the like, which are not described here.
The present disclosure also provides a computer-readable storage medium storing at least one instruction for execution by a processor to implement the surgical assistance method described in the various embodiments above.
The present disclosure also provides a computer program product including computer instructions stored in a computer-readable storage medium. A processor of an electronic device reads the computer instructions from the computer-readable storage medium and executes them, so that the electronic device implements the surgical assistance method described in the above embodiments.
An embodiment of the present application further provides a chip, including a processor and a communication interface coupled to the processor, where the processor is configured to run programs or instructions to implement the processes of the above surgical assistance method embodiments and achieve the same technical effects; to avoid repetition, the details are not described here again.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a chip system, or a system-on-chip.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, servers and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Those of skill in the art will appreciate that in one or more of the examples described above, the functions described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
It should be noted that: the embodiments described in the present disclosure may be arbitrarily combined without any collision.
The foregoing is merely illustrative of the present invention and is not intended to limit it; any variations or substitutions readily conceivable by a person skilled in the art shall fall within the scope of the present invention.

Claims (10)

1. A surgical assistance system, the system comprising: a reference image detector, an auxiliary image detector, a data processing unit, and a display unit, wherein the data processing unit is preloaded with a target detection model, and the auxiliary image detector comprises: a thermal imaging detector;
The reference image detector is configured to acquire a reference image of an intraoperative surgical site in real time;
The auxiliary image detector is configured to acquire an auxiliary image of the surgical site in real time, the auxiliary image comprising: a thermal imaging image;
The data processing unit is configured to determine the position and contour of a target blood vessel in the thermal imaging image according to shape features and temperature features through the target detection model, and to enhance and display the target object in the reference image according to the position and contour of the target object in the auxiliary image, so as to obtain a target fusion image, wherein the target object comprises: the target blood vessel;
The display unit is configured to display a target fusion image including the target blood vessel.
2. The system of claim 1, wherein the auxiliary image detector further comprises: an ultrasonic detector; correspondingly, the target object further comprises: a target tumor; the auxiliary image further includes: an ultrasound image.
3. The system of claim 2, wherein the data processing unit is further configured to:
determining distance information between the target tumor and a surgical instrument according to the ultrasonic image;
the display unit is further configured to display the distance information.
4. A system according to any one of claims 1 to 3, wherein the display unit is a light field display screen;
the light field display screen is specifically configured to display an enlarged virtual image of the target fusion image at a preset position.
5. A system according to any one of claims 1 to 3, wherein the data processing unit is specifically configured to enhance and display the target object in the reference image according to the position and contour of the target object in the auxiliary image and the up-down positional relationship between the target object and the surgical instrument, so as to obtain a target fusion image comprising the up-down positional relationship between the target object and the surgical instrument.
6. A system according to any one of claims 1 to 3, further comprising: a data communication unit;
The data communication unit is configured to connect to a remote device;
transmit the content displayed by the display unit to the remote device; and
receive an annotated image, to which an annotation has been added, returned by the remote device;
the display unit is further configured to display the annotated image.
7. A system according to any one of claims 1 to 3, further comprising: an instruction receiving unit;
the instruction receiving unit is configured to receive a control instruction input by a user;
the data processing unit is further configured to implement a corresponding control based on the indication in the control instruction.
8. A method of surgical assistance, the method comprising:
Acquiring a reference image of an intraoperative operation site in real time;
acquiring an auxiliary image of the surgical site in real time, the auxiliary image comprising: a thermal imaging image;
determining the position and contour of a blood vessel in the thermal imaging image according to shape features and temperature features through a target detection model;
enhancing and displaying the target object in the reference image according to the position and contour of the target object in the auxiliary image, to obtain a target fusion image, wherein the target object comprises: a target blood vessel; and
displaying the target fusion image comprising the target blood vessel.
9. An electronic device comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the surgical assistance method of claim 8.
10. A computer-readable storage medium storing at least one instruction for execution by a processor to implement the surgical assistance method of claim 8.
CN202410296468.3A 2024-03-15 Surgical assistance system, method, electronic device, and computer storage medium Active CN118236174B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410296468.3A CN118236174B (en) 2024-03-15 Surgical assistance system, method, electronic device, and computer storage medium

Publications (2)

Publication Number Publication Date
CN118236174A CN118236174A (en) 2024-06-25
CN118236174B true CN118236174B (en) 2024-11-15

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114648543A (en) * 2022-03-17 2022-06-21 青岛海信医疗设备股份有限公司 Remote ultrasonic image annotation method, terminal device and storage medium
CN114652443A (en) * 2022-03-08 2022-06-24 深圳高性能医疗器械国家研究院有限公司 Ultrasonic operation navigation system and method, storage medium and device
CN115375595A (en) * 2022-07-04 2022-11-22 武汉联影智融医疗科技有限公司 Image fusion method, device, system, computer equipment and storage medium
CN115953377A (en) * 2022-12-28 2023-04-11 中国科学院苏州生物医学工程技术研究所 Digestive tract ultrasonic endoscope image fusion method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant