CN116563154A - Medical imaging method and system

Info

Publication number: CN116563154A
Application number: CN202310513424.7A
Authority: CN (China)
Prior art keywords: model, image, style, output, noise
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 臧雯芸
Current Assignee: Shanghai United Imaging Healthcare Co Ltd
Original Assignee: Shanghai United Imaging Healthcare Co Ltd
Application filed by Shanghai United Imaging Healthcare Co Ltd
Priority to CN202310513424.7A
Publication of CN116563154A

Classifications

    • G06T 5/80: Image enhancement or restoration; geometric correction
    • A61B 6/5211: Devices using data or image processing specially adapted for radiation diagnosis, involving processing of medical diagnostic data
    • A61B 6/5258: Devices using data or image processing specially adapted for radiation diagnosis, involving detection or reduction of artifacts or noise
    • G06T 5/10: Image enhancement or restoration using non-spatial domain filtering
    • G06T 5/70: Denoising; smoothing
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/20056: Discrete and fast Fourier transform [DFT, FFT]
    • G06T 2207/20084: Artificial neural networks [ANN]


Abstract

Embodiments of the present disclosure provide a medical imaging method and system. The method includes: acquiring imaging data; acquiring a first model and a second model, wherein the image processing styles of the first model and the second model are different; determining a model style difference based on the first model and the second model; determining a first output image based on the first model and the imaging data; determining a second input image based on the first output image and the model style difference; and determining a target image based on the second model and the second input image.

Description

Medical imaging method and system
Technical Field
The present disclosure relates to the field of medical imaging, and in particular, to a medical imaging method and system.
Background
Most existing medical image reconstruction techniques use filtered back projection to reconstruct images. During filtering, the choice of filter kernel determines the style of the generated tomographic image. Taking a CT apparatus as an example, after a CT tomographic image is generated, AI algorithms are typically required to process it further: for example, a first AI algorithm removes streak artifacts from the CT tomographic image, and a second AI algorithm removes metal artifacts. However, each AI algorithm may support only CT tomographic images of a particular style, so existing AI algorithms are used in isolation.
Therefore, it is desirable to provide a medical imaging method and system that chain at least two AI algorithms to process the same imaging data in sequence.
Disclosure of Invention
One of the embodiments of the present specification provides a medical imaging method, the method comprising: acquiring imaging data; acquiring a first model and a second model, wherein the first model and the second model differ in image processing style; determining a model style difference based on the first model and the second model; determining a first output image based on the first model and the imaging data; determining a second input image based on the first output image and the model style difference; and determining a target image based on the second model and the second input image. A minimal sketch of this flow is given below.
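The following Python sketch illustrates how the six steps chain together. The callables and their names (first_model, second_model, compute_style_difference, apply_style_difference) are hypothetical placeholders introduced for illustration only; they are not part of the disclosure.

```python
def medical_imaging_pipeline(imaging_data, first_model, second_model,
                             compute_style_difference, apply_style_difference):
    """Hypothetical sketch of the claimed six-step flow.

    first_model / second_model: callables mapping an image to a processed image
    (e.g., streak-artifact removal, metal-artifact removal).
    compute_style_difference: callable returning the model style difference.
    apply_style_difference: callable superimposing that difference on an image.
    """
    # Steps 1-2: imaging data and the two style-specific models are assumed given.
    # Step 3: determine the model style difference.
    style_difference = compute_style_difference(first_model, second_model)

    # Step 4: first output image from the first model.
    first_output = first_model(imaging_data)

    # Step 5: second input image = first output adjusted by the style difference.
    second_input = apply_style_difference(first_output, style_difference)

    # Step 6: target image from the second model.
    target_image = second_model(second_input)
    return target_image
```

In practice, first_model and second_model would be the style-specific AI algorithms, and apply_style_difference would implement the superposition described in the sections that follow.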
In some embodiments, the model style difference includes a sharpness difference, which manifests as the first filter function corresponding to the first model being different from the second filter function corresponding to the second model.
In some embodiments, the acquiring imaging data comprises: the imaging data is determined by at least one filtering kernel based on data acquired by a detector of the medical device.
In some embodiments, the model style difference includes the filter kernel suppression rate between the first filter function and the second filter function.
In some embodiments, the determining a second input image based on the first output image and the model style difference comprises: obtaining a first output noise image and a first output noise-removed image based on the first output image through a noise reduction model; converting the first output noise image to the frequency domain through a fast Fourier transform to obtain noise generation data; convolving the noise generation data with the filter kernel suppression rate to obtain smoothed noise generation data; converting the smoothed noise generation data back to the image domain through an inverse fast Fourier transform to obtain a smoothed noise image; and adding the smoothed noise image to the first output noise-removed image, with the result serving as the second input image.
In some embodiments, the model style difference is obtained by: processing the imaging data using a first filter function to obtain a first-style image; processing the imaging data using a second filter function to obtain a second-style image; and taking the difference between the first-style image and the second-style image to obtain the model style difference.
In some embodiments, the method further comprises: determining the first output image based on the first model and the first style image; determining the second input image based on the first output image and the model style differences; the target image is determined based on the second model and the second input image.
One of the embodiments of the present specification provides a medical imaging system, the system comprising: the image acquisition module is used for acquiring imaging data; the model acquisition module is used for acquiring a first model and a second model, wherein the image processing styles of the first model and the second model are different; a difference determination module for determining model style differences based on the first model and the second model; a first processing module that determines a first output image based on the first model and the imaging data; a second processing module for determining a second input image based on the first output image and the model style differences; and a third processing module for determining a target image based on the second model and the second input image.
One of the embodiments of the present specification provides a medical imaging apparatus, the apparatus comprising: at least one storage medium storing computer instructions; at least one processor executing the computer instructions to implement the medical imaging method described above.
One of the embodiments of the present specification provides a computer-readable storage medium storing computer instructions that, when read by a computer, perform a medical imaging method as described above.
The beneficial effects of the embodiments of the present specification include at least the following. (1) By determining the image difference corresponding to two styles and superimposing the image difference on the first output image of the current model (for example, the first model), the input image of the next model (for example, the second model) is obtained, so the same CT data can be processed successively by at least two AI algorithms. (2) The difference extraction model is generated by training with a machine learning algorithm, so relationships among data of various dimensions (such as the filter kernel functions corresponding to the two styles, X-ray energy spectrum information, CT system hardware information, at least one of CT system software information and the reconstruction method, and the image difference) can be mined, improving the accuracy of the determined image difference. (3) The first output noise image is converted to the frequency domain through a fast Fourier transform to obtain noise generation data; the noise generation data is convolved with the filter kernel suppression rate to obtain smoothed noise generation data, so that the noise difference relative to the first output image is determined; and the smoothed noise image is added to the first output noise-removed image, so a more accurate target image can be obtained. (4) The image difference corresponding to the first style and the second style can be determined accurately and quickly from the filter coefficients of the first filter kernel function and the filter coefficients of the second filter kernel function.
Drawings
The present specification will be further elucidated by way of example embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in the drawings, like numerals represent like structures, wherein:
FIG. 1 is an application scenario diagram of an exemplary medical imaging system shown according to some embodiments of the present description;
FIG. 2 is a block diagram of an exemplary medical imaging device shown in accordance with some embodiments of the present description;
FIG. 3 is a flow chart of an exemplary medical imaging method shown in accordance with some embodiments of the present description;
FIG. 4 is a flow chart illustrating determining a second input image according to some embodiments of the present description;
FIG. 5 is a schematic diagram of multiple models for continuous processing of the same CT data, according to some embodiments of the present disclosure.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings that are required to be used in the description of the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some examples or embodiments of the present specification, and it is possible for those of ordinary skill in the art to apply the present specification to other similar situations according to the drawings without inventive effort. Unless otherwise apparent from the context of the language or otherwise specified, like reference numerals in the figures refer to like structures or operations.
It will be appreciated that "system," "apparatus," "unit" and/or "module" as used herein is one method for distinguishing between different components, elements, parts, portions or assemblies at different levels. However, if other words can achieve the same purpose, the words can be replaced by other expressions.
As used in this specification and the claims, the terms "a," "an," and/or "the" are not limited to the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
A flowchart is used in this specification to describe the operations performed by the system according to embodiments of the present specification. It should be appreciated that the preceding or following operations are not necessarily performed in order precisely. Rather, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to or removed from these processes.
Most existing medical image reconstruction techniques use filtered back projection to reconstruct images. During filtering, the choice of filter kernel determines the style of the generated tomographic image. Taking a CT apparatus as an example, after a CT tomographic image is generated, AI algorithms are typically required to process it further: for example, a first AI algorithm removes streak artifacts from the CT tomographic image, and a second AI algorithm removes metal artifacts. However, each AI algorithm may support only CT tomographic images of a particular style, so existing AI algorithms are used in isolation. The present specification therefore describes a medical imaging method and system, using a CT apparatus as an example, that allow at least two AI algorithms to process the same imaging data in succession and to obtain a relatively accurate target image quickly. It should be noted that the method provided in this specification may be applied to, but is not limited to, computed tomography apparatuses, magnetic resonance imaging apparatuses, positron emission tomography apparatuses, and the like.
Fig. 1 is a schematic illustration of an application scenario of an exemplary medical imaging system shown in accordance with some embodiments of the present description. In some embodiments, as shown in fig. 1, medical imaging system 100 may include a processing device 110, a network 120, a user terminal 130, a storage device 140, and a CT device 150.
The processing device 110 may be used to process data from at least one component of the medical imaging system 100 or an external data source (e.g., a cloud data center). For example, the processing device 110 may acquire imaging data; acquiring a first model and a second model, wherein the image processing styles of the first model and the second model are different; determining model style differences based on the first model and the second model; determining a first output image based on the first model and the imaging data; determining a second input image based on the first output image and the model style differences; a target image is determined based on the second model and the second input image.
In some embodiments, the processing device 110 may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an Application Specific Instruction-set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof. In some embodiments, the processing device 110 may be a single server or a group of servers. In some embodiments, the processing device 110 may be local or remote. In some embodiments, the processing device 110 may be implemented on a cloud platform. For example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-layer cloud, or the like, or any combination thereof.
Network 120 may include any suitable network capable of facilitating the exchange of information and/or data by medical imaging system 100. In some embodiments, information and/or data may be exchanged between components of the medical imaging system 100 (e.g., the processing device 110, the user terminal 130, the storage device 140, and/or the CT device 150) via the network 120. For example, the processing device 110 may establish a connection with the user terminal 130 and/or the CT device 150 over the network 120. In some embodiments, network 120 may be any one or more of a wired network or a wireless network. In some embodiments, network 120 may include one or more network access points. For example, network 120 may include wired or wireless network access points, such as base stations and/or network switching points, through which one or more components of medical imaging system 100 may connect to network 120 to exchange data and/or information.
The user terminal 130 may be a terminal device used by a user (e.g., a doctor, a nurse, etc.). In some embodiments, the user terminal 130 may include a cell phone, tablet, computer, or the like. In some embodiments, the user terminal 130 may include a display component (e.g., a display screen), an interaction component (e.g., a mouse, keyboard, etc.), and so forth. In some embodiments, the user terminal 130 may interact with at least one component of the medical imaging system 100 or an external data source (e.g., a cloud data center). For example, the user terminal 130 may receive the second output image from the processing device 110.
Storage device 140 may be used to store data, instructions, and/or any other information. In some embodiments, the storage device 140 may store data and/or information acquired from at least one component of the medical imaging system 100 or an external data source. For example, the storage device 140 may store a first output image, a second input image, and a target image. In some embodiments, the storage device 140 may store data and/or instructions that the processing device 110 may execute or use to accomplish the exemplary methods described in this specification. For example, the storage device 140 may store medical imaging instructions for execution by the processing device 110.
In some embodiments, the storage device 140 may include mass storage, removable storage, or the like, or any combination thereof. In some embodiments, storage device 140 may be implemented on a cloud platform. For example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-layer cloud, or the like, or any combination thereof. In some embodiments, the storage device 140 may be integrated into the processing device 110 and/or the user terminal 130.
CT (Computed Tomography) device 150 can be used to scan a target object to obtain scan data for the target object. The target object may be biological or non-biological. For example, the target object may be a patient, an artificial object, or the like. The target object may comprise a specific part, organ, tissue, and/or body part of the patient. For example only, the scan object may include a head, brain, neck, body, shoulder, arm, chest, heart, stomach, blood vessels, soft tissue, knee, foot, or the like, or a combination thereof. The CT device 150 may include a gantry 151, a detector 152, a radiation source 153, and a scan bed 154. The detector 152 and the radiation source 153 may be mounted opposite each other on the gantry 151. The target object may be placed on the scan bed 154 and moved into the detection channel of the CT device 150. For ease of illustration, a reference coordinate system is introduced, which may include an X-axis, a Y-axis, and a Z-axis. The Z-axis refers to the direction in which the target object is moved into and/or out of the detection channel of the CT device 150. The X-axis and Y-axis may form a plane perpendicular to the Z-axis. The radiation source 153 may emit X-rays to scan the target object located on the scan bed 154. The target object may be a living body (e.g., a patient, an animal) or a non-living body (e.g., a manikin, a water phantom). The detector 152 may detect radiation (e.g., X-rays) emitted by the radiation source 153. In some embodiments, the detector 152 may include a plurality of detector units. A detector unit may comprise a scintillation detector (e.g., a cesium iodide detector) or a gas detector. The detector units may be arranged in a single row or in multiple rows. It should be noted that, in suitable scenarios, the CT device 150 may be replaced with other medical devices, including but not limited to a computed tomography apparatus, a magnetic resonance imaging apparatus, a positron emission tomography apparatus, and the like.
It should be noted that the above description of the medical imaging system 100 is provided for illustrative purposes only and is not intended to limit the scope of the present description. Many modifications and variations will be apparent to those of ordinary skill in the art in light of the present description. For example, the medical imaging system 100 may also include one or more other components, or one or more of the components described above may be omitted. However, such changes and modifications do not depart from the scope of the present specification.
Fig. 2 is a block diagram of an exemplary medical imaging system according to some embodiments of the present description. In some embodiments, the medical imaging system 200 may be implemented by the processing device 110. In some embodiments, as shown in fig. 2, the medical imaging system 200 may include an image acquisition module 210, a model acquisition module 220, a difference determination module 230, a first processing module 240, a second processing module 250, and a third processing module 260.
The image acquisition module 210 may be used to acquire imaging data.
The model acquisition module 220 may be configured to acquire a first model and a second model, wherein the first model and the second model differ in image processing style.
The difference determination module 230 may be configured to determine the model style difference based on the first model and the second model.
The first processing module 240 may determine a first output image based on the first model and the imaging data.
The second processing module 250 may be configured to determine a second input image based on the first output image and the model style differences.
The third processing module 260 may be configured to determine a target image based on the second model and the second input image.
It should be noted that the above description of the medical imaging system 200 and its modules is for descriptive convenience only and is not intended to limit the present disclosure to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the principles of the system, various modules may be combined arbitrarily or a subsystem may be constructed in connection with other modules without departing from such principles. In some embodiments, the image acquisition module 210, the model acquisition module 220, the difference determination module 230, the first processing module 240, the second processing module 250, and the third processing module 260 disclosed in fig. 2 may be different modules in one system, or one module may implement the functions of two or more modules. For example, the modules may share one storage module, or each module may have its own storage module. Such variations are within the scope of the present description.
Fig. 3 is a flow chart of an exemplary medical imaging method shown in accordance with some embodiments of the present description. In some embodiments, the process 300 may be performed by the medical imaging system 100 (e.g., the processing device 110) or the medical imaging system 200. For example, the flow 300 may be stored in the storage device 140 in the form of a program or instructions that, when executed by the processing device 110 or the medical imaging system 200, may implement the flow 300. The operational schematic of the flow 300 presented below is illustrative. In some embodiments, the process may be accomplished with one or more additional operations not described above and/or one or more operations not discussed. In addition, the order in which the operations of flow 300 are illustrated in FIG. 3 and described below is not limiting. As shown in fig. 3, the process 300 may include the following steps.
At step 310, imaging data is acquired. In some embodiments, step 310 may be performed by image acquisition module 210.
The imaging data may be an image obtained by reconstruction, such as a CT tomographic image, an MRI image, or the like.
In some embodiments, the image acquisition module 210 may reconstruct the scan data acquired by the CT device through a reconstruction algorithm corresponding to the first style, to obtain the first-style image.
In some embodiments, the image acquisition module 210 may determine the imaging data by at least one filtering kernel based on data acquired by a detector of the medical device. For example, the image acquisition module 210 may reconstruct data acquired by a detector of the medical device through a filter kernel corresponding to a first style, generating a first-style image as imaging data.
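For illustration only, the sketch below uses scikit-image's filtered back projection (iradon) as a stand-in for the reconstruction step, with the filter_name argument playing the role of the style-specific filter kernel. The two kernels shown ("ramp" as a sharper kernel, "hann" as a smoother one) are assumptions made for the demonstration, not the filter kernels used by the disclosed system.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Simulate detector data (a sinogram) from a phantom.
phantom = shepp_logan_phantom()
angles = np.linspace(0.0, 180.0, max(phantom.shape), endpoint=False)
sinogram = radon(phantom, theta=angles)

# Reconstruct the same raw data with two different filter kernels,
# producing two differently styled images.
sharp_style_image = iradon(sinogram, theta=angles, filter_name="ramp")   # sharper style
smooth_style_image = iradon(sinogram, theta=angles, filter_name="hann")  # smoother style
```

Under this assumption, the two reconstructions play the roles of the first-style image and second-style image referred to in the text.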
Step 320, a first model and a second model are obtained. In some embodiments, step 320 may be performed by model acquisition module 220.
The first model and the second model may be used to process the imaging data or images. The first model and the second model may be machine learning models, such as convolutional neural network models. In some embodiments, the first model and the second model may be preset image processing models and may be acquired over a network. In some embodiments, the image processing styles of the first model and the second model are different. For example, the first model may correspond to a first style and the second model may correspond to a second style. In some embodiments, the style may include sharpness. Sharpness is related to the high-frequency components of a CT image: a sharp image reduces blurring by boosting high-frequency components, which enhances edges but also increases noise, whereas a smoothed image filters out high-frequency components, which reduces noise but leaves the image somewhat blurred. The style may thus be expressed as the sharpness or blurriness of the image. In some embodiments, a style may correspond to at least one filter function. For example, the first style may correspond to a high-pass filter function and the second style may correspond to a low-pass filter function. In some embodiments, the first-style image is fed to a first model, which performs image processing on the first-style image based on a first AI algorithm. Different models may process CT tomographic images of the corresponding styles differently, based on different AI algorithms. For example, the first model may remove streak artifacts from a first-style image based on a first AI algorithm, and the second model may remove metal artifacts from a second-style image based on a second AI algorithm.
Step 330, determining model style differences based on the first model and the second model. In some embodiments, step 330 may be performed by the variance determination module 230.
The model style difference may characterize the difference between the processing effects of the two models on the imaging data. For example, the model style difference may be embodied as the difference between the CT tomographic image corresponding to the first style and the CT tomographic image corresponding to the second style, and may include the difference in noise between the two.
In some embodiments, the model style differences may include sharpness differences that manifest themselves as first filter functions corresponding to the first model being different from second filter functions corresponding to the second model. For example, a first style may correspond to a high-pass filter function as a first filter function and a second style may correspond to a low-pass filter function as a second filter function, the first filter function and the second filter function being different.
In some embodiments, the model style difference may include the filter kernel suppression rate between the first filter function and the second filter function. The filter kernel suppression rate may characterize the style difference between the models and may be calculated by existing techniques from the filter coefficients of the first filter function and the second filter function; for example, it may be the ratio between the two functions. The filter kernel suppression rate is generally a value between 0 and 1; a value greater than 0 and less than 1 at a given frequency indicates a difference in image style at that frequency, and the further the value is from 1, the larger the difference between the two functions.
In some embodiments, the difference determination module 230 may determine the filter kernel suppression rate based on the filter coefficients of the first filter kernel function and the filter coefficients of the second filter kernel function.
For example, the difference determination module 230 may determine the filter kernel suppression rate based on the following formula:
Suppression Rate = SOFT Filter / SHARP Filter;
where Suppression Rate is the filter kernel suppression rate, SHARP Filter is the filter coefficient of the first filter kernel function, and SOFT Filter is the filter coefficient of the second filter kernel function.
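A minimal numpy sketch of this ratio is shown below, assuming the two kernels are given as arrays of frequency-domain filter coefficients sampled on the same grid. The small epsilon and the clipping to [0, 1] are added assumptions to keep the sketch numerically safe, not requirements of the formula.

```python
import numpy as np

def filter_kernel_suppression_rate(sharp_filter, soft_filter, eps=1e-12):
    """Suppression Rate = SOFT Filter / SHARP Filter, element-wise per frequency."""
    sharp = np.asarray(sharp_filter, dtype=float)
    soft = np.asarray(soft_filter, dtype=float)
    rate = soft / (sharp + eps)       # ratio of the two filter coefficients
    return np.clip(rate, 0.0, 1.0)    # keep within the 0..1 range described above

# Example with toy ramp-like (sharp) and apodized (soft) kernels on 5 frequencies.
freqs = np.linspace(0.0, 1.0, 5)
sharp_kernel = freqs                                            # ramp filter coefficients
soft_kernel = freqs * np.hanning(2 * len(freqs))[len(freqs):]   # hann-apodized ramp
print(filter_kernel_suppression_rate(sharp_kernel, soft_kernel))
```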
In some embodiments, the filter coefficients of the first filter kernel function and the filter coefficients of the second filter kernel function may be used to more accurately and quickly determine the corresponding image differences between the first style and the second style.
In some embodiments, the sharpness of the first-style image may be greater than that of the second-style image corresponding to the second style. Because the high-frequency information in a lower-sharpness image has already been filtered out, a higher-sharpness image cannot be recovered from it by further processing; therefore, the sharpness of the first-style image needs to be greater than that of the second-style image corresponding to the second style.
In some embodiments, the difference determination module 230 may determine, in any suitable manner, the image difference based on the first style corresponding to the first-style image and the second style, and use it as the model style difference.
For example, the difference determination module 230 may determine the image difference based on the difference between the second style and the first style. For example only, the difference determination module 230 may take the difference, in the image domain, between the sample first-style image corresponding to the first style and the sample second-style image corresponding to the second style, and use the resulting image difference as the model style difference. As another example, the difference determination module 230 may take the difference between the noise of the sample first-style image corresponding to the first style and the noise of the sample second-style image corresponding to the second style, and use the resulting image difference as the model style difference.
In some embodiments, the image difference may be determined more quickly based on the difference between the second style and the first style. For example, the difference determination module 230 may process the imaging data using the first filter function to obtain a first-style image, process the imaging data using the second filter function to obtain a second-style image, and take the difference between the first-style image and the second-style image to obtain the model style difference, as sketched below.
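A sketch of this image-domain differencing, under the assumption that the two filter functions are available as frequency-domain coefficient arrays matching the image's FFT grid (a simplification of the reconstruction-side filtering described in the text); the sign convention of the subtraction is also an illustrative choice.

```python
import numpy as np

def style_image(imaging_data, filter_coeffs):
    """Apply a frequency-domain filter function to an image (simplified stand-in
    for reconstructing/filtering the data with a style-specific kernel)."""
    spectrum = np.fft.fft2(imaging_data)
    return np.real(np.fft.ifft2(spectrum * filter_coeffs))

def model_style_difference(imaging_data, first_filter, second_filter):
    first_style = style_image(imaging_data, first_filter)    # e.g. sharper style
    second_style = style_image(imaging_data, second_filter)  # e.g. smoother style
    return second_style - first_style                        # image-domain difference
```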
For another example, the difference determination module 230 may obtain a first output noise image and a first output noise-removed image based on the first output image through a noise reduction model; convert the first output noise image to the frequency domain through a fast Fourier transform to obtain noise generation data; convolve the noise generation data with the filter kernel suppression rate to obtain smoothed noise generation data; convert the smoothed noise generation data back to the image domain through an inverse fast Fourier transform to obtain a smoothed noise image; and use the result of adding the smoothed noise image to the first output noise-removed image as the second input image. For further description of determining the second input image, reference may be made to fig. 4 and its related description, which are not repeated here.
For another example, the difference determination module 230 may determine the image difference through a difference extraction model, based on at least one of the filter kernel functions corresponding to the two styles, X-ray energy spectrum information, CT system hardware information, CT system software information, and the reconstruction method. For example only, the difference determination module 230 may determine the image difference through the difference extraction model based on at least one of the filter kernel function, X-ray energy spectrum information, CT system hardware information, software information, and reconstruction method corresponding to the first style and at least one of the filter kernel function, X-ray energy spectrum information, CT system hardware information, software information, and reconstruction method corresponding to the second style.
The X-ray energy spectrum information may include at least X-ray wavelength and/or frequency.
The CT system hardware information may be information related to the hardware of a CT device (e.g., CT device 150). For example, the CT system hardware information may include information related to an X-ray generator, a filter, a collimator, a detector, and/or an analog-to-digital converter of the CT device. For example only, the CT system hardware information may include at least one or more of the number of detector units per row, the number of effective channels per detector layer, the target material, the tube voltage, the tube current, the exposure time, the effective bulb heat capacity, the effective high-voltage generator power, and the like. For another example, CT system hardware information may also include detector type information, such as Photon Counting Detectors (PCDs), Energy Integrating Detectors (EIDs), and the like. The detector type information may further include the scanning mode of each detector, such as the macro mode (macro), high-resolution mode (HR), or ultra-high-resolution mode (UHR) of a photon counting detector. For another example, the CT system hardware information may also include scan mode information, such as axial (Axial), helical (Spiral), topogram (Topo), and the like.
The CT system software information may include at least information on the filters applied in going from the imaging data to the image domain. For example, the CT system software information may include the filter kernels in a back projection reconstruction algorithm, or the filters in a ring-artifact removal (de-ringing) algorithm, etc.
The reconstruction method may include at least a filtered back projection algorithm (FBP), a model-based iterative reconstruction algorithm (MBIR), a deep-learning-based reconstruction algorithm, a reconstruction algorithm fusing deep learning and iteration, and the like.
In some embodiments, the difference determining module 230 may obtain, through the processing device 110, the user terminal 130, the storage device 140, the CT device 150, and/or the external data source, at least one of a filter kernel corresponding to the first style, X-ray spectrum information, CT system hardware information, software information, and a reconstruction method, and at least one of a filter kernel corresponding to the second style, X-ray spectrum information, CT system hardware information, software information, and a reconstruction method.
In some embodiments, the difference determination module 230 may determine the filter kernel difference based on a first filter kernel corresponding to the first style and a second filter kernel corresponding to the second style, may determine the X-ray energy spectrum difference based on X-ray energy spectrum information corresponding to the first style and X-ray energy spectrum information corresponding to the second style, may determine the CT system hardware difference based on CT system hardware information corresponding to the first style and CT system hardware information corresponding to the second style, may determine the CT system software difference based on software information corresponding to the first style and software information corresponding to the second style, and may determine the reconstruction method difference based on a reconstruction method corresponding to the first style and a reconstruction method corresponding to the second style.
In some embodiments, step 330 may be implemented by a difference extraction model. The difference extraction model may be a machine learning model for determining an image difference based on at least one of a filter kernel function, X-ray energy spectrum information, CT system hardware information, CT system software information, and a reconstruction method. The input of the difference extraction model may include at least one of the filter kernel function, X-ray energy spectrum information, CT system hardware information, software information, and reconstruction method corresponding to each of the two styles; for example, the input may include at least one of a filter kernel difference, an X-ray energy spectrum difference, a CT system hardware difference, a CT system software difference, and/or a reconstruction method difference.
In some embodiments, the difference determination module 230 may train an initial difference extraction model with at least two first training samples to generate a trained difference extraction model. A first training sample may include at least one of a filter kernel difference, an X-ray energy spectrum difference, a CT system hardware difference, a CT system software difference, and/or a reconstruction method difference between a first sample style and a second sample style, and its label may include the image difference between the first sample style and the second sample style, which may be determined from the difference between the image corresponding to the first sample style and the image corresponding to the second sample style. The difference determination module 230 may train the initial difference extraction model with the at least two first training samples and their labels until the model meets a preset condition, thereby obtaining the trained difference extraction model. In some embodiments, the difference determination module 230 may iteratively update the parameters of the initial difference extraction model using the at least two first training samples until the preset condition is met; the preset condition may be that the loss function converges, that the loss value falls below a preset value, or that the number of iterations exceeds a preset number, etc. A toy training-loop sketch follows.
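For illustration, here is a minimal PyTorch-style training loop for such a difference extraction model. The toy MLP architecture, the flat feature encoding of the kernel/spectrum/hardware/software/reconstruction differences, and the stopping threshold are all assumptions made for the sketch, not the disclosed design.

```python
import torch
from torch import nn

class DifferenceExtractionModel(nn.Module):
    """Toy stand-in: maps an encoded difference-feature vector to an image difference."""
    def __init__(self, feature_dim=16, image_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 256), nn.ReLU(),
            nn.Linear(256, image_pixels),
        )

    def forward(self, features):
        return self.net(features)

def train_difference_extraction(samples, labels, epochs=100, loss_threshold=1e-3):
    """samples: (N, feature_dim) encoded style differences; labels: (N, pixels) image differences."""
    model = DifferenceExtractionModel(samples.shape[1], labels.shape[1])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.MSELoss()
    for epoch in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(samples), labels)
        loss.backward()
        optimizer.step()
        if loss.item() < loss_threshold:   # preset condition: loss below a preset value
            break
    return model
```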
In some embodiments, the difference extraction model may include, but is not limited to, a Neural Network (NN), a Convolutional Neural Network (CNN), a Deep Neural Network (DNN), a Recurrent Neural Network (RNN), etc., or any combination thereof, e.g., the difference extraction model may be a model formed by a combination of a convolutional neural network and a deep neural network.
In some embodiments, after training is completed, the difference determining module 230 may input at least one of a filter kernel corresponding to the first style, X-ray energy spectrum information, CT system hardware information, software information, and a reconstruction method, and at least one of a filter kernel corresponding to the second style, X-ray energy spectrum information, CT system hardware information, software information, and a reconstruction method to the difference extraction model, where the difference extraction model outputs image differences corresponding to the first style and the second style.
In some embodiments, after training is completed, the difference determination module 230 may determine an image difference through the trained difference extraction model, superimpose the image difference on the first output image to obtain a superimposed image, use the superimposed image as the input of the second model to obtain a second output image, and then evaluate, through an effect evaluation model, whether the second output image meets the image requirements and whether the difference extraction model needs to be adjusted again. The effect evaluation model may be a machine learning model for evaluating whether the second output image meets the image requirements; its input may include the second output image output by the second model, and its output may include an evaluation result indicating whether the second output image meets the image requirements.
In some embodiments, the image requirements may include at least one of noise requirements, color requirements, shading requirements, texture requirements, and the like.
In some embodiments, the difference determination module 230 may train the difference extraction model again when the effect evaluation model determines that the second output image does not meet the image requirements.
In some embodiments, generating the difference extraction model by training with a machine learning algorithm makes it possible to mine the relationships among data of various dimensions (such as the filter kernel functions corresponding to the two styles, X-ray energy spectrum information, CT system hardware information, at least one of CT system software information and the reconstruction method, and the image difference), improving the accuracy of the determined image difference.
Step 340, determining a first output image based on the first model and the imaging data. In some embodiments, step 340 may be performed by the first processing module 240.
The first output image may be a first style image. In some embodiments, the first processing module 240 may input the first style image into a first model that generates a first output image after image processing the first style image based on a first AI algorithm. For example, the first processing module 240 may input the first style image into a first model, and the first model may perform a de-striping process on the first style image based on a first AI algorithm to obtain a de-striped first style image.
A second input image is determined based on the first output image and the model style differences, step 350. In some embodiments, step 350 may be performed by the second processing module 250.
The second input image may be an input image of a second model. For example, the second input image may be the first output image itself, or an image obtained by superimposing an image difference/model style difference on the first output image.
Step 360, determining a target image based on the second model and the second input image. In some embodiments, step 360 may be performed by the third processing module 260.
In some embodiments, the third processing module 260 may superimpose the image differences/model style differences onto the first output image in any manner to obtain the target image. The target image may be a final processed image. For example, the target image may be an obtained noise-free image.
For example, the third processing module 260 may superimpose the image difference onto the first output image through an image generation model, which may be a machine learning model that generates the target image based on the image difference and the output image, the input of the image generation model may include the image difference and the output image, and the output of the image generation model may include the target image. In some embodiments, the image generation model may include a generative antagonism network (Generative adversarial network, GAN).
In some embodiments, the third processing module 260 may construct an initial image generation model in advance, the initial image generation model including a generator and a discriminator, and then train it with at least two second training samples. A second training sample may include a sample image difference, a sample first output image, and a sample target image. The generator takes the sample image difference and the sample first output image as input and outputs a virtual target image; the discriminator takes the sample target image and the virtual target image output by the generator as input, compares them, and estimates the probability that the virtual target image was generated by the generator. Based on the discriminator's judgment, feedback is propagated to the generator through a back-propagation algorithm, guiding the generator to produce more realistic virtual target images while the discriminating capability of the discriminator is improved. Iterative training proceeds through the loss functions, with the two networks opposing each other, until the virtual target images generated by the generator can no longer be distinguished from the sample target images by the discriminator, a Nash equilibrium is reached, or the number of iterations reaches a threshold; training of the initial image generation model is then complete, and the image generation model is obtained.
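A compact, hypothetical PyTorch sketch of this adversarial setup follows. The architectures are toy MLPs and images are treated as flattened vectors for brevity; the sketch only illustrates the alternating generator/discriminator update pattern, not the disclosed model.

```python
import torch
from torch import nn

pixels = 64 * 64  # toy flattened image size (assumption)

generator = nn.Sequential(       # input: image difference + first output image, concatenated
    nn.Linear(2 * pixels, 512), nn.ReLU(), nn.Linear(512, pixels))
discriminator = nn.Sequential(   # input: a (real or generated) target image
    nn.Linear(pixels, 512), nn.ReLU(), nn.Linear(512, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def gan_step(image_diff, first_output, real_target):
    """One adversarial update on a batch of (difference, output, target) samples."""
    fake_target = generator(torch.cat([image_diff, first_output], dim=1))

    # Discriminator: score real targets toward 1, generated targets toward 0.
    d_opt.zero_grad()
    batch = real_target.size(0)
    d_loss = bce(discriminator(real_target), torch.ones(batch, 1)) + \
             bce(discriminator(fake_target.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator score generated targets as real.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake_target), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```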
In some embodiments, the third processing module 260 may input the second input image to the second model, which generates the second output image as the target image after image processing the second input image based on the second AI algorithm. For example, the third processing module 260 may input the second input image into a second model, and the second model may perform a metal artifact removal process on the second input image based on a second AI algorithm to obtain a metal artifact removed second input image.
Fig. 5 is a schematic diagram illustrating continuous processing of the same CT data by multiple models according to some embodiments of the present disclosure. As shown in fig. 5, in some embodiments, the process 300 may be applied to a scenario in which multiple models (e.g., the first model, the second model, the third model, ..., the N-th model) successively process the same CT data. After the second model outputs the second output image, the processing device 110 or the medical imaging system 200 may determine an image difference based on the second style and the third style, superimpose the image difference on the second output image to obtain a second target image, and use the second target image as the input of the third model to obtain a third output image, wherein the third model performs image processing on its input based on a third AI algorithm. The above operations are repeated until an image difference is determined based on the (N-1)-th style and the N-th style, the image difference is superimposed on the (N-1)-th output image to obtain an (N-1)-th target image, and the (N-1)-th target image is used as the input of the N-th model to obtain the N-th output image, wherein the N-th model performs image processing on its input based on an N-th AI algorithm. The sharpness of the i-th style is smaller than that of the (i-1)-th style, where i is a natural number greater than 1 and less than or equal to N.
In some embodiments, the processing device 110 or the medical imaging system 200 may also determine the image difference directly based on the j-th and m-th styles, superimpose the image difference on the j-th output image output by the j-th model to obtain a j-th target image, and use the j-th target image as the input of the m-th model to obtain the m-th output image, wherein the sharpness of the m-th style is less than the sharpness of the j-th style.
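A hypothetical sketch of this chained processing is shown below. The lists of models and per-stage style differences and the apply_style_difference helper are placeholders; the sketch only shows the pattern of superimposing the difference between consecutive styles before handing the image to the next model.

```python
def chain_models(imaging_data, models, style_differences, apply_style_difference):
    """models: [model_1, ..., model_N], ordered from sharper to smoother styles.
    style_differences: [diff_1_to_2, ..., diff_(N-1)_to_N], one per consecutive pair.
    """
    image = models[0](imaging_data)                       # first output image
    for model, diff in zip(models[1:], style_differences):
        image = apply_style_difference(image, diff)       # adapt to the next model's style
        image = model(image)                              # next output image
    return image                                          # N-th output image
```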
In some embodiments, the medical imaging method determines the image difference corresponding to two styles and superimposes the image difference on the first output image of the current model (e.g., the first model) to obtain the input image of the next model (e.g., the second model), whose output is the target image; in this way, the same CT data can be processed successively by at least two AI algorithms.
It should be noted that the description of the above related flow 300 is only for illustration and description, and does not limit the application scope of the present disclosure. Various modifications and changes to flow 300 will be apparent to those skilled in the art in light of the present description. However, such modifications and variations are still within the scope of the present description.
Fig. 4 is a flow chart illustrating determining a second input image according to some embodiments of the present description. In some embodiments, the process 400 may be performed by the medical imaging system 100 (e.g., the processing device 110) or the medical imaging system 200. For example, the flow 400 may be stored in the storage device 140 in the form of a program or instructions that, when executed by the processing device 110 or the medical imaging system 200, may implement the flow 400. The operational schematic of the flow 400 presented below is illustrative. In some embodiments, the process may be accomplished with one or more additional operations not described above and/or one or more operations not discussed. In addition, the order in which the operations of flowchart 400 are illustrated in FIG. 4 and described below is not limiting. As shown in fig. 4, the process 400 may include the following steps.
At step 410, a first output noise image and a first output noise-removed image are obtained based on the first output image by a noise reduction model.
The first output noise image may be an image of noise in the first output image. The first output denoised image may be a denoised image of the first output image.
The noise reduction model may be a machine learning model that processes an output image to obtain an output noise image and an output noise-removed image. The input of the noise reduction model may include an output image, and its output may include the output noise image and the output noise-removed image corresponding to that output image. The difference determination module 230 may train an initial noise reduction model with at least two third training samples to generate a trained noise reduction model, where a third training sample may include a sample output image and its label may include the sample output noise image and the sample output noise-removed image corresponding to that sample output image. In some embodiments, the structure and training of the noise reduction model are similar to those of the difference extraction model; for details, refer to the description of the structure and training of the difference extraction model, which is not repeated here.
Step 420, the first output noise image is transformed into the frequency domain through fast fourier transform, and noise generated data is obtained.
Step 430, convolving the noise generated data with the filter kernel suppression ratio to obtain smoothed noise generated data.
Step 440, the smooth noise generated data is transformed back into the image domain by inverse fast fourier transform to obtain a smooth noise image.
Step 450, adding the smoothed noise image to the first output noise-removed image, and using the result as the second input image. Through the above process, a more accurate target image can subsequently be obtained.
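A numpy sketch of steps 410-450, under two simplifying assumptions: the denoiser is a callable returning the noise-removed image (the noise image is then taken as the residual, a simplification of the noise reduction model described above), and the filter kernel suppression rate is given as a 2-D frequency-domain array aligned with the image's FFT grid, so that the convolution described in the text is applied as an equivalent frequency-domain multiplication.

```python
import numpy as np

def second_input_from_first_output(first_output, denoiser, suppression_rate):
    # Step 410: split the first output image into noise-removed and noise parts.
    denoised = denoiser(first_output)      # first output noise-removed image
    noise = first_output - denoised        # first output noise image (residual assumption)

    # Step 420: fast Fourier transform of the noise image.
    noise_freq = np.fft.fft2(noise)

    # Step 430: apply the filter kernel suppression rate to the noise spectrum
    # (frequency-domain multiplication used as the smoothing operation).
    smoothed_freq = noise_freq * suppression_rate

    # Step 440: inverse FFT back to the image domain.
    smoothed_noise = np.real(np.fft.ifft2(smoothed_freq))

    # Step 450: add the smoothed noise back onto the noise-removed image.
    return denoised + smoothed_noise
```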
It should be noted that the description of the above related flow 400 is only for illustration and description, and does not limit the application scope of the present disclosure. Various modifications and changes to flow 400 will be apparent to those skilled in the art in light of the present description. However, such modifications and variations are still within the scope of the present description.
If the schemes described in this description and its embodiments involve the processing of personal information, such processing is carried out on the premise of a legitimate basis (for example, obtaining the consent of the personal information subject, or being necessary for the performance of a contract), and only within the prescribed or agreed scope. If a user refuses the processing of personal information other than that necessary for basic functions, the user's use of the basic functions is not affected.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly stated herein, various modifications, improvements, and adaptations of this description may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested by this description and therefore remain within the spirit and scope of the exemplary embodiments of this description.
Meanwhile, this description uses specific terms to describe the embodiments of this description. References to "one embodiment," "an embodiment," and/or "some embodiments" mean that a particular feature, structure, or characteristic relates to at least one embodiment of this description. Therefore, it should be emphasized and noted that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in different places in this description do not necessarily refer to the same embodiment. In addition, certain features, structures, or characteristics of one or more embodiments of this description may be suitably combined.
Furthermore, the order in which the elements and sequences are processed, the use of numbers or letters, or the use of other designations in this description is not intended to limit the order of the processes and methods of this description unless explicitly recited in the claims. While certain presently useful inventive embodiments have been discussed in the foregoing disclosure by way of various examples, it is to be understood that such details are merely illustrative and that the appended claims are not limited to the disclosed embodiments; on the contrary, the claims are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments of this description. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Likewise, it should be noted that, in order to simplify the presentation disclosed in this description and thereby aid in the understanding of one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, does not imply that the subject matter of this description requires more features than are recited in the claims. Indeed, the claimed subject matter may lie in fewer than all features of a single embodiment disclosed above.
In some embodiments, numbers are used to describe the quantities of components and attributes; it should be understood that such numbers used in the description of embodiments are, in some examples, modified by the terms "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that a variation of 20% in the stated number is allowed. Accordingly, in some embodiments, the numerical parameters set forth in the description and claims are approximations that may vary depending on the desired properties of individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and adopt a general method of retaining digits. Although the numerical ranges and parameters used to confirm the breadth of their ranges are approximations in some embodiments of this description, in specific embodiments such numerical values are set as precisely as practicable.
Each patent, patent application, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited in this description is hereby incorporated by reference in its entirety. Excluded are application history documents that are inconsistent with or conflict with the content of this description, as well as documents (currently or later attached to this description) that limit the broadest scope of the claims of this description. It should be noted that if the description, definition, and/or use of a term in the materials attached to this description is inconsistent with or conflicts with what is stated in this description, the description, definition, and/or use of the term in this description shall prevail.
Finally, it should be understood that the embodiments described in this specification are merely illustrative of the principles of the embodiments of this specification. Other variations are possible within the scope of this description. Thus, by way of example, and not limitation, alternative configurations of embodiments of the present specification may be considered as consistent with the teachings of the present specification. Accordingly, the embodiments of the present specification are not limited to only the embodiments explicitly described and depicted in the present specification.

Claims (10)

1. A medical imaging method, the method comprising:
acquiring imaging data;
acquiring a first model and a second model, wherein the first model and the second model are different in image processing style;
determining model style differences based on the first model and the second model;
determining a first output image based on the first model and the imaging data;
determining a second input image based on the first output image and the model style differences;
determining a target image based on the second model and the second input image.
2. The method of claim 1, wherein the model style differences comprise a sharpness difference, the sharpness difference being manifested in that a first filter kernel function corresponding to the first model is different from a second filter kernel function corresponding to the second model.
3. The method of claim 2, wherein the acquiring imaging data comprises:
the imaging data is determined by at least one filtering kernel based on data acquired by a detector of the medical device.
4. The method of claim 2, wherein the model style differences comprise a filter kernel suppression rate between the first filter kernel function and the second filter kernel function.
5. The method of claim 4, wherein the determining a second input image based on the first output image and the model style differences comprises:
obtaining a first output noise image and a first output noise-removed image based on the first output image through a noise reduction model;
converting the first output noise image to a frequency domain through a fast Fourier transform to obtain noise generation data;
convolving the noise generation data with the filter kernel suppression rate to obtain smoothed noise generation data;
converting the smoothed noise generation data back to an image domain through an inverse fast Fourier transform to obtain a smoothed noise image;
and taking a result of adding the smoothed noise image to the first output noise-removed image as the second input image.
6. The method according to claim 2, wherein the model style differences are obtained by:
processing the imaging data by using the first filter kernel function to obtain a first style image;
processing the imaging data by using the second filter kernel function to obtain a second style image;
and taking a difference between the first style image and the second style image to obtain the model style differences.
7. The method of claim 6, wherein the method further comprises:
determining the first output image based on the first model and the first style image;
determining the second input image based on the first output image and the model style differences;
determining the target image based on the second model and the second input image.
8. A medical imaging system, the system comprising:
an image acquisition module for acquiring imaging data;
a model acquisition module for acquiring a first model and a second model, wherein the image processing styles of the first model and the second model are different;
a difference determination module for determining model style differences based on the first model and the second model;
a first processing module for determining a first output image based on the first model and the imaging data;
a second processing module for determining a second input image based on the first output image and the model style differences;
and a third processing module for determining a target image based on the second model and the second input image.
9. A medical imaging device, the device comprising:
at least one storage medium storing computer instructions;
at least one processor executing the computer instructions to implement the method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions which, when read by a computer, cause the computer to perform the method of any one of claims 1 to 7.
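Purely to illustrate how the steps of claim 1 chain together, a high-level sketch under assumptions is given below; the callable form of the two models, the simple superposition used to build the second input image, and all names are assumptions rather than the claimed implementation (claim 5's frequency-domain procedure, sketched earlier in the description, would replace the plain superposition where the filter kernel suppression rate is used).

import numpy as np

def medical_imaging_pipeline(imaging_data: np.ndarray,
                             first_model, second_model,
                             compute_style_difference) -> np.ndarray:
    # Hypothetical end-to-end wiring of claim 1. first_model and second_model
    # are assumed to be callables mapping an image to an image with different
    # image processing styles; compute_style_difference derives the model style
    # differences from the two models (for example, from their filter kernel functions).
    model_style_differences = compute_style_difference(first_model, second_model)
    # Determine the first output image based on the first model and the imaging data.
    first_output_image = first_model(imaging_data)
    # Determine the second input image by superposing the model style differences
    # on the first output image.
    second_input_image = first_output_image + model_style_differences
    # Determine the target image based on the second model and the second input image.
    return second_model(second_input_image)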
CN202310513424.7A 2023-05-08 2023-05-08 Medical imaging method and system Pending CN116563154A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310513424.7A CN116563154A (en) 2023-05-08 2023-05-08 Medical imaging method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310513424.7A CN116563154A (en) 2023-05-08 2023-05-08 Medical imaging method and system

Publications (1)

Publication Number Publication Date
CN116563154A true CN116563154A (en) 2023-08-08

Family

ID=87489285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310513424.7A Pending CN116563154A (en) 2023-05-08 2023-05-08 Medical imaging method and system

Country Status (1)

Country Link
CN (1) CN116563154A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination