CN111462273B - Image processing method, device, CT equipment and CT system - Google Patents

Image processing method, device, CT equipment and CT system

Info

Publication number
CN111462273B
CN111462273B (application CN202010409414.5A)
Authority
CN
China
Prior art keywords
image
matrix
gradient
target
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010409414.5A
Other languages
Chinese (zh)
Other versions
CN111462273A (en
Inventor
黄建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Medical Systems Co Ltd
Original Assignee
Neusoft Medical Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Medical Systems Co Ltd filed Critical Neusoft Medical Systems Co Ltd
Priority to CN202010409414.5A priority Critical patent/CN111462273B/en
Publication of CN111462273A publication Critical patent/CN111462273A/en
Application granted granted Critical
Publication of CN111462273B publication Critical patent/CN111462273B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/005 Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides an image processing method, an image processing device, CT equipment and a CT system. In the embodiment of the invention, a first image is obtained by reconstructing CT (computed tomography) scan raw data based on a first convolution kernel, and a second image is obtained by reconstructing the CT scan raw data based on a second convolution kernel, where the first convolution kernel is a convolution kernel for generating a high-resolution image and the second convolution kernel is a convolution kernel for generating a low-resolution image; a weight matrix is determined according to the first image, and a target image is obtained according to the first image, the second image and the weight matrix. Weights are obtained by mapping image gradients, and the high-resolution and low-resolution CT images are fused based on these weights, so that resolution is preserved while high gradient artifacts are effectively weakened and image quality is improved.

Description

Image processing method, device, CT equipment and CT system
Technical Field
The present invention relates to the field of medical image processing technologies, and in particular, to an image processing method, an image processing device, a CT apparatus, and a CT system.
Background
CT (Computed Tomography) imaging is used ever more widely in modern medical examination owing to its clear cross-sectional images and high density resolution. Different tissues or regions place different resolution requirements on CT images for diagnosis: fine tissue structures (e.g., inner-ear bone tissue, pulmonary nodules) require high-resolution images for accurate diagnosis, whereas typical soft tissues require only low-noise, low-resolution images.
In high-resolution CT images, high gradient artifacts can occur at edges (i.e., CT value jumps) in the image. The presence of high gradient artifacts reduces image quality and affects the diagnostic effect.
Disclosure of Invention
In order to overcome the problems in the related art, the invention provides an image processing method, an image processing device, a CT device and a CT system that improve image quality.
According to a first aspect of an embodiment of the present invention, there is provided an image processing method including:
reconstructing CT scanning raw data based on a first convolution kernel to obtain a first image, and reconstructing CT scanning raw data based on a second convolution kernel to obtain a second image, wherein the first convolution kernel is a convolution kernel for generating a high-resolution image, and the second convolution kernel is a convolution kernel for generating a low-resolution image;
determining a weight matrix according to the first image;
and obtaining a target image according to the first image, the second image and the weight matrix.
According to a second aspect of an embodiment of the present invention, there is provided an image processing apparatus including:
the reconstruction module is used for reconstructing CT scanning raw data based on a first convolution kernel to obtain a first image, reconstructing CT scanning raw data based on a second convolution kernel to obtain a second image, wherein the first convolution kernel is a convolution kernel for generating a high-resolution image, and the second convolution kernel is a convolution kernel for generating a low-resolution image;
the determining module is used for determining a weight matrix according to the first image;
and the fusion module is used for obtaining a target image according to the first image, the second image and the weight matrix.
According to a third aspect of embodiments of the present invention, there is provided a CT apparatus comprising: an internal bus, and a memory, a processor and an external interface connected through the internal bus; the external interface is used for being connected with a detector of the CT system, and the detector comprises a plurality of detector chambers and corresponding processing circuits;
the memory is used for storing machine-readable instructions corresponding to the image processing control logic;
the processor is configured to read the machine-readable instructions on the memory and perform operations comprising:
reconstructing CT scanning raw data based on a first convolution kernel to obtain a first image, and reconstructing CT scanning raw data based on a second convolution kernel to obtain a second image, wherein the first convolution kernel is a convolution kernel for generating a high-resolution image, and the second convolution kernel is a convolution kernel for generating a low-resolution image;
determining a weight matrix according to the first image;
and obtaining a target image according to the first image, the second image and the weight matrix.
According to a fourth aspect of embodiments of the present invention, there is provided a CT system comprising a detector, a scan bed and a CT apparatus, the detector comprising a plurality of detector cells and corresponding processing circuitry; wherein:
the detector chamber is used for detecting X-rays passing through a scanning object and converting the X-rays into electric signals in the scanning process of the CT system;
the processing circuit is used for converting the electric signal into a pulse signal and collecting energy information of the pulse signal;
the CT device is used for:
reconstructing CT scanning raw data based on a first convolution kernel to obtain a first image, and reconstructing CT scanning raw data based on a second convolution kernel to obtain a second image, wherein the first convolution kernel is a convolution kernel for generating a high-resolution image, and the second convolution kernel is a convolution kernel for generating a low-resolution image;
determining a weight matrix according to the first image;
and obtaining a target image according to the first image, the second image and the weight matrix.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
in the embodiment of the invention, a first image is obtained by reconstructing CT (computed tomography) scan raw data based on a first convolution kernel, and a second image is obtained by reconstructing the CT scan raw data based on a second convolution kernel, where the first convolution kernel is a convolution kernel for generating a high-resolution image and the second convolution kernel is a convolution kernel for generating a low-resolution image; a weight matrix is determined according to the first image, and a target image is obtained according to the first image, the second image and the weight matrix. Weights are obtained by mapping image gradients, and the high-resolution and low-resolution CT images are fused based on these weights, so that resolution is preserved while high gradient artifacts are effectively weakened and image quality is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the specification and together with the description, serve to explain the principles of the specification.
Fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the present invention.
Fig. 2 is a diagram showing a comparative example of an original high-resolution CT image and a CT image obtained by the image processing method according to the present embodiment.
Fig. 3 is a functional block diagram of an image processing apparatus according to an embodiment of the present invention.
Fig. 4 is a hardware configuration diagram of a CT apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of embodiments of the invention as detailed in the accompanying claims.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments of the invention only and is not intended to be limiting of embodiments of the invention. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present invention to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of embodiments of the present invention. The word "if" as used herein may be interpreted as "when", "upon", or "in response to a determination", depending on the context.
In the field of CT imaging, images of different final resolutions are typically produced by convolution kernels of different shapes. Convolution kernels of different shapes in fact enhance the frequency components of the image by different amounts to obtain images of different resolutions. If the high-frequency components of the image are strongly enhanced, the resolution of the image is high but the background noise is large; if the high-frequency components are not enhanced, or are even suppressed, the resulting image has low resolution but low noise.
In a high-resolution CT image, since the high-resolution convolution kernel enhances high-frequency components, it essentially enlarges the difference between the CT value of the current pixel and the CT values of surrounding pixels. This causes high gradient artifacts at edges or CT value jumps in the image. High gradient artifacts refer to gradient reversal where high gradients occur in an image, and are also known as gradient reversal artifacts. The low-resolution CT image is not over-enhanced and therefore contains no high gradient artifacts.
The image processing method is described in detail by examples below.
Fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the present invention. As shown in fig. 1, in the present embodiment, the image processing method may include:
s101, reconstructing CT scanning raw data based on a first convolution kernel to obtain a first image, and reconstructing CT scanning raw data based on a second convolution kernel to obtain a second image, wherein the first convolution kernel is a convolution kernel for generating a high-resolution image, and the second convolution kernel is a convolution kernel for generating a low-resolution image.
S102, determining a weight matrix according to the first image.
And S103, obtaining a target image according to the first image, the second image and the weight matrix.
In this embodiment, the first convolution kernel is a convolution kernel that produces a high resolution image, referred to herein as a high resolution convolution kernel. The second convolution kernel is the convolution kernel that produces the low resolution image, referred to herein as the low resolution convolution kernel.
Correspondingly, the first image is a high-resolution image and contains high gradient artifacts; the second image is a low-resolution image and contains no high gradient artifacts, or no obvious ones. Here, the absence of obvious high gradient artifacts means that the high gradient artifacts contained in the image are within a preset allowable range. For example, if, in the user's visual perception, no "black edges" appear at transitions between tissues, this can be regarded as a situation without obvious high gradient artifacts.
In this embodiment, each element in the weight matrix is associated with a gradient of pixel values of a corresponding pixel point in the first image. The elements in the weight matrix are weight values. The weight matrix may also be referred to as a weight map. The pixel values in the weight map are weight values.
For example, assuming that the size of the first image is 1024×1024 pixels, the weight matrix is accordingly a matrix of 1024 rows and 1024 columns. The element in row i (i a natural number, 0 ≤ i ≤ 1023) and column j (j a natural number, 0 ≤ j ≤ 1023) of the weight matrix is related to the gradient of the pixel value I(i, j) of pixel (i, j) in the first image.
In an exemplary implementation process, in step S102, determining a weight matrix according to the first image may include:
acquiring a target gradient matrix corresponding to the first image according to the first image;
and determining a weight matrix according to the target gradient matrix.
In this embodiment, the target gradient matrix reflects the change condition of the CT value, so that it is known, according to the target gradient matrix, which pixels in the first image have CT value jumps, and the pixel values of the pixels having the CT value jumps are affected by the high gradient artifact, so that the corresponding weight value can be set in a targeted manner according to the target gradient matrix corresponding to the first image to weaken the high gradient artifact.
Wherein the elements in the gradient matrix are gradient values. The gradient matrix may also be referred to as a gradient map. The pixel values in the gradient map are gradient values.
In an exemplary implementation process, according to the first image, acquiring a target gradient matrix corresponding to the first image may include:
if the size of the imaging matrix of the first image is smaller than or equal to the size of the preset imaging matrix and the imaging field of view of the first image is larger than or equal to the preset imaging field of view, directly acquiring a gradient matrix of the first image to serve as a target gradient matrix.
Since the first image is a high-resolution image, it is denoted herein as I_high; similarly, since the second image is a low-resolution image, it is denoted as I_low.
Let the imaging matrix size of the first image I_high be ImageWidth and its imaging field of view be FOV (Field Of View); let the preset imaging matrix size be ImageWidthStd and the preset imaging field of view be FOVStd. When ImageWidth <= ImageWidthStd and FOV >= FOVStd, the gradient matrix of the first image is computed directly as the target gradient matrix, without performing a reduction operation on the first image.
In an exemplary implementation process, according to the first image, acquiring a target gradient matrix corresponding to the first image may include:
if the size of the imaging matrix of the first image is larger than the preset imaging matrix size or the imaging view of the first image is smaller than the preset imaging view, reducing the first image into an intermediate image;
acquiring a gradient matrix corresponding to the intermediate image as an intermediate gradient matrix;
and amplifying the intermediate gradient matrix to obtain a target gradient matrix of the first image.
That is, when ImageWidth > ImageWidthStd or FOV < FOVStd, the first image is first reduced, the gradient matrix of the reduced image is computed, and that gradient matrix is then enlarged to obtain the target gradient matrix.
In this embodiment, the image is first reduced and then the gradient matrix is calculated, so that the calculation amount can be reduced and the processing speed can be increased.
In an exemplary implementation, the reducing the first image to an intermediate image may include:
determining a scaling factor according to the size of the imaging matrix of the first image, the size of the preset imaging matrix, the imaging view of the first image and the preset imaging view;
determining the size of an imaging matrix of the intermediate image according to the size of the imaging matrix of the first image and the scaling factor;
reducing the first image into an intermediate image according to the size of the imaging matrix of the first image and the size of the imaging matrix of the intermediate image;
amplifying the intermediate gradient matrix to obtain a target gradient matrix of the first image, including:
and amplifying the intermediate gradient matrix into a target gradient matrix according to the size of the imaging matrix of the first image and the size of the imaging matrix of the intermediate image.
In this embodiment, a nearest neighbor scaling method may be used. Assuming that the scaling factor is λ, λ can be calculated by the following formula (1):
Let the imaging matrix size of the intermediate image be ImageWidthNew and that of the first image be ImageWidth; then ImageWidthNew = ImageWidth × λ.
After the gradient matrix GradImageNew of size ImageWidthNew is obtained, it can be scaled by nearest-neighbor interpolation into the target gradient matrix GradImage with imaging matrix size ImageWidth; the scaling factor is 1/λ.
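The reduce-then-enlarge round trip described above can be sketched with a single nearest-neighbor resize routine. This is a minimal illustration, not the patent's implementation; the function name and the assumption of a square imaging matrix (as with ImageWidth/ImageWidthNew) are the author's own.

```python
import numpy as np

def nearest_neighbor_resize(image, new_width):
    # Map each output index to the nearest source index; a square
    # imaging matrix is assumed, so one index array serves both axes.
    old_width = image.shape[0]
    idx = np.arange(new_width) * old_width // new_width
    return image[np.ix_(idx, idx)]
```

Reducing the first image by λ and later enlarging the intermediate gradient matrix by 1/λ both go through this same routine, only with the roles of the old and new widths swapped.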
In an exemplary implementation, acquiring the gradient matrix corresponding to the image may include:
acquiring gradient matrixes in at least two directions according to the image, and taking the gradient matrixes as initial gradient matrixes;
and determining a gradient matrix corresponding to the image according to the acquired at least two initial gradient matrices.
For example, gradient matrices in 4 directions may be calculated: horizontal (left-right), vertical (up-down), and the two diagonals (upper-left to lower-right and upper-right to lower-left); the corresponding gradient matrices are GradImage1, GradImage2, GradImage3 and GradImage4. The target gradient matrix GradImage can then be determined from GradImage1, GradImage2, GradImage3 and GradImage4.
In this embodiment, the image gradient may be calculated using an isotropic uniform gradient calculation method.
The target gradient matrix corresponding to the first image and the intermediate gradient matrix can be obtained by obtaining the gradient matrix corresponding to the image in this embodiment.
In one exemplary implementation, acquiring a gradient matrix in at least two directions from an image may include:
for each direction in the at least two directions, acquiring a mask matrix corresponding to the direction;
convolving the image with the mask matrix to obtain a result matrix;
and taking an absolute value of each element in the result matrix to obtain a gradient matrix in the direction.
For example, the mask matrices corresponding to the 4 directions (left-right, up-down, upper-left to lower-right, and upper-right to lower-left) are respectively as follows:
the absolute value is taken to ensure that the gradient values are all positive numbers. The gradient value only represents the variation of the CT value and does not contain direction information.
In an exemplary implementation process, determining a gradient matrix corresponding to the image according to the acquired at least two initial gradient matrices may include:
for each element on the gradient matrix corresponding to the image, acquiring a gradient value of the corresponding element on each initial gradient matrix to obtain at least two gradient values;
and determining the values of the elements of the gradient matrix corresponding to the image according to the at least two gradient values.
Wherein the value of each element on the initial gradient matrix is a gradient value, and thus the gradient value of an element refers to the gradient value located on that element.
In an exemplary implementation, determining the value of the element from the at least two gradient values may include:
taking the maximum value of the at least two gradient values as the value of the element; or,
taking the average value of the at least two gradient values as the value of the element; or,
taking a weighted average of the at least two gradient values as the value of the element.
In this embodiment, the values of the elements on the target gradient matrix are gradient values.
For example, let GradImage1(i, j), GradImage2(i, j), GradImage3(i, j) and GradImage4(i, j) denote the gradient values at element (i, j) of the four initial gradient matrices; the value GradImage(i, j) of the target gradient matrix GradImage can then be obtained by the following formula (2):
GradImage(i,j)=max(GradImage1(i,j),GradImage2(i,j),GradImage3(i,j),GradImage4(i,j)) (2)
In formula (2), max() denotes taking the maximum value.
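Formula (2), together with the averaging alternatives listed above, can be sketched as a small combiner over the initial gradient matrices (the function name and `mode` parameter are illustrative, not from the patent; a weighted mean would use `np.average` with a `weights` argument):

```python
import numpy as np

def combine_gradients(grad_maps, mode="max"):
    # Per-pixel maximum (formula (2)) or per-pixel mean of the
    # initial gradient matrices, all of the same shape.
    stacked = np.stack(grad_maps)
    if mode == "max":
        return stacked.max(axis=0)
    if mode == "mean":
        return stacked.mean(axis=0)
    raise ValueError("mode must be 'max' or 'mean'")
```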
In an exemplary implementation, determining the weight matrix according to the target gradient matrix may include:
and according to a preset mapping relation between the gradient and the weight, converting each gradient value in the target gradient matrix into a corresponding weight value to obtain a weight matrix.
For example, the mapping relationship of the gradient and the weight can be expressed by the following formula (3):
In formula (3), GradMin is the minimum input gradient value of the linear mapping region, GradMax is the maximum input gradient value, WeightMin is the minimum output weight value, and WeightMax is the maximum output weight value.
Through formula (3), the gradient range [GradMin, GradMax] is linearly mapped to the weight range [WeightMax, WeightMin]: gradients smaller than GradMin are mapped to WeightMax, and gradients larger than GradMax are mapped to WeightMin, where 0 ≤ WeightMin < WeightMax ≤ 1.
Weight(i, j) ∈ [0, 1]; the larger the gradient value, the smaller the weight value.
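Since formula (3) itself is not reproduced in this text, the following clamped linear interpolation is an assumption consistent with the description: weights decrease from WeightMax to WeightMin as gradients rise from GradMin to GradMax, with values outside that range saturated.

```python
import numpy as np

def gradient_to_weight(grad, grad_min, grad_max, w_min, w_max):
    # Gradients <= grad_min map to w_max, gradients >= grad_max map
    # to w_min, with a linear transition in between, so larger
    # gradients receive smaller weights.
    t = np.clip((grad - grad_min) / float(grad_max - grad_min), 0.0, 1.0)
    return w_max - t * (w_max - w_min)
```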
In an exemplary implementation, obtaining the target image according to the first image, the second image, and the weight matrix may include:
for each pixel in a target image, acquiring a first pixel value corresponding to the pixel from the first image, acquiring a second pixel value corresponding to the pixel from the second image, and acquiring a target weight value corresponding to the pixel from the weight matrix;
determining a product of the first pixel value and the target weight value as a first component;
determining a product of the second pixel value and a difference value as a second component, the difference value being a difference of 1 and the target weight value;
and determining the sum of the first component and the second component as the pixel value of the pixel.
For example, assuming that the target image is ImageFinal and ImageFinal(i, j) is the pixel value of pixel (i, j) in ImageFinal, the pixel value can be calculated by the following formula (4):
ImageFinal(i, j) = Weight(i, j) × I_high(i, j) + (1 − Weight(i, j)) × I_low(i, j)    (4)
Because a pixel with a large gradient value has a small weight value, and a larger gradient value reflects a more severe high gradient artifact, reducing the weight of large-gradient pixel values in the first image while increasing the weight of the corresponding pixel values in the second image effectively weakens the high gradient artifacts.
In this embodiment, the resolution of the finally obtained target image is greater than that of the second image and slightly less than that of the first image, and the target image does not contain the high gradient artifacts present in the original first image.
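Formula (4) amounts to a per-pixel linear blend of the two reconstructions, which can be sketched as (function name illustrative):

```python
import numpy as np

def fuse_images(i_high, i_low, weight):
    # ImageFinal = Weight * I_high + (1 - Weight) * I_low, applied
    # per pixel; weight is the weight matrix from the gradient mapping.
    return weight * i_high + (1.0 - weight) * i_low
```

With weight 1 everywhere the output equals the high-resolution image, with weight 0 the low-resolution one, and intermediate weights interpolate between the two, which is exactly the artifact-weakening trade-off the method relies on.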
According to the image processing method provided by the embodiment of the invention, a first image is obtained by reconstructing CT (computed tomography) scan raw data based on a first convolution kernel, and a second image is obtained by reconstructing the CT scan raw data based on a second convolution kernel, where the first convolution kernel is a convolution kernel for generating a high-resolution image and the second convolution kernel is a convolution kernel for generating a low-resolution image; a weight matrix is determined according to the first image, and a target image is obtained according to the first image, the second image and the weight matrix. Weights are obtained by mapping image gradients, and the high-resolution and low-resolution CT images are fused based on these weights, so that resolution is preserved while high gradient artifacts are effectively weakened and image quality is improved.
Fig. 2 is a diagram showing a comparative example of an original high-resolution CT image and a CT image obtained by the image processing method of the present embodiment. Referring to fig. 2, the left side shows the original high-resolution CT image, and the right side shows the CT image obtained by processing it with the image processing method of this embodiment. As can be seen from fig. 2, the high gradient artifacts in the small boxes of the right image are significantly weaker than those in the left image; the image quality of the right image is significantly better, while its resolution differs little from that of the left image.
Based on the method embodiment, the embodiment of the invention also provides a corresponding device, equipment and storage medium embodiment.
Fig. 3 is a functional block diagram of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 3, in the present embodiment, the image processing apparatus may include:
a reconstruction module 310, configured to reconstruct CT scan raw data based on a first convolution kernel to obtain a first image, and reconstruct CT scan raw data based on a second convolution kernel to obtain a second image, where the first convolution kernel is a convolution kernel for generating a high-resolution image, and the second convolution kernel is a convolution kernel for generating a low-resolution image;
A determining module 320, configured to determine a weight matrix according to the first image;
and a fusion module 330, configured to obtain a target image according to the first image, the second image, and the weight matrix.
In one exemplary implementation, the determining module 320 may be specifically configured to:
acquiring a target gradient matrix corresponding to the first image according to the first image;
and determining a weight matrix according to the target gradient matrix.
In an exemplary implementation process, the determining module 320, when configured to obtain, according to the first image, a target gradient matrix corresponding to the first image, may be specifically configured to:
if the size of the imaging matrix of the first image is smaller than or equal to the size of the preset imaging matrix and the imaging field of view of the first image is larger than or equal to the preset imaging field of view, directly acquiring a gradient matrix of the first image to serve as a target gradient matrix.
In an exemplary implementation process, the determining module 320, when configured to obtain, according to the first image, a target gradient matrix corresponding to the first image, may be specifically configured to:
if the size of the imaging matrix of the first image is larger than the preset imaging matrix size or the imaging view of the first image is smaller than the preset imaging view, reducing the first image into an intermediate image;
Acquiring a gradient matrix corresponding to the intermediate image as an intermediate gradient matrix;
and amplifying the intermediate gradient matrix to obtain a target gradient matrix of the first image.
In an exemplary implementation, the determining module 320, when configured to reduce the first image to an intermediate image, may be specifically configured to:
determining a scaling factor according to the size of the imaging matrix of the first image, the size of the preset imaging matrix, the imaging view of the first image and the preset imaging view;
determining the size of an imaging matrix of the intermediate image according to the size of the imaging matrix of the first image and the scaling factor;
reducing the first image into an intermediate image according to the size of the imaging matrix of the first image and the size of the imaging matrix of the intermediate image;
the determining module 320, when configured to amplify the intermediate gradient matrix to obtain the target gradient matrix of the first image, may be specifically configured to:
and amplifying the intermediate gradient matrix into a target gradient matrix according to the size of the imaging matrix of the first image and the size of the imaging matrix of the intermediate image.
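The reduce-then-amplify path above can be sketched as follows. This is an illustrative sketch only: the embodiment does not fix the interpolation scheme or the exact gradient operator, so the nearest-neighbour resize and the row-direction finite difference below are assumptions standing in for whatever the implementation actually uses.

```python
import numpy as np

def resize_nn(matrix, new_shape):
    """Nearest-neighbour resize (numpy only); used both to reduce the
    first image and to amplify the intermediate gradient matrix."""
    h, w = matrix.shape
    rows = np.arange(new_shape[0]) * h // new_shape[0]
    cols = np.arange(new_shape[1]) * w // new_shape[1]
    return matrix[np.ix_(rows, cols)]

def target_gradient_matrix(first_image, scale):
    """Reduce -> gradient -> amplify pipeline for 0 < scale <= 1.
    `scale` is the scaling factor determined from the imaging matrix
    sizes and fields of view (its formula is not reproduced here)."""
    h, w = first_image.shape
    inter_shape = (max(2, round(h * scale)), max(2, round(w * scale)))
    intermediate = resize_nn(first_image, inter_shape)               # reduced image
    inter_grad = np.abs(np.gradient(intermediate.astype(float))[0])  # intermediate gradient matrix
    return resize_nn(inter_grad, (h, w))                             # amplified to first-image size
```

The key point the sketch preserves is that the gradient is computed on the smaller intermediate image and only the resulting gradient matrix is enlarged back, which is cheaper than computing gradients at full resolution.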
In an exemplary implementation, the process of acquiring the gradient matrix corresponding to the image may include:
acquiring gradient matrices in at least two directions according to the image, and taking the gradient matrices as initial gradient matrices;
and determining a gradient matrix corresponding to the image according to the acquired at least two initial gradient matrices.
In an exemplary implementation process, determining a gradient matrix corresponding to the image according to the acquired at least two initial gradient matrices may include:
for each element on the gradient matrix corresponding to the image, acquiring a gradient value of the corresponding element on each initial gradient matrix to obtain at least two gradient values;
and determining the value of the element according to the at least two gradient values.
In an exemplary implementation, determining the value of the element from the at least two gradient values may include:
taking the maximum value of the at least two gradient values as the value of the element; or,
taking the average value of the at least two gradient values as the value of the element; or,
taking a weighted average of the at least two gradient values as the value of the element.
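The three element-wise combination rules just listed can be sketched in one helper. The `weights` argument for the weighted average is an assumption, since the embodiment does not specify how those weights are chosen:

```python
import numpy as np

def combine_gradients(initial_gradients, method="max", weights=None):
    """Combine two or more initial gradient matrices element-wise."""
    stack = np.stack([np.asarray(g, dtype=float) for g in initial_gradients])
    if method == "max":          # maximum of the gradient values
        return stack.max(axis=0)
    if method == "mean":         # plain average of the gradient values
        return stack.mean(axis=0)
    if method == "weighted":     # weighted average with normalised weights
        w = np.asarray(weights, dtype=float)
        return np.tensordot(w / w.sum(), stack, axes=1)
    raise ValueError(f"unknown method: {method}")
```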
In one exemplary implementation, acquiring a gradient matrix in at least two directions from an image may include:
for each direction in the at least two directions, acquiring a mask matrix corresponding to the direction;
Convolving the image with the mask matrix to obtain a result matrix;
and taking an absolute value of each element in the result matrix to obtain a gradient matrix in the direction.
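The three steps above (one mask matrix per direction, convolution, element-wise absolute value) can be sketched as follows. The Sobel masks are illustrative stand-ins: the embodiment only requires some mask matrix per direction, not these particular coefficients.

```python
import numpy as np

def convolve_same(image, mask):
    """Minimal 'same'-size 2-D convolution with edge padding (numpy only)."""
    kh, kw = mask.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image.astype(float), ((ph, ph), (pw, pw)), mode="edge")
    flipped = mask[::-1, ::-1]  # true convolution flips the mask
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

# One mask matrix per direction (Sobel masks used as illustrative examples).
MASKS = {
    "x": np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float),
    "y": np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float),
}

def directional_gradients(image):
    """Convolve the image with each direction's mask to obtain a result
    matrix, then take absolute values element-wise to obtain the gradient
    matrix in that direction."""
    return {d: np.abs(convolve_same(image, m)) for d, m in MASKS.items()}
```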
In an exemplary implementation, the determining module 320, when configured to determine the weight matrix according to the target gradient matrix, may be specifically configured to:
and according to a preset mapping relation between the gradient and the weight, converting each gradient value in the target gradient matrix into a corresponding weight value to obtain a weight matrix.
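A sketch of such a mapping is given below. The linear ramp and its two thresholds are assumptions (the embodiment only requires some preset gradient-to-weight mapping); it yields a weight near 1 (favouring the high-resolution first image) in flat regions and a weight near 0 (favouring the smoother second image) at high-gradient positions, consistent with the artifact suppression shown in fig. 2.

```python
import numpy as np

def gradient_to_weight(target_gradient, g_low=50.0, g_high=300.0):
    """Map each gradient value to a weight in [0, 1] via a linear ramp.
    g_low and g_high are hypothetical thresholds: gradients <= g_low map
    to weight 1, gradients >= g_high map to weight 0."""
    w = (g_high - np.asarray(target_gradient, dtype=float)) / (g_high - g_low)
    return np.clip(w, 0.0, 1.0)
```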
In one exemplary implementation, the fusion module 330 may be specifically configured to:
for each pixel in a target image, acquiring a first pixel value corresponding to the pixel from the first image, acquiring a second pixel value corresponding to the pixel from the second image, and acquiring a target weight value corresponding to the pixel from the weight matrix;
determining a product of the first pixel value and the target weight value as a first component;
determining a product of the second pixel value and a difference value as a second component, the difference value being the difference between 1 and the target weight value;
the sum of the first component and the second component is determined as a pixel value of the pixel.
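The per-pixel fusion described above vectorizes to a single expression; a minimal sketch assuming all three matrices share the target image's size:

```python
import numpy as np

def fuse(first_image, second_image, weight_matrix):
    """Target pixel = w * first + (1 - w) * second, element-wise, where
    `first_image` is the high-resolution reconstruction and
    `second_image` the low-resolution one."""
    w = np.asarray(weight_matrix, dtype=float)
    return (w * np.asarray(first_image, dtype=float)
            + (1.0 - w) * np.asarray(second_image, dtype=float))
```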
The embodiment of the invention also provides CT equipment. Fig. 4 is a hardware configuration diagram of a CT apparatus according to an embodiment of the present invention. As shown in fig. 4, the CT apparatus includes: an internal bus 401, and a memory 402, a processor 403 and an external interface 404 connected by the internal bus, wherein the external interface is used for connecting a detector of the CT system, and the detector comprises a plurality of detector chambers and corresponding processing circuits;
the memory 402 is configured to store machine readable instructions corresponding to the image processing logic;
the processor 403 is configured to read the machine readable instructions on the memory 402 and execute the instructions to implement the following operations:
reconstructing CT scanning raw data based on a first convolution kernel to obtain a first image, and reconstructing CT scanning raw data based on a second convolution kernel to obtain a second image, wherein the first convolution kernel is a convolution kernel for generating a high-resolution image, and the second convolution kernel is a convolution kernel for generating a low-resolution image;
determining a weight matrix according to the first image;
and obtaining a target image according to the first image, the second image and the weight matrix.
In an exemplary implementation, determining a weight matrix from the first image includes:
Acquiring a target gradient matrix corresponding to the first image according to the first image;
and determining a weight matrix according to the target gradient matrix.
In an exemplary implementation process, according to the first image, acquiring a target gradient matrix corresponding to the first image includes:
if the size of the imaging matrix of the first image is smaller than or equal to the size of the preset imaging matrix and the imaging field of view of the first image is larger than or equal to the preset imaging field of view, directly acquiring a gradient matrix of the first image to serve as a target gradient matrix.
In an exemplary implementation process, according to the first image, acquiring a target gradient matrix corresponding to the first image includes:
if the size of the imaging matrix of the first image is larger than the preset imaging matrix size or the imaging view of the first image is smaller than the preset imaging view, reducing the first image into an intermediate image;
acquiring a gradient matrix corresponding to the intermediate image as an intermediate gradient matrix;
and amplifying the intermediate gradient matrix to obtain a target gradient matrix of the first image.
In one exemplary implementation, reducing the first image to an intermediate image includes:
Determining a scaling factor according to the size of the imaging matrix of the first image, the size of the preset imaging matrix, the imaging view of the first image and the preset imaging view;
determining the size of an imaging matrix of the intermediate image according to the size of the imaging matrix of the first image and the scaling factor;
reducing the first image into an intermediate image according to the size of the imaging matrix of the first image and the size of the imaging matrix of the intermediate image;
amplifying the intermediate gradient matrix to obtain a target gradient matrix of the first image, including:
and amplifying the intermediate gradient matrix into a target gradient matrix according to the size of the imaging matrix of the first image and the size of the imaging matrix of the intermediate image.
In an exemplary implementation, the process of acquiring the gradient matrix corresponding to the image includes:
acquiring gradient matrices in at least two directions according to the image, and taking the gradient matrices as initial gradient matrices;
and determining a gradient matrix corresponding to the image according to the acquired at least two initial gradient matrices.
In an exemplary implementation process, determining a gradient matrix corresponding to the image according to the acquired at least two initial gradient matrices may include:
for each element on the gradient matrix corresponding to the image, acquiring a gradient value of the corresponding element on each initial gradient matrix to obtain at least two gradient values;
and determining the value of the element according to the at least two gradient values.
In one exemplary implementation, determining the value of the element from the at least two gradient values includes:
taking the maximum value of the at least two gradient values as the value of the element; or,
taking the average value of the at least two gradient values as the value of the element; or,
taking a weighted average of the at least two gradient values as the value of the element.
In one exemplary implementation, acquiring a gradient matrix in at least two directions from an image includes:
for each direction in the at least two directions, acquiring a mask matrix corresponding to the direction;
convolving the image with the mask matrix to obtain a result matrix;
and taking an absolute value of each element in the result matrix to obtain a gradient matrix in the direction.
In an exemplary implementation, determining a weight matrix from the target gradient matrix includes:
and according to a preset mapping relation between the gradient and the weight, converting each gradient value in the target gradient matrix into a corresponding weight value to obtain a weight matrix.
In an exemplary implementation, obtaining a target image according to the first image, the second image, and the weight matrix includes:
for each pixel in a target image, acquiring a first pixel value corresponding to the pixel from the first image, acquiring a second pixel value corresponding to the pixel from the second image, and acquiring a target weight value corresponding to the pixel from the weight matrix;
determining a product of the first pixel value and the target weight value as a first component;
determining a product of the second pixel value and a difference value as a second component, the difference value being the difference between 1 and the target weight value;
the sum of the first component and the second component is determined as a pixel value of the pixel.
The embodiment of the invention also provides a CT system, which comprises a detector, a scanning bed and CT equipment, wherein the detector comprises a plurality of detector chambers and corresponding processing circuits; wherein:
the detector chamber is used for detecting X-rays passing through a scanning object and converting the X-rays into electric signals in the scanning process of the CT system;
the processing circuit is used for converting the electric signal into a pulse signal and collecting energy information of the pulse signal;
The CT device is used for:
reconstructing CT scanning raw data based on a first convolution kernel to obtain a first image, and reconstructing CT scanning raw data based on a second convolution kernel to obtain a second image, wherein the first convolution kernel is a convolution kernel for generating a high-resolution image, and the second convolution kernel is a convolution kernel for generating a low-resolution image;
determining a weight matrix according to the first image;
and obtaining a target image according to the first image, the second image and the weight matrix.
In an exemplary implementation, determining a weight matrix from the first image includes:
acquiring a target gradient matrix corresponding to the first image according to the first image;
and determining a weight matrix according to the target gradient matrix.
In an exemplary implementation process, according to the first image, acquiring a target gradient matrix corresponding to the first image includes:
if the size of the imaging matrix of the first image is smaller than or equal to the size of the preset imaging matrix and the imaging field of view of the first image is larger than or equal to the preset imaging field of view, directly acquiring a gradient matrix of the first image to serve as a target gradient matrix.
In an exemplary implementation process, according to the first image, acquiring a target gradient matrix corresponding to the first image includes:
if the size of the imaging matrix of the first image is larger than the preset imaging matrix size or the imaging view of the first image is smaller than the preset imaging view, reducing the first image into an intermediate image;
acquiring a gradient matrix corresponding to the intermediate image as an intermediate gradient matrix;
and amplifying the intermediate gradient matrix to obtain a target gradient matrix of the first image.
In one exemplary implementation, reducing the first image to an intermediate image includes:
determining a scaling factor according to the size of the imaging matrix of the first image, the size of the preset imaging matrix, the imaging view of the first image and the preset imaging view;
determining the size of an imaging matrix of the intermediate image according to the size of the imaging matrix of the first image and the scaling factor;
reducing the first image into an intermediate image according to the size of the imaging matrix of the first image and the size of the imaging matrix of the intermediate image;
amplifying the intermediate gradient matrix to obtain a target gradient matrix of the first image, including:
And amplifying the intermediate gradient matrix into a target gradient matrix according to the size of the imaging matrix of the first image and the size of the imaging matrix of the intermediate image.
In an exemplary implementation, acquiring a gradient matrix corresponding to an image includes:
acquiring gradient matrices in at least two directions according to the image, and taking the gradient matrices as initial gradient matrices;
and determining a gradient matrix corresponding to the image according to the acquired at least two initial gradient matrices.
In an exemplary implementation process, determining a gradient matrix corresponding to the image according to the acquired at least two initial gradient matrices includes:
for each element on the gradient matrix corresponding to the image, acquiring a gradient value of the corresponding element on each initial gradient matrix to obtain at least two gradient values;
and determining the value of the element according to the at least two gradient values.
In one exemplary implementation, determining the value of the element from the at least two gradient values includes:
taking the maximum value of the at least two gradient values as the value of the element; or,
taking the average value of the at least two gradient values as the value of the element; or,
taking a weighted average of the at least two gradient values as the value of the element.
In one exemplary implementation, acquiring a gradient matrix in at least two directions from an image includes:
for each direction in the at least two directions, acquiring a mask matrix corresponding to the direction;
convolving the image with the mask matrix to obtain a result matrix;
and taking an absolute value of each element in the result matrix to obtain a gradient matrix in the direction.
In an exemplary implementation, determining a weight matrix from the target gradient matrix includes:
and according to a preset mapping relation between the gradient and the weight, converting each gradient value in the target gradient matrix into a corresponding weight value to obtain a weight matrix.
In an exemplary implementation, obtaining a target image according to the first image, the second image, and the weight matrix includes:
for each pixel in a target image, acquiring a first pixel value corresponding to the pixel from the first image, acquiring a second pixel value corresponding to the pixel from the second image, and acquiring a target weight value corresponding to the pixel from the weight matrix;
determining a product of the first pixel value and the target weight value as a first component;
Determining the product of the second pixel value and a difference value as a second component, wherein the difference value is the difference between a preset value and the target weight value;
the sum of the first component and the second component is determined as a pixel value of the pixel.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, wherein the program when executed by a processor realizes the following operations:
reconstructing CT scanning raw data based on a first convolution kernel to obtain a first image, and reconstructing CT scanning raw data based on a second convolution kernel to obtain a second image, wherein the first convolution kernel is a convolution kernel for generating a high-resolution image, and the second convolution kernel is a convolution kernel for generating a low-resolution image;
determining a weight matrix according to the first image;
and obtaining a target image according to the first image, the second image and the weight matrix.
In an exemplary implementation, determining a weight matrix from the first image includes:
acquiring a target gradient matrix corresponding to the first image according to the first image;
and determining a weight matrix according to the target gradient matrix.
In an exemplary implementation process, according to the first image, acquiring a target gradient matrix corresponding to the first image includes:
If the size of the imaging matrix of the first image is smaller than or equal to the size of the preset imaging matrix and the imaging field of view of the first image is larger than or equal to the preset imaging field of view, directly acquiring a gradient matrix of the first image to serve as a target gradient matrix.
In an exemplary implementation process, according to the first image, acquiring a target gradient matrix corresponding to the first image includes:
if the size of the imaging matrix of the first image is larger than the preset imaging matrix size or the imaging view of the first image is smaller than the preset imaging view, reducing the first image into an intermediate image;
acquiring a gradient matrix corresponding to the intermediate image as an intermediate gradient matrix;
and amplifying the intermediate gradient matrix to obtain a target gradient matrix of the first image.
In one exemplary implementation, reducing the first image to an intermediate image includes:
determining a scaling factor according to the size of the imaging matrix of the first image, the size of the preset imaging matrix, the imaging view of the first image and the preset imaging view;
determining the size of an imaging matrix of the intermediate image according to the size of the imaging matrix of the first image and the scaling factor;
Reducing the first image into an intermediate image according to the size of the imaging matrix of the first image and the size of the imaging matrix of the intermediate image;
amplifying the intermediate gradient matrix to obtain a target gradient matrix of the first image, including:
and amplifying the intermediate gradient matrix into a target gradient matrix according to the size of the imaging matrix of the first image and the size of the imaging matrix of the intermediate image.
In an exemplary implementation, acquiring a gradient matrix corresponding to an image includes:
acquiring gradient matrices in at least two directions according to the image, and taking the gradient matrices as initial gradient matrices;
and determining a gradient matrix corresponding to the image according to the acquired at least two initial gradient matrices.
In an exemplary implementation process, determining a gradient matrix corresponding to the image according to the acquired at least two initial gradient matrices includes:
for each element on the gradient matrix corresponding to the image, acquiring a gradient value of the corresponding element on each initial gradient matrix to obtain at least two gradient values;
and determining the value of the element according to the at least two gradient values.
In one exemplary implementation, determining the value of the element from the at least two gradient values includes:
Taking the maximum value of the at least two gradient values as the value of the element; or,
taking the average value of the at least two gradient values as the value of the element; or,
taking a weighted average of the at least two gradient values as the value of the element.
In one exemplary implementation, acquiring a gradient matrix in at least two directions from an image includes:
for each direction in the at least two directions, acquiring a mask matrix corresponding to the direction;
convolving the image with the mask matrix to obtain a result matrix;
and taking an absolute value of each element in the result matrix to obtain a gradient matrix in the direction.
In an exemplary implementation, determining a weight matrix from the target gradient matrix includes:
and according to a preset mapping relation between the gradient and the weight, converting each gradient value in the target gradient matrix into a corresponding weight value to obtain a weight matrix.
In an exemplary implementation, obtaining a target image according to the first image, the second image, and the weight matrix includes:
for each pixel in a target image, acquiring a first pixel value corresponding to the pixel from the first image, acquiring a second pixel value corresponding to the pixel from the second image, and acquiring a target weight value corresponding to the pixel from the weight matrix;
Determining a product of the first pixel value and the target weight value as a first component;
determining the product of the second pixel value and a difference value as a second component, wherein the difference value is the difference between a preset value and the target weight value;
the sum of the first component and the second component is determined as a pixel value of the pixel.
Since the device and apparatus embodiments essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant details. The apparatus embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present description. Those of ordinary skill in the art can understand and implement the present invention without undue burden.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It is to be understood that the present description is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The foregoing description of the preferred embodiments is provided for the purpose of illustration only, and is not intended to limit the scope of the disclosure, since any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the disclosure are intended to be included within the scope of the disclosure.

Claims (10)

1. An image processing method, comprising:
reconstructing CT scanning raw data based on a first convolution kernel to obtain a first image, and reconstructing CT scanning raw data based on a second convolution kernel to obtain a second image, wherein the first convolution kernel is a convolution kernel for generating a high-resolution image, and the second convolution kernel is a convolution kernel for generating a low-resolution image;
Determining a weight matrix according to the first image;
obtaining a target image according to the first image, the second image and the weight matrix;
determining a weight matrix from the first image, comprising:
acquiring a target gradient matrix corresponding to the first image according to the first image;
determining a weight matrix according to the target gradient matrix;
according to the first image, acquiring a target gradient matrix corresponding to the first image, including:
if the size of the imaging matrix of the first image is smaller than or equal to the size of a preset imaging matrix and the imaging field of view of the first image is larger than or equal to the preset imaging field of view, directly acquiring a gradient matrix of the first image to serve as a target gradient matrix;
or
If the size of the imaging matrix of the first image is larger than the preset imaging matrix size or the imaging view of the first image is smaller than the preset imaging view, reducing the first image into an intermediate image;
acquiring a gradient matrix corresponding to the intermediate image as an intermediate gradient matrix;
amplifying the intermediate gradient matrix to obtain a target gradient matrix of the first image;
obtaining a target image according to the first image, the second image and the weight matrix, wherein the target image comprises:
For each pixel in a target image, acquiring a first pixel value corresponding to the pixel from the first image, acquiring a second pixel value corresponding to the pixel from the second image, and acquiring a target weight value corresponding to the pixel from the weight matrix;
determining a product of the first pixel value and the target weight value as a first component;
determining a product of the second pixel value and a difference value as a second component, the difference value being the difference between 1 and the target weight value;
the sum of the first component and the second component is determined as a pixel value of the pixel.
2. The method of claim 1, wherein reducing the first image to an intermediate image comprises:
determining a scaling factor according to the size of the imaging matrix of the first image, the size of the preset imaging matrix, the imaging view of the first image and the preset imaging view;
determining the size of an imaging matrix of the intermediate image according to the size of the imaging matrix of the first image and the scaling factor;
reducing the first image into an intermediate image according to the size of the imaging matrix of the first image and the size of the imaging matrix of the intermediate image;
Amplifying the intermediate gradient matrix to obtain a target gradient matrix of the first image, including:
and amplifying the intermediate gradient matrix into a target gradient matrix according to the size of the imaging matrix of the first image and the size of the imaging matrix of the intermediate image.
3. The method of claim 1, wherein the process of obtaining the gradient matrix corresponding to the image comprises:
acquiring gradient matrices in at least two directions according to the image, and taking the gradient matrices as initial gradient matrices;
and determining a gradient matrix corresponding to the image according to the acquired at least two initial gradient matrices.
4. A method according to claim 3, wherein determining the gradient matrix corresponding to the image from the acquired at least two initial gradient matrices comprises:
for each element on the gradient matrix corresponding to the image, acquiring a gradient value of the corresponding element on each initial gradient matrix to obtain at least two gradient values;
and determining the value of the element according to the at least two gradient values.
5. The method of claim 4, wherein determining the value of the element from the at least two gradient values comprises:
taking the maximum value of the at least two gradient values as the value of the element; or,
Taking the average value of the at least two gradient values as the value of the element; or,
taking a weighted average of the at least two gradient values as the value of the element.
6. A method according to claim 3, wherein acquiring gradient matrices in at least two directions from the image comprises:
for each direction in the at least two directions, acquiring a mask matrix corresponding to the direction;
convolving the image with the mask matrix to obtain a result matrix;
and taking an absolute value of each element in the result matrix to obtain a gradient matrix in the direction.
7. The method of claim 1, wherein determining a weight matrix from the target gradient matrix comprises:
and according to a preset mapping relation between the gradient and the weight, converting each gradient value in the target gradient matrix into a corresponding weight value to obtain a weight matrix.
8. An image processing apparatus, comprising:
a reconstruction module, configured to reconstruct CT scanning raw data based on a first convolution kernel to obtain a first image, and to reconstruct the CT scanning raw data based on a second convolution kernel to obtain a second image, wherein the first convolution kernel is a convolution kernel for generating a high-resolution image, and the second convolution kernel is a convolution kernel for generating a low-resolution image;
a determining module, configured to determine a weight matrix according to the first image, wherein the determining module is configured to: acquire a target gradient matrix corresponding to the first image according to the first image; and determine the weight matrix according to the target gradient matrix; wherein acquiring the target gradient matrix corresponding to the first image according to the first image comprises:
if the size of the imaging matrix of the first image is smaller than or equal to a preset imaging matrix size and the imaging field of view of the first image is larger than or equal to a preset imaging field of view, directly acquiring a gradient matrix of the first image as the target gradient matrix;
or,
if the size of the imaging matrix of the first image is larger than the preset imaging matrix size or the imaging field of view of the first image is smaller than the preset imaging field of view, reducing the first image into an intermediate image; acquiring a gradient matrix corresponding to the intermediate image as an intermediate gradient matrix; and amplifying the intermediate gradient matrix to obtain the target gradient matrix of the first image; and
a fusion module, configured to obtain a target image according to the first image, the second image and the weight matrix, wherein the fusion module is configured to: for each pixel in the target image, acquire a first pixel value corresponding to the pixel from the first image, acquire a second pixel value corresponding to the pixel from the second image, and acquire a target weight value corresponding to the pixel from the weight matrix; determine the product of the first pixel value and the target weight value as a first component; determine the product of the second pixel value and a difference value as a second component, the difference value being the difference between 1 and the target weight value; and determine the sum of the first component and the second component as the pixel value of the pixel.
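The fusion module's per-pixel blend reduces to a weighted average of the two reconstructions; a minimal NumPy sketch (array names are illustrative):

```python
import numpy as np

def fuse(first_image, second_image, weight_matrix):
    # First component:  weight * high-resolution pixel.
    # Second component: (1 - weight) * low-resolution pixel.
    # The target pixel value is the sum of the two components.
    return weight_matrix * first_image + (1.0 - weight_matrix) * second_image
```

A weight of 1 reproduces the high-resolution image at that pixel, a weight of 0 reproduces the low-resolution image, and intermediate weights blend the two.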
9. A CT apparatus, comprising: an internal bus, and a memory, a processor and an external interface connected through the internal bus; wherein the external interface is used for connecting to a detector of a CT system, the detector comprising a plurality of detector cells and corresponding processing circuits;
the memory is used for storing machine-readable instructions corresponding to the image processing control logic;
the processor is configured to read the machine-readable instructions on the memory and perform operations comprising:
reconstructing CT scanning raw data based on a first convolution kernel to obtain a first image, and reconstructing CT scanning raw data based on a second convolution kernel to obtain a second image, wherein the first convolution kernel is a convolution kernel for generating a high-resolution image, and the second convolution kernel is a convolution kernel for generating a low-resolution image;
determining a weight matrix according to the first image;
obtaining a target image according to the first image, the second image and the weight matrix;
determining a weight matrix according to the first image, comprising:
acquiring a target gradient matrix corresponding to the first image according to the first image;
determining the weight matrix according to the target gradient matrix;
wherein acquiring the target gradient matrix corresponding to the first image according to the first image comprises:
if the size of the imaging matrix of the first image is smaller than or equal to a preset imaging matrix size and the imaging field of view of the first image is larger than or equal to a preset imaging field of view, directly acquiring a gradient matrix of the first image as the target gradient matrix;
or,
if the size of the imaging matrix of the first image is larger than the preset imaging matrix size or the imaging field of view of the first image is smaller than the preset imaging field of view, reducing the first image into an intermediate image;
acquiring a gradient matrix corresponding to the intermediate image as an intermediate gradient matrix;
amplifying the intermediate gradient matrix to obtain the target gradient matrix of the first image;
wherein obtaining the target image according to the first image, the second image and the weight matrix comprises:
for each pixel in the target image, acquiring a first pixel value corresponding to the pixel from the first image, acquiring a second pixel value corresponding to the pixel from the second image, and acquiring a target weight value corresponding to the pixel from the weight matrix;
determining the product of the first pixel value and the target weight value as a first component;
determining the product of the second pixel value and a difference value as a second component, the difference value being the difference between 1 and the target weight value;
and determining the sum of the first component and the second component as the pixel value of the pixel.
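The size/field-of-view branch in claims 8-10 can be sketched as below, assuming SciPy's `zoom` for the reduce and amplify steps and an arbitrary gradient routine `grad_fn` (for example, claim 6's mask convolution). The function and parameter names, and the use of bilinear interpolation, are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import zoom

def target_gradient_matrix(image, matrix_limit, fov, fov_limit, grad_fn):
    rows, cols = image.shape
    if max(rows, cols) <= matrix_limit and fov >= fov_limit:
        # Small imaging matrix and large field of view:
        # take the gradient of the first image directly.
        return grad_fn(image)
    # Otherwise reduce the image to an intermediate image, take the
    # gradient of the intermediate image, and amplify the intermediate
    # gradient matrix back to the original size.
    factor = matrix_limit / float(max(rows, cols))
    intermediate = zoom(image, factor, order=1)
    intermediate_grad = grad_fn(intermediate)
    return zoom(intermediate_grad,
                (rows / intermediate.shape[0], cols / intermediate.shape[1]),
                order=1)
```

Computing the gradient on the reduced image keeps the edge-detection scale roughly constant when a small field of view or a large imaging matrix enlarges structures in pixel terms.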
10. A CT system comprising a detector, a scan bed and a CT apparatus, the detector comprising a plurality of detector cells and corresponding processing circuitry; wherein:
the detector cells are used for detecting X-rays passing through a scanned object during scanning of the CT system and converting the X-rays into electrical signals;
the processing circuitry is used for converting the electrical signals into pulse signals and collecting energy information of the pulse signals;
the CT device is used for:
reconstructing CT scanning raw data based on a first convolution kernel to obtain a first image, and reconstructing CT scanning raw data based on a second convolution kernel to obtain a second image, wherein the first convolution kernel is a convolution kernel for generating a high-resolution image, and the second convolution kernel is a convolution kernel for generating a low-resolution image;
determining a weight matrix according to the first image;
obtaining a target image according to the first image, the second image and the weight matrix;
determining a weight matrix according to the first image, comprising:
acquiring a target gradient matrix corresponding to the first image according to the first image;
determining the weight matrix according to the target gradient matrix;
wherein acquiring the target gradient matrix corresponding to the first image according to the first image comprises:
if the size of the imaging matrix of the first image is smaller than or equal to a preset imaging matrix size and the imaging field of view of the first image is larger than or equal to a preset imaging field of view, directly acquiring a gradient matrix of the first image as the target gradient matrix;
or,
if the size of the imaging matrix of the first image is larger than the preset imaging matrix size or the imaging field of view of the first image is smaller than the preset imaging field of view, reducing the first image into an intermediate image;
acquiring a gradient matrix corresponding to the intermediate image as an intermediate gradient matrix;
amplifying the intermediate gradient matrix to obtain the target gradient matrix of the first image;
wherein obtaining the target image according to the first image, the second image and the weight matrix comprises:
for each pixel in the target image, acquiring a first pixel value corresponding to the pixel from the first image, acquiring a second pixel value corresponding to the pixel from the second image, and acquiring a target weight value corresponding to the pixel from the weight matrix;
determining the product of the first pixel value and the target weight value as a first component;
determining the product of the second pixel value and a difference value as a second component, the difference value being the difference between 1 and the target weight value;
and determining the sum of the first component and the second component as the pixel value of the pixel.
CN202010409414.5A 2020-05-14 2020-05-14 Image processing method, device, CT equipment and CT system Active CN111462273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010409414.5A CN111462273B (en) 2020-05-14 2020-05-14 Image processing method, device, CT equipment and CT system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010409414.5A CN111462273B (en) 2020-05-14 2020-05-14 Image processing method, device, CT equipment and CT system

Publications (2)

Publication Number Publication Date
CN111462273A CN111462273A (en) 2020-07-28
CN111462273B true CN111462273B (en) 2024-03-08

Family

ID=71685506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010409414.5A Active CN111462273B (en) 2020-05-14 2020-05-14 Image processing method, device, CT equipment and CT system

Country Status (1)

Country Link
CN (1) CN111462273B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184850B (en) * 2020-09-30 2024-01-05 沈阳先进医疗设备技术孵化中心有限公司 Image processing method, device, console device and CT system
CN113077375A (en) * 2021-04-07 2021-07-06 有方(合肥)医疗科技有限公司 Image acquisition method, image acquisition device, electronic device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651753A (en) * 2016-09-28 2017-05-10 沈阳东软医疗系统有限公司 Method and device for improving CT image displaying effect
CN109409503A (en) * 2018-09-27 2019-03-01 深圳市铱硙医疗科技有限公司 Training method, image conversion method, device, equipment and the medium of neural network
CN109919838A (en) * 2019-01-17 2019-06-21 华南理工大学 The ultrasound image super resolution ratio reconstruction method of contour sharpness is promoted based on attention mechanism

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009047643A2 (en) * 2007-04-23 2009-04-16 Comagna Kft. Method and apparatus for image processing

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651753A (en) * 2016-09-28 2017-05-10 沈阳东软医疗系统有限公司 Method and device for improving CT image displaying effect
CN109409503A (en) * 2018-09-27 2019-03-01 深圳市铱硙医疗科技有限公司 Training method, image conversion method, device, equipment and the medium of neural network
CN109919838A (en) * 2019-01-17 2019-06-21 华南理工大学 The ultrasound image super resolution ratio reconstruction method of contour sharpness is promoted based on attention mechanism

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
邢晓羊; 魏敏; 符颖. Medical image super-resolution reconstruction based on feature loss. Computer Engineering and Applications (计算机工程与应用), 2018, (20), full text. *
郑远攀; 李广阳; 李晔. A survey of deep learning applications in image recognition. Computer Engineering and Applications (计算机工程与应用), 2019, (12), full text. *

Also Published As

Publication number Publication date
CN111462273A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
EP2453406B1 (en) Ultrasonic image processing apparatus
WO2019147767A1 (en) 3-d convolutional autoencoder for low-dose ct via transfer learning from a 2-d trained network
EP1636756A2 (en) System and method for adaptive medical image registration
EP2453405B1 (en) Ultrasonic image processing apparatus
JP2004503030A (en) Method and apparatus for digital image defect correction and noise filtering
CN111462273B (en) Image processing method, device, CT equipment and CT system
CN111815735B (en) Human tissue self-adaptive CT reconstruction method and reconstruction system
CN102024251A (en) System and method for multi-image based virtual non-contrast image enhancement for dual source CT
CN113793272B (en) Image noise reduction method and device, storage medium and terminal
US20160300370A1 (en) Tomography apparatus and method of reconstructing tomography image by using the tomography apparatus
CN111798535B (en) CT image enhancement display method and computer readable storage medium
KR101467380B1 (en) Method and Apparatus for improving quality of medical image
CN115578263B (en) CT super-resolution reconstruction method, system and device based on generation network
CN111369465B (en) CT dynamic image enhancement method and device
CN110490857B (en) Image processing method, image processing device, electronic equipment and storage medium
US20060066911A1 (en) Edge detection and correcting system and method
CN111311531B (en) Image enhancement method, device, console device and medical imaging system
CN110473297B (en) Image processing method, image processing device, electronic equipment and storage medium
JP2019205073A (en) Image processing device, image processing method, image processing program, and storage medium
CN111127581A (en) Image reconstruction method and device, CT (computed tomography) equipment and CT system
KR102492949B1 (en) Processing apparatus and method for medical image
CN111275635B (en) Image processing method and device
KR102480389B1 (en) Method and apparatus for bone suppression in X-ray Image
CN112184850B (en) Image processing method, device, console device and CT system
CN101390753B (en) CT thin-layer ultrahigh resolution image density conversion method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240202

Address after: 110167 No. 177-1 Innovation Road, Hunnan District, Shenyang City, Liaoning Province

Applicant after: Shenyang Neusoft Medical Systems Co.,Ltd.

Country or region after: China

Address before: Room 336, 177-1, Chuangxin Road, Hunnan New District, Shenyang City, Liaoning Province

Applicant before: Shenyang advanced medical equipment Technology Incubation Center Co.,Ltd.

Country or region before: China

GR01 Patent grant