CN108257135A - Auxiliary diagnosis system for interpreting medical image features based on deep learning - Google Patents

Info

Publication number
CN108257135A
CN108257135A (application CN201810103893.0A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810103893.0A
Other languages
Chinese (zh)
Inventor
胡海蓉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang De Image Solutions Co ltd
Original Assignee
Zhejiang De Image Solutions Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang De Image Solutions Co ltd filed Critical Zhejiang De Image Solutions Co ltd
Priority to CN201810103893.0A priority Critical patent/CN108257135A/en
Publication of CN108257135A publication Critical patent/CN108257135A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/11 Image analysis; Segmentation; Region-based segmentation
    • G06F 18/2411 Pattern recognition; Classification based on the proximity to a decision surface, e.g. support vector machines
    • G06N 3/08 Neural networks; Learning methods
    • G06T 7/0012 Image analysis; Biomedical image inspection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis
    • G06T 2207/10132 Image acquisition modality; Ultrasound image


Abstract

The present invention relates to the field of computer-aided medical diagnosis and aims to provide an auxiliary diagnosis system that interprets medical image features based on deep learning. The system comprises the following processes: reading the medical image data of lesions and preprocessing it; selecting images and establishing a convolutional neural network architecture that automatically learns to segment the lesion region and refines the lesion contour; then building a second convolutional neural network (CNN) model that automatically interprets benign and malignant lesion features. After training, the auxiliary diagnosis system for interpreting medical image features based on deep learning is obtained. Using only deep convolutional neural networks, the present invention can segment lesion regions automatically, compensating for the inability of active-contour and similar methods to handle weak boundaries, and can automatically learn to extract valuable feature combinations, avoiding the complexity of manual feature selection.

Description

Auxiliary diagnosis system for interpreting medical image features based on deep learning
Technical field
The present invention relates to the field of computer-aided medical diagnosis, and in particular to an auxiliary diagnosis system that interprets medical image features based on deep learning.
Background technology
In recent years, with the rapid development of computer technology and digital image processing techniques, digital image processing techniques are got over Come it is more be applied to complementary medicine diagnostic field, principle is exactly to being divided by the medical image that different modes obtain It cuts, reconstructs, be registrated, the image processing techniques such as identification, so as to obtain valuable medical diagnostic information, main purpose is to make doctor Raw observation diseased region is more directly and clear, and auxiliary reference is provided for doctor's clinical definite, and there is very important reality to anticipate Justice.
Based on medical image, find lesion to differentiating that its good pernicious, clinical treatment and surgical selection are significant early. And based on the ultrasonic examination of ultrasonic imaging technique because can real time imagery, inspection fee it is relatively low, to sufferer hurtless measure etc.. It is widely used in clinical diagnosis.And the good pernicious master of diagnosis lesion (such as thyroid nodule, Breast Nodules, lymph node etc.) By puncturing living tissue cells inspection, such workload can be very big, is additionally present of situation about excessively detecting, and doctor understands Features of ultrasound pattern subjectivity is stronger, and mainly by experience, result suffers from the imaging mechanism of medical imaging devices, obtains Condition shows the influences of factors such as equipment and easily causes mistaken diagnosis or fail to pinpoint a disease in diagnosis.Therefore, using computer technology, at digital picture Reason technology, statistical method etc. 
realize that ultrasonoscopy auxiliary diagnosis is very necessary.But intrinsic image-forming mechanism causes clinical acquisitions The ultrasonograph quality arrived is poor, and the accuracy and automation for leading to auxiliary diagnosis are affected, so current segmentation Lesion in ultrasonoscopy it is most be the semi-automatic segmentation based on active contour, classification mainly manually selects feature, so The Classification and Identifications such as traditional machine learning method support vector machines (SVM), K- neighbours (KNN), decision tree, these classification are utilized afterwards Device can only can have preferable effect to Small Sample Database.But almost without real understanding medical image, such auxiliary system System is undoubtedly a flight data recorder for final user.And medical data is magnanimity, the Classification and Identification of large sample, Especially the deciphering of characteristics of image can just have better booster action to medical diagnosis.
Summary of the invention
The main object of the present invention is to overcome the deficiencies of the prior art and to provide an auxiliary diagnosis system that interprets medical image features based on deep learning. To solve the above technical problem, the solution of the present invention is as follows.
An auxiliary diagnosis system for interpreting medical image features based on deep learning is provided, comprising the following processes:
Process one: reading the medical image data of lesions.
The medical images of lesions (either in an ordinary picture format or as standard DICOM images) are read, including at least 10000 images of benign lesions and at least 10000 images of malignant lesions.
Process two: preprocessing the medical images.
The lesion images read in process one are first converted to grayscale, and the annotations that doctors draw on the ultrasound image to measure nodule-related quantities are removed using the gray values of the surrounding pixels. Gaussian filtering is then applied for denoising, and finally the contrast is enhanced by gray-level histogram equalization, yielding the preprocessed, enhanced images.
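The preprocessing chain of process two (grayscale conversion, Gaussian denoising, histogram equalization) can be sketched in NumPy as follows. This is an illustrative sketch, not part of the patent; the function names, the luminance weights, and the reflection padding are the editor's assumptions.

```python
import numpy as np

def to_gray(rgb):
    """Luminance grayscale conversion (assumed ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_denoise(img, sigma=1.0):
    """Separable Gaussian filtering; borders handled by reflection padding."""
    r = int(3 * sigma)
    k = gaussian_kernel1d(sigma, r)
    pad = np.pad(img, r, mode="reflect")
    # filter rows, then columns; 'valid' restores the original shape
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, tmp)

def hist_equalize(img):
    """Histogram equalization of an 8-bit grayscale image via a lookup table."""
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]
```

Removing doctors' caliper marks from the surrounding gray values (inpainting) is omitted here, as the patent does not specify the interpolation scheme.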
Process three: selecting images and establishing the first convolutional neural network architecture, i.e. CNN (convolutional neural network), which automatically learns to segment the lesion region, called the region of interest, i.e. ROI (region of interest), and refines the lesion contour. This comprises the following steps:
Step 1: 20000 of the enhanced images preprocessed in process two are selected, including 10000 images each of benign and malignant lesions.
Step 2: For each image, the region of interest, i.e. the lesion region, is first outlined manually (by an expert). An automatic segmentation model is then trained through the first CNN architecture; this automatic segmentation model is denoted the SegCNN model.
The SegCNN model is a network composed of 15 convolutional layers and 4 down-sampling layers. The kernel sizes of the convolutional layers are: 13 × 13 for the first layer, 11 × 11 for the second and third layers, 5 × 5 for the fourth layer, and 3 × 3 for all remaining layers. The stride of the first convolutional layer is 2 and that of all others is 1. The down-sampling layers all have size 3 × 3 and stride 2.
Step 3: The SegCNN model obtained in step 2 is applied to all lesion images, i.e. the 20000 images chosen in step 1 are segmented automatically. A graph-cut model is then built to automatically refine the lesion regions produced by the SegCNN model, finally yielding the ROIs, i.e. all benign and malignant lesions.
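The layer specification of process three determines the spatial size of every feature map. The sketch below traces those sizes through the stated stack; it is illustrative only, since the patent does not state where the 4 pool layers sit in the 15-conv stack or what padding each layer uses (both are assumptions here).

```python
def conv_out(size, kernel, stride, pad=0):
    """Spatial output size of a conv/pool layer: floor((n + 2p - k)/s) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# SegCNN as described: 15 conv layers, 4 down-sampling (pool) layers.
# Kernels: layer 1 -> 13x13, layers 2-3 -> 11x11, layer 4 -> 5x5, rest 3x3.
conv_kernels = [13, 11, 11, 5] + [3] * 11
conv_strides = [2] + [1] * 14   # first conv has stride 2
POOL = (3, 2)                   # every pool layer: 3x3 window, stride 2

def segcnn_sizes(in_size, pool_after=(3, 6, 9, 12), pad=1):
    """Trace the feature-map size through the stack. pool_after and pad
    are assumptions; the patent does not specify them."""
    sizes = [in_size]
    s = in_size
    for i, (k, st) in enumerate(zip(conv_kernels, conv_strides), start=1):
        s = conv_out(s, k, st, pad)
        sizes.append(s)
        if i in pool_after:
            s = conv_out(s, *POOL)
            sizes.append(s)
    return sizes
```

For a 512 × 512 input this traces down to a 13 × 13 map under the assumed placement, showing the stated kernels and strides are mutually consistent.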
Process four: establishing the CNN model of the second convolutional neural network architecture, which automatically interprets benign and malignant lesion features; this CNN model is denoted the RecCNN model.
The RecCNN model is a network composed of 6 convolutional layers, 4 down-sampling layers, and 3 fully connected layers. The numbers of neurons in the 3 fully connected layers are 4096, 4096, and 1 respectively. The kernel sizes of the convolutional layers are: 13 × 13 for the first layer, 11 × 11 for the second and third layers, 5 × 5 for the fourth layer, and 3 × 3 for the remaining layers. The stride of the first convolutional layer is 2 and that of all others is 1; the down-sampling layers all have size 3 × 3 and stride 2.
The ROIs automatically segmented by the SegCNN model in process three are divided into p groups for training the RecCNN model (that is, for the training of process five: the RecCNN model extracts features from the ROIs of each group and normalizes the feature data, i.e. the features extracted from each group of ROIs undergo a linear transformation that maps their values into [0, 1]). Here p is a positive integer not less than 2.
Process five: p-1 of the groups of data from process four are selected as the training set, used to train the RecCNN model, and the remaining group serves as the test set, used to test the trained RecCNN model.
The RecCNN model is trained on the training set to interpret medical image features, extracting features from all automatically segmented lesion regions. (The specific training process is: the feature-extraction method is the same as in the automatic segmentation by the SegCNN model of process three, i.e. both extract features through their respective convolutional and pooling layers; these two kinds of functional layers play the same role and share the same computation and update formulas. However, the RecCNN model operates on the lesion region only, whereas the automatic segmentation part extracts features from lesion and non-lesion regions simultaneously; moreover, the window sizes, strides, and padding of the convolutional and pooling layers of the RecCNN and SegCNN models are set differently, so the receptive fields of their convolutional and pooling layers differ as well.)
A multi-class classifier is then constructed using Softmax and used to analyze the extracted features. This process amounts to finding the optimal value of a loss function, i.e. minimizing

J(θ) = -(1/m) [ Σ_{i=1}^{m} Σ_{j=1}^{c} 1{y^(i) = j} log( e^{θ_j^T x^(i)} / Σ_{l=1}^{c} e^{θ_l^T x^(i)} ) ] + (λ/2) Σ_i Σ_j θ_ij²

(the printed formula, lost in this copy, is reconstructed here from the surrounding definitions as the standard Softmax regression loss with weight decay). Here i indexes the i-th sample and j the j-th class, l the l-th class; m is the total number of samples (any positive integer); c is the total number of classes (any positive integer); θ is a matrix each row of which holds the parameters, i.e. weight and bias, of one class; θ_j^T is the transpose of the parameter vector of class j and θ_l^T that of class l; θ_ij is the element in row i, column j of the parameter matrix; 1{·} is the indicator function, equal to 1 when the expression in braces is true and 0 otherwise; λ is a positive parameter balancing the fidelity term (the first term) against the regularization term (the second term), its size adjusted according to experimental results; e is Euler's number 2.718281828 and e^x the exponential function; T denotes the transpose operator of matrix calculus; log denotes the natural logarithm, i.e. the logarithm with base e; x^(i) is the i-th input vector and y^(i) its label.
The number of classes c of the Softmax classifier equals 5, representing five feature categories of a lesion: echo features, edge features, structure features, calcification features, and aspect-ratio features. Each category has subclasses. Echo has four subclasses: hyperechoic; isoechoic; hypoechoic or markedly hypoechoic; anechoic. Edge has two subclasses: regular; irregular. Structure has four subclasses: solid; predominantly solid; predominantly cystic; cystic. Calcification has two subclasses: microcalcification present; microcalcification absent. Aspect ratio has two subclasses: greater than 1; less than or equal to 1. Then, by stochastic gradient descent, the probability that an output feature vector belongs to each subclass of each feature category is computed (the detailed procedure is similar to the prediction in the automatic segmentation of process three, both optimizing a loss function, except that here the Softmax function is multi-class; from the predicted subclass probabilities of the output feature vector a classification label is obtained, i.e. the features of a lesion are classified, from which the type of each feature of benign and malignant lesions can further be obtained).
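The Softmax loss described above can be written compactly in NumPy. This is a minimal sketch of the standard Softmax regression objective with weight decay that the passage describes, not the patent's exact implementation; the function names are the editor's.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def softmax_loss(theta, X, y, lam):
    """J(theta) = -(1/m) sum_i sum_j 1{y_i=j} log p_ij + (lam/2)||theta||^2.
    theta: (c, d), one parameter row per class; X: (m, d); y: (m,) int labels."""
    m = X.shape[0]
    p = softmax(X @ theta.T)                       # (m, c) class probabilities
    data = -np.log(p[np.arange(m), y]).mean()      # fidelity term
    reg = 0.5 * lam * np.sum(theta ** 2)           # regularization term
    return data + reg

def predict(theta, X):
    return np.argmax(X @ theta.T, axis=1)
```

With all-zero parameters every class gets probability 1/c, so the unregularized loss equals log c, a convenient sanity check when training starts.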
Process six: process five is repeated to perform p-fold cross-validation, i.e. among the p groups of data divided in process four, a different group is selected as the test set each time and the remaining p-1 groups form the training set, until every group has served as the test set.
In each of the p cross-validation rounds, the weights and bias parameters of the convolutional neural network model RecCNN are saved, and the result is assessed by the accuracy on the test set. The accuracy is computed as AC = TN / (TN + FN), where AC denotes the accuracy, TN the number of correctly classified samples, and FN the number of misclassified samples. Finally, the weights and biases from the cross-validation round with the highest accuracy are taken as the optimal parameters of the RecCNN model, yielding the trained RecCNN model, i.e. the finally determined auxiliary diagnosis system for interpreting medical image features based on deep learning.
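The p-fold selection of processes five and six, together with the stated accuracy formula, can be sketched as follows. This is an illustrative sketch; `train_eval` stands in for the RecCNN training routine, which the patent does not specify at this level.

```python
import numpy as np

def accuracy(pred, labels):
    """AC = TN / (TN + FN): in the patent's notation, TN is the number of
    correctly classified samples and FN the number misclassified."""
    tn = int(np.sum(pred == labels))
    fn = int(np.sum(pred != labels))
    return tn / (tn + fn)

def p_fold_select(X, y, p, train_eval):
    """Split the data into p groups; each group serves once as the test set.
    train_eval(X_tr, y_tr, X_te, y_te) -> (params, acc) is supplied by the
    caller (here it would wrap RecCNN training). Returns the parameters of
    the fold with the highest test accuracy, as process six prescribes."""
    folds = np.array_split(np.arange(len(X)), p)
    best_acc, best_params = -1.0, None
    for k in range(p):
        te = folds[k]
        tr = np.concatenate([folds[i] for i in range(p) if i != k])
        params, acc = train_eval(X[tr], y[tr], X[te], y[te])
        if acc > best_acc:
            best_acc, best_params = acc, params
    return best_params, best_acc
```

Keeping only the best fold's weights, as the patent does, differs from the more common practice of averaging cross-validation scores; the sketch follows the patent's choice.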
A lesion image to be interpreted is input to this auxiliary diagnosis system for interpreting medical image features based on deep learning; the features of the lesion are obtained and each feature category is analyzed, and the lesion can then be diagnosed as benign or malignant according to these features.
In the present invention, step 2 and step 3 of process three are specifically:
(1) Features are learned automatically by the convolutional and down-sampling layers of the CNN and extracted as follows.
Step A: In a convolutional layer, the feature maps of the previous layer are convolved with learnable kernels and passed through an activation function to produce the output feature maps. Each output is the convolution of one input, or combines the convolutions of multiple inputs (what we select here is combining the convolutions of multiple input maps):

x_j^l = f( Σ_{i ∈ M_j} x_i^{l-1} * k_ij^l + b_j^l )

(the formula, printed as an image in the original, is reconstructed from the definitions that follow). Here the symbol * denotes the convolution operator; l is the layer index; i indexes the i-th neuron node of layer l-1 and j the j-th neuron node of layer l; M_j is the set of selected input maps; x^{l-1} is the output of layer l-1, i.e. the input of layer l, and x_j^l the j-th component of the output of layer l; f is the activation function, here the sigmoid f(x) = 1/(1 + e^{-x}), where e is Euler's number 2.718281828 and e^x the exponential function; k is the convolution kernel and k_ij^l the kernel connecting input map i to output map j in layer l; b is the bias and b_j^l the j-th component of the bias of layer l. Each output map is given an additive bias b, but for a given output map the kernels applied to the different input maps are different.
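The forward rule of step A can be sketched directly in NumPy. This is a minimal illustrative implementation of x_j^l = f(Σ_{i∈M_j} x_i^{l-1} * k_ij + b_j), not the patent's code; the data layout (lists and dicts of 2-D maps) is the editor's choice.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_valid(x, k):
    """'valid' 2-D convolution of map x with kernel k (kernel flipped, as in
    the mathematical definition of convolution)."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    kf = k[::-1, ::-1]
    out = np.zeros((h, w))
    for u in range(h):
        for v in range(w):
            out[u, v] = np.sum(x[u:u+kh, v:v+kw] * kf)
    return out

def conv_layer_forward(maps_in, kernels, biases, M):
    """x_j^l = f( sum_{i in M_j} x_i^{l-1} * k_ij^l + b_j^l ).
    maps_in: list of 2-D input maps; kernels[j][i]: kernel from input i to
    output j; M[j]: indices of the input maps feeding output j."""
    maps_out = []
    for j, b in enumerate(biases):
        acc = sum(conv2d_valid(maps_in[i], kernels[j][i]) for i in M[j])
        maps_out.append(sigmoid(acc + b))
    return maps_out
```

Note that each output map j gets one bias b_j but a distinct kernel k_ij per input map, exactly as the text states.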
The sensitivities are then updated by a gradient computation; the sensitivity expresses how much the error changes when the bias b changes:

δ_j^l = β_j^{l+1} ( f'(s_j^l) ∘ up(δ_j^{l+1}) )

(reconstructed from the definitions below as the standard backpropagation rule through a following down-sampling layer). Here l is the layer index and j the j-th neuron node of layer l; ∘ denotes element-wise multiplication; δ denotes the sensitivity of an output neuron, i.e. the rate of change of the bias b, δ_j^l being the j-th component of the sensitivity of layer l and δ_j^{l+1} that of layer l+1; s^l = W^l x^{l-1} + b^l, where x^{l-1} is the output of layer l-1, W^l the weight parameters of layer l, and b^l its bias, s_j^l being the j-th component; f is the activation function, here the sigmoid f(x) = 1/(1 + e^{-x}) with e Euler's number 2.718281828, and f'(x) is its derivative (for the sigmoid, f'(x) = (1 - f(x)) f(x)); β denotes the shared weight of each layer; up(·) denotes an up-sampling operation (if the down-sampling factor is n, up-sampling copies each pixel n times horizontally and vertically, restoring the original size).
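The up() operation and the sensitivity rule above are short enough to sketch directly. This is an illustrative NumPy sketch under the definitions in the text; the function names are the editor's.

```python
import numpy as np

def up(delta, n):
    """up(): copy each element n times horizontally and vertically, so a
    map that was down-sampled by factor n is restored to its original size."""
    return np.kron(delta, np.ones((n, n)))

def conv_layer_sensitivity(beta_j, s_j, delta_next, n):
    """delta_j^l = beta_j^{l+1} * ( f'(s_j^l) o up(delta_j^{l+1}) ),
    with f = sigmoid, so f'(s) = f(s)(1 - f(s))."""
    f = 1.0 / (1.0 + np.exp(-s_j))
    return beta_j * (f * (1 - f)) * up(delta_next, n)
```

`np.kron` with an all-ones block is exactly the pixel-replication described in the text.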
The gradient of the bias b is then quickly computed by summing over all nodes of the sensitivity map of layer l:

∂E/∂b_j = Σ_{u,v} (δ_j^l)_{uv}

Here l is the layer index and j the j-th neuron node of layer l; b denotes the bias and b_j^l its j-th component in layer l; δ denotes the sensitivity of an output neuron, i.e. the rate of change of b; (u, v) indexes a position of the output map and (δ_j^l)_{uv} the element of the sensitivity of layer l at position (u, v). E is the error function, here E = (1/2) Σ_{h=1}^{C} (t_h^n - y_h^n)², where C is the dimension of the label (for a two-class problem the label can be written y_h ∈ {0, 1} with C = 1, or y_h ∈ {(0,1), (1,0)} with C = 2), t_h^n is the h-th dimension of the label of the n-th sample, and y_h^n the h-th output of the network for the n-th sample.
Finally, using the backpropagation algorithm, stochastic gradient descent is applied to the loss function to compute the kernel weights:

ΔW^l = -η ∂E/∂W^l

Here W denotes the weight parameters, ΔW the change of the weight parameters, and W^l the weights of layer l; E is the error function E = (1/2) Σ_{h=1}^{C} (t_h^n - y_h^n)², with C the dimension of the label as before; η is the learning rate, i.e. the step size. Since many connections share the same weights, for a given weight the gradient must be computed at every connection associated with that weight, and these gradients summed:

∂E/∂k_ij^l = Σ_{u,v} (δ_j^l)_{uv} (p_i^{l-1})_{uv}

(both formulas are reconstructed from the definitions in the text). Here l is the layer index, i the i-th and j the j-th neuron node of layer l; δ denotes the sensitivity of an output neuron, i.e. the rate of change of the bias b; (u, v) indexes a position of the output map and (δ_j^l)_{uv} the sensitivity element at that position; k_ij^l is the convolution kernel; (p_i^{l-1})_{uv} is the patch of x_i^{l-1} that was multiplied element-wise with k_ij^l during convolution, i.e. the region of the input map, of the same size as the kernel, whose element-wise product with the kernel produced the value at position (u, v) of the output convolution map.
Step B: A down-sampling layer has N input maps and N output maps; each output map is merely smaller. Thus:

x_j^l = f( β_j^l down(x_j^{l-1}) + b_j^l )

Here x_j^l is the j-th component of the output of layer l and x_j^{l-1} that of layer l-1; f is the activation function, here the sigmoid f(x) = 1/(1 + e^{-x}) with e Euler's number 2.718281828; β denotes the shared weight of each layer and down(·) a down-sampling function; b_j^l is the j-th component of the bias of layer l. All pixels of each n × n block of the input image are summed, so the output image shrinks n-fold along both dimensions, n being a positive integer (here each element of the output image is the sum of all elements of a fixed 3 × 3 block of the input image, so the output image shrinks 3-fold along both dimensions). Each output map has its own weight parameter β (a multiplicative bias) and its own additive bias b.
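The down-sampling rule of step B can be sketched as follows. This is an illustrative NumPy sketch of x_j^l = f(β down(x) + b) with block summation, as the text describes; function names are the editor's.

```python
import numpy as np

def down(x, n):
    """down(): sum every n x n block, shrinking the map n-fold per axis."""
    h, w = x.shape[0] // n, x.shape[1] // n
    return x[:h*n, :w*n].reshape(h, n, w, n).sum(axis=(1, 3))

def pool_layer_forward(x, beta, b, n=3):
    """x_j^l = f( beta_j^l * down(x_j^{l-1}) + b_j^l ), f = sigmoid."""
    return 1.0 / (1.0 + np.exp(-(beta * down(x, n) + b)))
```

The reshape-and-sum trick computes all block sums at once without explicit loops.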
The parameters β and b are updated by gradient descent:

δ_j^l = f'(s_j^l) ∘ conv2( δ_j^{l+1}, rot180(k_j^{l+1}), 'full' )
∂E/∂β_j = Σ_{u,v} ( δ_j^l ∘ down(x_j^{l-1}) )_{uv}
∂E/∂b_j = Σ_{u,v} (δ_j^l)_{uv}

(the three formulas are reconstructed from the definitions below as the standard update rules for a sub-sampling layer). Here f'(x) is the derivative of the activation function f(x); ∘ denotes element-wise multiplication; conv2 is the two-dimensional convolution operator, rot180 rotates a matrix by 180 degrees, and 'full' denotes full convolution; l is the layer index, i the i-th and j the j-th neuron node of layer l; b denotes the bias and b_j its j-th component; δ denotes the sensitivity of an output neuron, i.e. the rate of change of b, δ_j^l being its j-th component in layer l and δ_j^{l+1} in layer l+1; (u, v) indexes a position of the output map and (δ_j^l)_{uv} the sensitivity element at that position; E is the error function as above, E = (1/2) Σ_{h=1}^{C} (t_h^n - y_h^n)², where C is the dimension of the label (for a two-class problem the label can be written y_h ∈ {0, 1} with C = 1, or y_h ∈ {(0,1), (1,0)} with C = 2), t_h^n is the h-th dimension of the label of the n-th sample, and y_h^n the h-th network output for the n-th sample; β is the weight parameter (generally valued in [0, 1]) and β_j its j-th component; down(·) is the down-sampling function; k_j^{l+1} is the kernel of layer l+1; x_j^{l-1} is the j-th output map of layer l-1; s^l = W^l x^{l-1} + b^l with weight parameters W and bias b, s_j^l being its j-th component.
Step C: The CNN learns the combination of feature maps automatically; the j-th combined feature map is:

x_j^l = f( Σ_{i=1}^{N_in} α_ij ( x_i^{l-1} * k_i^l ) + b_j^l ),  subject to Σ_i α_ij = 1 and 0 ≤ α_ij ≤ 1.

Here the symbol * denotes the convolution operator; l is the layer index, i the i-th and j the j-th neuron node of layer l; f is the activation function, here the sigmoid f(x) = 1/(1 + e^{-x}) with e Euler's number 2.718281828 and e^x the exponential function; x_i^{l-1} is the i-th component of the output of layer l-1 and x_j^l the j-th component of the output of layer l; N_in is the number of input maps; k_i^l is a convolution kernel and b_j^l a bias; α_ij is the weight, or contribution, of the i-th input map (an output map of layer l-1) to the j-th output map of layer l.
(2) The lesion region is recognized automatically by combining the features extracted in step (1) with Softmax, outputting a segmentation probability map and thereby determining the automatic segmentation model. The specific Softmax recognition process is: given a sample, output a probability value representing the probability that this sample belongs to each class. The loss function is:

J(θ) = -(1/m) [ Σ_{i=1}^{m} Σ_{j=1}^{c} 1{y^(i) = j} log( e^{θ_j^T x^(i)} / Σ_{l=1}^{c} e^{θ_l^T x^(i)} ) ] + (λ/2) Σ_i Σ_j θ_ij²

with the same notation as before: i indexes the i-th sample, j the j-th class, and l the l-th class; m is the number of samples and c the number of classes (both arbitrary positive integers); θ is a matrix each row of which holds the parameters, i.e. weight and bias, of one class, θ_j^T and θ_l^T being the transposed parameter vectors of classes j and l, and θ_ij the element in row i, column j of the parameter matrix; 1{·} is the indicator function, equal to 1 when the braced expression is true and 0 otherwise; λ is a positive parameter balancing the fidelity term (the first term) against the regularization term (the second term), its size adjusted according to experimental results; J(θ) is the loss function of the system; e is Euler's number 2.718281828 and e^x the exponential function; T denotes the transpose operator of matrix calculus; log is the natural logarithm with base e; x^(i) is the i-th input vector and y^(i) its label. The loss is then solved by gradient descent:

∇_{θ_j} J(θ) = -(1/m) Σ_{i=1}^{m} [ x^(i) ( 1{y^(i) = j} - p( y^(i) = j | x^(i); θ ) ) ] + λ θ_j

(the gradient formula is reconstructed from the definitions of J(θ) above), where θ_j^T, i, j, c, l, θ_l^T, m, θ, 1{·}, λ, J(θ), e, T, log, x^(i), and y^(i) have the same meaning as in the loss J(θ); θ_j denotes the parameters of class j, and ∇_{θ_j} J(θ) is the derivative of J(θ) with respect to them. (What is used here is a new kind of Softmax classifier with only two classes: for a medical image, the probabilities given by Softmax yield a probability map distinguishing all lesion regions from non-lesion regions, from which a coarse segmentation of the lesion region is obtained.)
(3) using the medical image of the automatic divided ownerships of SegCNN, that is, focal area and non-focal area is distinguished, is found The boundary of focal area, and the lesion shape being partitioned into is refined, it is point refined using the method that figure is cut here It cuts, is exactly specifically:Remember I:X ∈ V → R is are defined on regionOn 2D ultrasound image datas, S is all pixels in V The set of point, Nx is the 6- neighborhood point sets of pixel x;Assuming that lx∈ { 0,1 } is the label of pixel x, wherein 0 and 1 difference It represents the pixel and belongs to background (non-focal area) and prospect (focal area);Then need the energy found below minimization general Tally set l={ the l of letterx, x ∈ S },
Wherein E(l) = λ E_D(l) + E_B(l), with E_D(l) = Σ_{x∈S} D_x(l_x) and E_B(l) = Σ_{x∈S} Σ_{y∈N_x} B_xy(x, y)·1{l_x ≠ l_y}; the parameter λ adjusts the balance between the data penalty term E_D(l) and the boundary penalty term E_B(l), and λ may take any real value. Ω refers to the regional extent of the image. The area term D_x(l_x) describes the similarity of pixel x to the foreground or background; the edge-detection function B_xy(x, y) captures the discontinuity between pixels x and y, depending on the gray-value difference |I(x) − I(y)| and a constant term β, where I(x) refers to the gray value of the image at pixel x and I(y) to the gray value at pixel y. Next, a gray-threshold function is defined:
Wherein, ζ refers to the minimum gray value of the pixels in the focal area and η refers to the maximum gray value of the pixels in the focal area; the gray-value interval [ζ, η] of the lesion can thus be roughly estimated from the initial focal area. A local characterization term formed from a group of feature distributions is then defined; the selected features are the gray value I(x) of the image, an improved local binary pattern LBP_{P,r} and the local gray-level variance VAR_{P,r}. These features are combined into one joint feature, where τ, P and r are positive constants. Here
Wherein I_p (p = 0, 1, …, P−1) are the gray values of the P points evenly distributed on the circle of radius r centered at c ∈ Ω, and I_c is the gray value at the circle center; I_m refers to the mean of the gray values of the P points on that circle; sign refers to the sign function, i.e. sign(x) = 1 when x > 0 and sign(x) = −1 otherwise; H(x) is the Heaviside function, i.e. H(x) = 1 for x ≥ 0 and H(x) = 0 otherwise.
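As a rough illustration of the local-binary-pattern feature, the classic LBP with P = 4 axis-aligned neighbours and r = 1 is sketched below. The patent's improved variant additionally uses the neighbourhood mean I_m and is not fully specified here, so this standard form is only an approximation:

```python
import numpy as np

def lbp4(img, u, v):
    """Classic LBP code at pixel (u, v): threshold P = 4 neighbours at
    radius r = 1 against the centre gray value I_c and pack the bits."""
    c = img[u, v]
    neighbours = [img[u - 1, v], img[u, v + 1], img[u + 1, v], img[u, v - 1]]
    code = 0
    for p, g in enumerate(neighbours):
        if g >= c:            # H(I_p - I_c) with H the Heaviside function
            code |= 1 << p
    return code
```

A uniform patch yields the all-ones code, while textured patches yield distinct codes, which is what makes the histogram of such codes a useful local texture descriptor.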
For each feature i, the cumulative histogram of the i-th feature of pixel x in the local neighborhood O(x) is computed, along with the average cumulative histogram of the i-th feature in the initialization area and its variance; the local characterization term P(x) can then be defined through W_1(·), the one-dimensional L¹ Wasserstein distance between these histograms. Finally, combining the segmentation probability map L(x) of the focal area obtained by SegCNN, the gray-threshold function F(x) and the local characterization P(x), the data-term expression D_x(l_x) is obtained as:
D_x(l_x) = max(−R(x), 0) · l_x + max(R(x), 0) · (1 − l_x)
Here γ is a positive constant; max refers to taking the maximum value. A graph-cut model capable of refining the segmentation of the focal area has thus been obtained, and with this graph-cut model the focal area obtained by the SegCNN model can be given a refined segmentation.
Compared with prior art, the beneficial effects of the invention are as follows:
The present invention can automatically segment the focal area purely by means of deep convolutional neural networks, making up for the inability of methods based on active contours and the like to handle weak boundaries, and it can automatically learn to extract valuable feature combinations, avoiding the complexity of manual feature selection; the features extracted in this way are more conducive to discovering the principal regular information of the lesion. It further classifies the ultrasonic image features, can objectively quantify the main clinical-medicine indices, improves the accuracy of diagnosing lesions as benign or malignant, and achieves a high degree of adaptability.
Description of the drawings
Fig. 1 is the flow chart of interpreting medical image features based on the deep convolutional neural network method.
Fig. 2 is a raw ultrasound image of a lesion used in the embodiment.
Fig. 3 is the mask picture, drawn by an expert, of the focal area in Fig. 2.
Fig. 4 is a raw ultrasound image of a lesion in the embodiment.
Fig. 5 is the effect picture of the focal area of Fig. 4 segmented automatically using SegCNN.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and a specific embodiment:
The following examples can help those skilled in the art to understand the present invention more fully, but do not limit the present invention in any way.
As shown in Fig. 1, an assistant diagnosis system for interpreting medical image features based on a deep learning method comprises the following steps:
First, the medical image data of lesion is read:
The medical images of lesions are read, including images of at least 10000 benign lesions and images of at least 10000 malignant lesions; the images may be in a standard picture format or be standard DICOM pictures.
2nd, medical image is pre-processed:
The lesion images read in process one are first converted to grayscale, and the gray values of the surrounding pixels are used to remove the annotations made by the doctor in the ultrasound image when measuring nodule-related quantities; Gaussian filtering is then applied for denoising, and finally gray-level histogram equalization is used to enhance contrast, yielding the preprocessed enhanced images.
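The final preprocessing step, gray-level histogram equalization, can be sketched as follows. This is a generic 8-bit routine and only an assumption about the variant used, since the patent does not fix the implementation (the label-removal inpainting and the Gaussian-filter parameters are likewise unspecified and omitted here):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram-equalize an 8-bit grayscale image to enhance contrast."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)   # gray-level histogram
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                   # normalized cumulative distribution
    lut = np.round(cdf * 255).astype(np.uint8)       # gray-level remapping table
    return lut[img]
```

An image whose gray levels cluster in a narrow band is spread over the full [0, 255] range, which is what "gray-level histogram equalization enhancing contrast" refers to.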
3rd, images are chosen and the first convolutional neural network architecture, i.e. CNN (convolutional neural network), is established; it automatically learns to segment the focal area, referred to as the region of interest, i.e. ROI (region of interest), and refines the lesion shape. The following steps are specifically included:
1st step: 20000 of the enhanced images preprocessed by process two are chosen, including 10000 images each of benign and malignant lesions;
2nd step: for each picture, the region of interest, i.e. the focal area, is first sketched out manually (by an expert); an automatic segmentation model is then trained through the first CNN architecture, and this CNN model is denoted SegCNN;
The network structure of the SegCNN consists of 15 convolutional layers and 4 down-sampling layers; the convolution-kernel sizes of the convolutional layers are respectively: 13 × 13 for the first layer, 11 × 11 for the second and third layers, 5 × 5 for the fourth layer, and 3 × 3 for each remaining layer; the strides of the convolutional layers are respectively: 2 for the first convolutional layer and 1 for all the rest; the down-sampling layers are all of size 3 × 3 with stride 2.
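With the kernel sizes and strides stated above, the spatial size of each feature map follows the usual convolution arithmetic. A minimal sketch, assuming zero padding and a 256 × 256 input (neither is stated in the patent), and an illustrative layer ordering, since the patent does not say where the 4 down-sampling layers sit among the 15 convolutional layers:

```python
def conv_out(size, kernel, stride, pad=0):
    """Output spatial size of a convolution or pooling layer:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# First few SegCNN layers on an assumed 256 x 256 input:
s = conv_out(256, 13, 2)   # first conv, 13x13 kernel, stride 2 -> 122
s = conv_out(s, 3, 2)      # one 3x3, stride-2 down-sampling layer -> 60
s = conv_out(s, 11, 1)     # second conv, 11x11 kernel, stride 1 -> 50
```

The same arithmetic applies to every remaining 3 × 3, stride-1 convolution and 3 × 3, stride-2 down-sampling layer.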
The specific method of training the automatic segmentation model SegCNN through the first CNN architecture is:
(1) Features are learned automatically by the convolutional layers and down-sampling layers of the CNN and then extracted; the specific steps are:
Step A: in a convolutional layer, the feature maps of the previous layer are convolved with a learnable convolution kernel, and the result is passed through an activation function to obtain an output feature map. Each output map convolves one input map or combines the convolutions of multiple input maps (what we select here is to combine the convolutions of multiple input maps):
x_j^l = f( Σ_{i∈M_j} x_i^{l−1} * k_ij^l + b_j^l )
Wherein, the symbol * denotes the convolution operator; l denotes the layer number; i denotes the i-th neuron node of layer l−1; j denotes the j-th neuron node of layer l; M_j denotes the set of selected input maps; x_i^{l−1} refers to the output of layer l−1, which serves as the input of layer l; f is the activation function, taken here to be the sigmoid function f(x) = 1/(1 + e^(−x)), where e denotes Euler's number 2.718281828 and e^x is the exponential function; k is the convolution kernel; b is the bias. Each output map is given an additive bias b, but for a specific output map, the convolution kernels applied to the different input maps are all different;
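The forward pass of Step A — each output map as the sigmoid of a sum of convolutions of the selected input maps plus an additive bias — can be sketched in NumPy. This is a naive "valid" convolution for illustration, not an efficient or authoritative implementation:

```python
import numpy as np

def sigmoid(x):
    """Activation f(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_valid(x, k):
    """'Valid' 2-D sliding-window convolution of map x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for u in range(out.shape[0]):
        for v in range(out.shape[1]):
            out[u, v] = np.sum(x[u:u + kh, v:v + kw] * k)
    return out

def conv_layer_forward(inputs, kernels, bias):
    """x_j = f( sum_{i in M_j} x_i * k_ij + b_j ): combine the convolutions
    of the selected input maps, add the shared additive bias, apply sigmoid."""
    acc = sum(conv2d_valid(x, k) for x, k in zip(inputs, kernels))
    return sigmoid(acc + bias)
```

Passing one kernel per selected input map reflects the statement that, for a given output map, the kernels convolved with the different input maps are all different.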
This step also requires a gradient computation to update the sensitivities, a sensitivity expressing how much the error changes when the bias b changes:
δ_j^l = β_j^{l+1} ( f′(s_j^l) ∘ up(δ_j^{l+1}) )
Wherein, l denotes the layer number; j denotes the j-th neuron node of layer l; ∘ denotes element-wise multiplication; δ denotes the sensitivity of an output neuron, that is, the rate of change of the bias b; s^l = W^l x^{l−1} + b^l, where x^{l−1} refers to the output of layer l−1, W is the weight and b is the bias; f is the activation function, taken here to be the sigmoid function f(x) = 1/(1 + e^(−x)), e denoting Euler's number 2.718281828 and e^x the exponential function; f′(x) is the derivative of f(x) (i.e. if f is the sigmoid function, then f′(x) = (1 − f(x)) f(x)); β denotes the weight shared within each layer; up(·) denotes an up-sampling operation (if the decimation factor of the down-sampling is n, the up-sampling operation simply copies each pixel n times horizontally and n times vertically, thereby restoring the original size);
The sensitivities of all nodes in the sensitivity map of layer l are then summed, giving a quick computation of the gradient of the bias b:
∂E/∂b_j = Σ_{u,v} (δ_j^l)_{uv}
Wherein, l denotes the layer number; j denotes the j-th neuron node of layer l; b denotes the bias; δ denotes the sensitivity of an output neuron, that is, the rate of change of the bias b; u, v denote the position (u, v) of the output maps; E is the error function, here E = (1/2) Σ_{h=1}^{C} (y_h^n − z_h^n)², where C denotes the dimension of the label: for a two-class problem the label can be written y_h ∈ {0, 1}, in which case C = 1, or y_h ∈ {(0,1), (1,0)}, in which case C = 2; y_h^n denotes the h-th dimension of the label corresponding to the n-th sample; z_h^n denotes the h-th output of the network output corresponding to the n-th sample;
Finally, using the back-propagation algorithm, stochastic gradient descent is carried out on the loss function to update the weights of the convolution kernels:
W ← W − η ∂E/∂W
Wherein, W is the weight parameter; E is the error function, E = (1/2) Σ_{h=1}^{C} (y_h^n − z_h^n)², where C denotes the dimension of the label: for a two-class problem the label can be written y_h ∈ {0, 1}, in which case C = 1, or y_h ∈ {(0,1), (1,0)}, in which case C = 2; y_h^n denotes the h-th dimension of the label corresponding to the n-th sample and z_h^n the h-th output of the network output corresponding to the n-th sample; η is the learning rate, i.e. the step size. Since many connections share the same weights, for a given weight the gradient must be computed over all connections associated with that weight, and these gradients are then summed:
∂E/∂k_ij^l = Σ_{u,v} (δ_j^l)_{uv} (p_i^{l−1})_{uv}
Wherein, l denotes the layer number; i indexes the input maps of layer l−1; j denotes the j-th neuron node of layer l; δ denotes the sensitivity of an output neuron, that is, the rate of change of the bias b; u, v denote the position (u, v) of the output maps; E is the error function defined above; k_ij^l is the convolution kernel; (p_i^{l−1})_{uv} is the patch of x_i^{l−1} that is multiplied element-wise with k_ij^l during the convolution, i.e. a region block of the input identical in size to the convolution kernel: the value at position (u, v) of the output convolution map is the result of the element-wise multiplication of the patch at position (u, v) of the previous layer with the convolution kernel k_ij^l;
Step B: a down-sampling layer with N input maps has exactly N output maps, except that each output map becomes smaller; then:
x_j^l = f( β_j^l down(x_j^{l−1}) + b_j^l )
Wherein, f is the activation function, taken here to be the sigmoid function f(x) = 1/(1 + e^(−x)), e denoting Euler's number 2.718281828 and e^x the exponential function; β_j^l denotes the weight shared within each layer; down(·) denotes a down-sampling function: all pixels of each n × n block of the input image are summed, so that the output image is reduced by a factor of n in both dimensions (here a fixed 3 × 3 block is taken around each element of the input image and the sum of all its elements is used as the value of that element in the output image, so that the output image is reduced 3 times in both dimensions); each output map corresponds to its own weight parameter β (a multiplicative bias) and an additive bias b;
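The down-sampling operation described here — summing a fixed block of the input map at each output position — might be sketched as follows, using the 3 × 3 window and stride 2 stated for the down-sampling layers:

```python
import numpy as np

def sum_pool(x, size=3, stride=2):
    """down(.): sum each size x size block of the input map, moving by stride."""
    H, W = x.shape
    oh = (H - size) // stride + 1
    ow = (W - size) // stride + 1
    out = np.empty((oh, ow))
    for u in range(oh):
        for v in range(ow):
            out[u, v] = x[u * stride:u * stride + size,
                          v * stride:v * stride + size].sum()
    return out
```

Each pooled value would then be scaled by the multiplicative bias β and shifted by the additive bias b before the sigmoid activation, per the formula above.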
The parameters β and b are updated by gradient descent:
δ_j^l = f′(s_j^l) ∘ conv2( δ_j^{l+1}, rot180(k_j^{l+1}), 'full' ),
∂E/∂b_j = Σ_{u,v} (δ_j^l)_{uv},  ∂E/∂β_j = Σ_{u,v} ( δ_j^l ∘ down(x_j^{l−1}) )_{uv}
Wherein, f′(x) refers to the derivative of the activation function f(x); conv2 is the two-dimensional convolution operator; rot180 is rotation by 180 degrees; 'full' refers to carrying out a complete (full) convolution; ∘ denotes element-wise multiplication; l denotes the layer number; i denotes the i-th neuron node of layer l; j denotes the j-th neuron node of layer l; b denotes the bias; δ denotes the sensitivity of an output neuron, that is, the rate of change of the bias b; u, v denote the position (u, v) of the output maps; E is the error function, with the same expression as above, i.e. E = (1/2) Σ_{h=1}^{C} (y_h^n − z_h^n)², where C denotes the dimension of the label: for a two-class problem the label can be written y_h ∈ {0, 1}, in which case C = 1, or y_h ∈ {(0,1), (1,0)}, in which case C = 2; y_h^n denotes the h-th dimension of the label of the n-th sample and z_h^n the h-th output of the network for the n-th sample; β is a weight parameter (generally taking values in [0, 1]); down(·) denotes a down-sampling function; k_j^{l+1} is the convolution kernel of layer l+1; x_j^{l−1} is the j-th output map of layer l−1; s^l = W^l x^{l−1} + b^l, where W is the weight parameter, b is the bias, and s_j^l is the j-th component of s^l.
Step C: the CNN learns combinations of feature maps automatically; the combination giving the j-th feature map is:
x_j^l = f( Σ_{i=1}^{N_in} α_ij ( x_i^{l−1} * k_i^l ) + b_j^l ),  s.t. Σ_i α_ij = 1 and 0 ≤ α_ij ≤ 1.
Wherein, the symbol * denotes the convolution operator; l denotes the layer number; i denotes the i-th neuron node of layer l; j denotes the j-th neuron node of layer l; f is the activation function, taken here to be the sigmoid function f(x) = 1/(1 + e^(−x)), e denoting Euler's number 2.718281828 and e^x the exponential function; x_i^{l−1} is the i-th component of the output of layer l−1; N_in denotes the number of input maps; k_i^l is the convolution kernel; b_j^l is the bias; α_ij denotes the weight, or contribution, of the i-th input map of layer l−1 in obtaining the j-th output map when the output maps of layer l−1 serve as the input of layer l;
(2) The feature combinations extracted in (1) are used with Softmax to automatically identify the focal area, output the segmentation probability map, and determine the automatic segmentation model. The specific Softmax identification process is: given a sample, a probability value is output that represents the probability of this sample belonging to each of several classes, and the loss function is:
J(θ) = −(1/m) [ Σ_{i=1}^{m} Σ_{j=1}^{c} 1{y^(i) = j} log( e^(θ_j^T x^(i)) / Σ_{l=1}^{c} e^(θ_l^T x^(i)) ) ] + (λ/2) Σ_{i=1}^{c} Σ_{j=0}^{n} θ_ij²
Wherein, m denotes the total number of samples; c denotes the total number of classes into which these samples can be divided; θ is a matrix in which each row holds the parameters corresponding to one class, i.e. a weight and a bias; 1{·} is an indicator function, i.e. when the expression inside the braces is true the result of the function is 1, and otherwise it is 0; λ is the parameter balancing the fidelity term (the first term) against the regularization term (the second term), and λ is taken positive here (its size is adjusted according to the experimental results); J(θ) refers to the loss function of the system; e denotes Euler's number 2.718281828 and e^x the exponential function; T is the transpose operator of matrix calculus; log denotes the natural logarithm, i.e. the logarithm with base e; n denotes the dimension of the weight and bias parameters; x^(i) is the input vector of the i-th sample; y^(i) is the label of the i-th sample. It is then solved using the gradient:
Wherein ∇_{θ_j} J(θ) = −(1/m) Σ_{i=1}^{m} [ x^(i) ( 1{y^(i) = j} − p(y^(i) = j | x^(i); θ) ) ] + λ θ_j; m denotes the total number of samples; θ is a matrix in which each row holds the parameters corresponding to one class, i.e. a weight and a bias; 1{·} is the indicator function as above; λ is the positive parameter balancing the fidelity term (the first term) against the regularization term (the second term), adjusted according to the experimental results; J(θ) refers to the loss function of the system and ∇_{θ_j} J(θ) is its gradient; e denotes Euler's number 2.718281828 and e^x the exponential function; T is the transpose operator of matrix calculus; log denotes the natural logarithm, i.e. the logarithm with base e; x^(i) is the input vector of the i-th sample; y^(i) is the label of the i-th sample. (What is used here is a new kind of Softmax classifier, namely a Softmax classifier with only two classes: for a medical image, the probabilities given by Softmax yield a probability map distinguishing all focal areas from non-focal areas, and from this map a coarse segmentation of the focal area is obtained;)
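The loss J(θ) and its gradient can be sketched in NumPy as follows; the interface (integer labels, one parameter row per class) is an illustrative choice, and using c = 2 classes matches the two-class segmentation Softmax described here:

```python
import numpy as np

def softmax_loss_grad(theta, X, y, lam):
    """Softmax-regression loss with weight decay and its gradient.

    theta : (c, n) matrix, one row of parameters per class
    X     : (m, n) inputs; y : (m,) integer labels in {0, ..., c-1}
    """
    m = X.shape[0]
    scores = X @ theta.T                                # theta_j^T x^(i)
    scores -= scores.max(axis=1, keepdims=True)         # numerical stability
    p = np.exp(scores)
    p /= p.sum(axis=1, keepdims=True)                   # p(y^(i) = j | x^(i); theta)
    ind = np.zeros_like(p)
    ind[np.arange(m), y] = 1.0                          # indicator 1{y^(i) = j}
    loss = -np.sum(ind * np.log(p)) / m + 0.5 * lam * np.sum(theta ** 2)
    grad = -(ind - p).T @ X / m + lam * theta           # matches the gradient above
    return loss, grad
```

For pixels of a medical image, the column p[:, 1] plays the role of the foreground probability in the segmentation probability map.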
(3) Use SegCNN to automatically segment all medical images, i.e. distinguish focal areas from non-focal areas, find the boundary of the focal area, and refine the segmented lesion shape; here we perform the refined segmentation by the graph-cut method. Specifically: let I : x ∈ Ω → R be 2D ultrasound image data defined on a region Ω ⊂ R², let S be the set of all pixels in Ω, and let N_x be the 6-neighborhood point set of pixel x. Suppose l_x ∈ {0, 1} is the label of pixel x, where 0 and 1 respectively indicate that the pixel belongs to the background (non-focal area) or the foreground (focal area). We then need to find the label set l = {l_x, x ∈ S} that minimizes the energy functional below,
Wherein E(l) = λ E_D(l) + E_B(l); the parameter λ adjusts the balance between the data penalty term E_D(l) and the boundary penalty term E_B(l). The area term D_x(l_x) describes the similarity of pixel x to the foreground or background. The edge-detection function B_xy(x, y) captures the discontinuity between pixels x and y, depending on the gray-value difference |I(x) − I(y)| and a constant term β. Next, we also need to define a gray-threshold function:
The gray-value interval [ζ, η] of the lesion can thus be roughly estimated from the initial focal area. A local characterization term formed from a group of feature distributions is defined; the selected features are the gray value I(x) of the image, an improved local binary pattern LBP_{P,r} and the local gray-level variance VAR_{P,r}. These features are combined into one joint feature, where τ, P and r are positive constants; here
Wherein I_p (p = 0, 1, …, P−1) are the gray values of the P points evenly distributed on the circle of radius r centered at c ∈ Ω, and I_c is the gray value at the circle center. H(x) is the Heaviside function, i.e. H(x) = 1 for x ≥ 0 and H(x) = 0 otherwise.
For each feature i, the cumulative histogram of the i-th feature of pixel x in the local neighborhood O(x) is computed, along with the average cumulative histogram of the i-th feature in the initialization area and its variance; the local characterization term can then be defined through W_1(·), the one-dimensional L¹ Wasserstein distance between these histograms. Finally, combining the segmentation probability map L(x) of the focal area obtained by SegCNN, the gray-threshold function F(x) and the local characterization P(x), the data-term expression D_x(l_x) is obtained as,
D_x(l_x) = max(−R(x), 0) · l_x + max(R(x), 0) · (1 − l_x)
Here γ is a positive constant. We thus obtain the graph-cut model and can carry out refined segmentation of the focal area.
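To make the energy E(l) = λE_D(l) + E_B(l) concrete, the sketch below minimizes it by brute-force enumeration on a tiny one-dimensional pixel strip. The data term is a simple squared-distance stand-in for the patent's D_x built from L(x), F(x) and P(x), and a real implementation would minimize the energy with a max-flow/min-cut algorithm rather than enumeration:

```python
import itertools
import math

def segment_bruteforce(intensity, lam=1.0, beta=0.5, fg_mean=200.0, bg_mean=50.0):
    """Minimize E(l) = lam * sum_x D_x(l_x) + sum_{adjacent x,y} B_xy * 1{l_x != l_y}
    over all labelings of a 1-D pixel strip (1 = foreground/lesion, 0 = background)."""
    n = len(intensity)

    def D(x, l):
        # Region term: distance of the gray value to an assumed class mean.
        mean = fg_mean if l == 1 else bg_mean
        return ((intensity[x] - mean) / 255.0) ** 2

    def B(x, y):
        # Boundary term: cutting across a strong edge is cheap.
        return math.exp(-beta * ((intensity[x] - intensity[y]) / 255.0) ** 2)

    best, best_e = None, float("inf")
    for labels in itertools.product((0, 1), repeat=n):
        e = lam * sum(D(x, labels[x]) for x in range(n))
        e += sum(B(x, x + 1) for x in range(n - 1) if labels[x] != labels[x + 1])
        if e < best_e:
            best, best_e = list(labels), e
    return best
```

On a strip with dark pixels followed by bright ones, the minimizer places a single cut at the strong edge, which is exactly the behavior the boundary term B_xy is designed to encourage.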
3rd step: the SegCNN model obtained in the 2nd step is applied to all lesion images, i.e. the 20000 images chosen in the 1st step are segmented automatically; a graph-cut model is then established, and the focal areas obtained by SegCNN are given an automatic refined segmentation. The ROIs, i.e. all benign and malignant lesions, are finally obtained.
4th, a second convolutional neural network architecture is established to interpret benign and malignant lesion features automatically; this CNN model is denoted RecCNN. The ROIs (i.e. all benign and malignant lesions) segmented automatically by the SegCNN model of process three are divided into p groups, and the feature data are then normalized: after the SegCNN model segments the lesions automatically, the RecCNN model extracts the features of these lesions, and a linear transformation is applied to the extracted features so that the resulting values are mapped to [0, 1].
Wherein, the value range of p is the positive integers greater than or equal to 2;
The network structure of the RecCNN consists of 6 convolutional layers, 4 down-sampling layers and 3 fully connected layers; the neuron-node numbers of the three fully connected layers are respectively 4096, 4096 and 1. The convolution-kernel sizes of the convolutional layers are respectively: 13 × 13 for the first layer, 11 × 11 for the second and third layers, 5 × 5 for the fourth layer, and 3 × 3 for each remaining layer; the strides are respectively: 2 for the first convolutional layer and 1 for all the rest; the down-sampling layers are all of size 3 × 3 with stride 2.
5th, p−1 of the groups of data in step four are selected as the training set, which is used to train the RecCNN model; the remaining group serves as the test set, which is used to test the trained RecCNN model.
The RecCNN model is trained with the training set and used to interpret medical image features: it extracts features from all automatically segmented focal areas, which are then analyzed. In the specific training process, the method of feature extraction is the same as in the automatic segmentation by the SegCNN model in process three, i.e. features are in both cases extracted by the respective convolutional and pooling layers; the roles of these two classes of functional layers, their calculation formulas and their update methods are identical. However, the automatic segmentation part of the SegCNN model extracts features for the non-focal area and the focal area simultaneously, whereas the RecCNN model in this process five targets the focal area only; moreover, the convolution-kernel sizes, pooling-window sizes, strides and padding settings of the convolutional and pooling layers of the RecCNN and SegCNN models differ, so their respective convolutional and pooling layers have different scopes of action.
A classifier capable of multi-class classification is then constructed using Softmax and used to analyze the extracted features. This process in fact amounts to solving for the optimal value of a loss function, namely optimizing the loss function
J(θ) = −(1/m) [ Σ_{i=1}^{m} Σ_{j=1}^{c} 1{y^(i) = j} log( e^(θ_j^T x^(i)) / Σ_{l=1}^{c} e^(θ_l^T x^(i)) ) ] + (λ/2) Σ_{i=1}^{c} Σ_{j=0}^{n} θ_ij²
Wherein, i refers to the i-th sample;
j refers to the j-th class; l refers to the l-th class; m denotes the total number of samples, m being an arbitrary positive integer; c denotes the total number of classes into which these samples can be divided, c being an arbitrary positive integer; θ is a matrix in which each row holds the parameters corresponding to one class, i.e. a weight and a bias; θ_j^T refers to the transpose of the parameter vector of the j-th class, θ_l^T to the transpose of the parameter vector of the l-th class, and θ_ij to the element in row i, column j of the parameter matrix; 1{·} is an indicator function, i.e. when the expression inside the braces is true the result of the function is 1, and otherwise it is 0; λ is the parameter balancing the fidelity term (the first term) against the regularization term (the second term), and λ is taken positive here (adjusted according to the experimental results); e denotes Euler's number 2.718281828 and e^x the exponential function; T is the transpose operator of matrix calculus; log denotes the natural logarithm, i.e. the logarithm with base e; x^(i) is the input vector of the i-th sample; y^(i) is the label of the i-th sample. The class number c of the Softmax classifier equals 5, the classes representing respectively the five feature categories of the lesion: echo features, edge features, structure features, calcification features and aspect-ratio features. Each class has different subclasses: the echo features have four subclasses (hyperechoic; isoechoic; hypoechoic or extremely hypoechoic; anechoic), the edge features have two subclasses (well-defined margin; ill-defined margin), the structure features have four subclasses (solid; predominantly solid; predominantly cystic; cystic), the calcification features have two subclasses (with microcalcification; without microcalcification), and the aspect-ratio features have two subclasses (greater than 1; less than or equal to 1). The output obtained by the stochastic gradient descent method is the probability that the feature vector belongs to each subclass of each feature category. The detailed process is similar to the prediction method in the automatic segmentation by the SegCNN model in process three: in both cases a loss function is optimized, only here it is a multi-class Softmax function. According to the probabilities with which the output feature vector belongs to the feature categories, a classification label is predicted, i.e. the features of the lesion are obtained; the features of the lesion are thereby classified, and the feature types corresponding to benign and malignant lesions can be further obtained.
6th, step 5 is repeated to perform p cross-checks: each time, a different group of the p groups of data divided in process four is selected as the test set and the remaining p−1 groups serve as the training set, until every group of data has served as the test set.
Through the p cross-checks of process five and process six, the weight and bias parameters of the convolutional neural network model RecCNN can be preserved each time and assessed according to the accuracy on the test set; the accuracy is calculated as AC = TN / (TN + FN), where AC denotes the accuracy, TN the number of correctly classified samples and FN the number of misclassified samples. Since each accuracy differs little from the average of the p runs, the group of weight and bias parameters whose accuracy is slightly higher is taken as the parameters of RecCNN; the optimal parameters of RecCNN are thereby obtained and the RecCNN model is trained, so that the assistant diagnosis system for interpreting medical image features based on the deep learning method is finally determined.
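The p cross-checks with accuracy AC = TN / (TN + FN) can be sketched generically; the interleaved split and the callback interface are illustrative choices, not the patent's implementation:

```python
def p_fold_accuracies(samples, labels, p, train_and_predict):
    """Run p cross-checks: each of p groups serves once as the test set.

    train_and_predict(train_x, train_y, test_x) -> predicted test labels.
    Per-fold accuracy is AC = TN / (TN + FN), with TN the correctly
    classified and FN the misclassified test samples.
    """
    n = len(samples)
    folds = [list(range(k, n, p)) for k in range(p)]   # simple interleaved split
    accuracies = []
    for k in range(p):
        test_idx = set(folds[k])
        train_idx = [i for i in range(n) if i not in test_idx]
        pred = train_and_predict([samples[i] for i in train_idx],
                                 [labels[i] for i in train_idx],
                                 [samples[i] for i in folds[k]])
        tn = sum(1 for i, yhat in zip(folds[k], pred) if yhat == labels[i])
        fn = len(folds[k]) - tn
        accuracies.append(tn / (tn + fn))
    return accuracies
```

The fold whose accuracy is highest (or closest above the p-fold average) then supplies the weight and bias parameters kept as the final RecCNN parameters.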
A lesion image that needs to be interpreted is input into this assistant diagnosis system, whereupon the five categories of features of the lesion can be obtained and each category of features analyzed; the lesion can then be diagnosed as benign or malignant according to these features.
Fig. 2 and Fig. 3 illustrate, for the experiments, a raw ultrasound image of a lesion used to train the SegCNN model and the mask picture of the corresponding focal area; Fig. 4 and Fig. 5 illustrate a raw ultrasound image of a lesion together with the effect picture of the focal-area mask segmented automatically using SegCNN.
Finally, it should be noted that what is listed above are only specific embodiments of the present invention. Obviously, the present invention is not restricted to the above embodiments and may have many variations. All deformations that a person of ordinary skill in the art can directly derive or associate from the disclosure of the present invention shall be considered within the protection scope of the present invention.

Claims (2)

1. An assistant diagnosis system for interpreting medical image features based on a deep learning method, characterized by comprising the following processes:
First, the medical image data of lesion is read:
The medical images of lesions are read, including images of at least 10000 benign lesions and images of at least 10000 malignant lesions;
2nd, medical image is pre-processed:
The lesion images read in process one are first converted to grayscale, and the gray values of the surrounding pixels are used to remove the annotations made by the doctor in the ultrasound image when measuring nodule-related quantities; Gaussian filtering is then applied for denoising, and finally gray-level histogram equalization is used to enhance contrast, yielding the preprocessed enhanced images;
3rd, images are chosen and the first convolutional neural network architecture, i.e. CNN, is established; it automatically learns to segment the focal area, referred to as the region of interest, i.e. ROI, and refines the lesion shape; the following steps are specifically included:
1st step: 20000 of the enhanced images preprocessed by process two are chosen, including 10000 images each of benign and malignant lesions;
2nd step: for each picture, the region of interest, i.e. the focal area, is first sketched out manually; an automatic segmentation model is then trained through the first CNN architecture, and this automatic segmentation model is denoted the SegCNN model;
The network structure of the SegCNN model consists of 15 convolutional layers and 4 down-sampling layers; the convolution-kernel sizes of the convolutional layers are respectively: 13 × 13 for the first layer, 11 × 11 for the second and third layers, 5 × 5 for the fourth layer, and 3 × 3 for each remaining layer; the strides of the convolutional layers are respectively: 2 for the first convolutional layer and 1 for all the rest; the down-sampling layers are all of size 3 × 3 with stride 2;
3rd step: the SegCNN model obtained in the 2nd step is applied to all lesion images, i.e. the 20000 images chosen in the 1st step are segmented automatically; a graph-cut model is then established, and the focal areas obtained by the SegCNN model are given an automatic refined segmentation, finally obtaining the ROIs, i.e. all benign and malignant lesions;
4th, a CNN model of a second convolutional neural network architecture is established to interpret benign and malignant lesion features automatically; this CNN model is denoted the RecCNN model;
The network structure of the RecCNN model consists of 6 convolutional layers, 4 down-sampling layers and 3 fully connected layers; the neuron-node numbers of the 3 fully connected layers are respectively 4096, 4096 and 1; the convolution-kernel sizes of the convolutional layers are respectively: 13 × 13 for the first layer, 11 × 11 for the second and third layers, 5 × 5 for the fourth layer, and 3 × 3 for each remaining layer; the strides of the convolutional layers are respectively: 2 for the first convolutional layer and 1 for all the rest; the down-sampling layers are all of size 3 × 3 with stride 2;
The ROIs segmented automatically by the SegCNN model of process three are divided into p groups for training the RecCNN model; p is a positive integer not less than 2;
5th, p−1 of the groups of data in process four are selected as the training set, which is used to train the RecCNN model; the remaining group of data serves as the test set, which is used to test the trained RecCNN model;
The RecCNN model is trained with the training set and, in order to interpret medical image features, can extract features from all automatically segmented focal areas;
Then a classifier capable of multi-class classification is constructed using Softmax and used to analyze the extracted features; this process is the solution of the optimal value of a loss function, i.e. the optimization of the loss function
J(θ) = −(1/m) [ Σ_{i=1}^{m} Σ_{j=1}^{c} 1{y^(i) = j} log( e^(θ_j^T x^(i)) / Σ_{l=1}^{c} e^(θ_l^T x^(i)) ) ] + (λ/2) Σ_{i=1}^{c} Σ_{j=0}^{n} θ_ij²
Wherein, i refers to the i-th sample; j refers to the j-th class; l refers to the l-th class; m denotes the total number of samples, m being an arbitrary positive integer; c denotes the total number of classes into which these samples can be divided, c being an arbitrary positive integer; θ is a matrix in which each row holds the parameters corresponding to one class, i.e. a weight and a bias; θ_j^T refers to the transpose of the parameter vector of the j-th class and θ_l^T to the transpose of the parameter vector of the l-th class; θ_ij refers to the element in row i, column j of the parameter matrix; 1{·} is an indicator function, i.e. when the expression inside the braces is true the result of the function is 1, and otherwise it is 0; λ is the parameter balancing the fidelity term and the regularization term, and λ is taken positive here; e denotes Euler's number 2.718281828 and e^x the exponential function; T is the transpose operator of matrix calculus; log denotes the natural logarithm, i.e. the logarithm with base e; x^(i) is the input vector of the i-th sample; y^(i) is the label of the i-th sample;
The class number c of the Softmax classifier equals 5, i.e. the classes represent five feature categories of the lesion: echo features, margin features, structure features, calcification features, and aspect-ratio features; each category has its own subclasses: the echo feature has four subclasses (hyperechoic; isoechoic; hypoechoic or markedly hypoechoic; anechoic); the margin feature has two subclasses (well-defined; ill-defined); the structure feature has four subclasses (predominantly solid; solid; predominantly cystic; cystic); the calcification feature has two subclasses (microcalcification; no microcalcification); the aspect-ratio feature has two subclasses (greater than 1; less than or equal to 1); stochastic gradient descent then yields, for the output feature vector, the probability of belonging to each subclass of each feature category;
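The regularized Softmax loss described above can be sketched as follows (a minimal NumPy sketch, not the patent's implementation; the max-subtraction is only a standard numerical-stability trick):

```python
import numpy as np

def softmax_loss(theta, X, y, lam):
    """J(theta) for softmax regression with an L2 regularization term.

    theta: (c, d) matrix, one row of parameters per class
    X:     (m, d) feature vectors; y: (m,) integer labels in [0, c)
    lam:   weight lambda of the regularization term
    """
    m = X.shape[0]
    scores = X @ theta.T                          # theta_j^T x^(i)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(scores)
    p /= p.sum(axis=1, keepdims=True)             # class probabilities
    fidelity = -np.log(p[np.arange(m), y]).mean()
    return fidelity + 0.5 * lam * np.sum(theta ** 2)
```

With all-zero parameters every one of the c classes is equally likely, so the unregularized loss equals log c.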
6. Repeat process five to carry out p-fold cross-validation, i.e., from the p groups of data divided in process four, a different group is selected as the test set each time, with the remaining p−1 groups as the training set, until every group has served as the test set;
In each round of the p-fold cross-validation, the weights and bias parameters of the convolutional neural network model RecCNN are saved, together with the accuracy assessed on the test set; the accuracy is computed as AC = TN/(TN + FN), where AC denotes the accuracy, TN the number of correctly classified samples, and FN the number of misclassified samples; finally, the weights and bias parameters of the cross-validation round with the highest accuracy are taken as the optimal parameters of the RecCNN model, yielding the trained RecCNN model, i.e. finally determining the assistant diagnosis system that interprets medical image features based on the deep learning method;
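The p-fold selection loop described above can be sketched as follows (a minimal sketch with pluggable train_fn/predict_fn stand-ins, since the actual RecCNN training is outside the scope of a short example):

```python
import numpy as np

def accuracy(pred, truth):
    # AC = TN / (TN + FN): correctly classified samples over all test samples
    tn = int(np.sum(pred == truth))
    fn = int(np.sum(pred != truth))
    return tn / (tn + fn)

def p_fold_select(samples, labels, p, train_fn, predict_fn):
    """Each of the p groups serves once as the test set; the parameters of
    the round with the highest test accuracy are kept as the optimum."""
    groups = np.array_split(np.arange(len(samples)), p)
    best_ac, best_params = -1.0, None
    for k in range(p):
        test_idx = groups[k]
        train_idx = np.concatenate([g for i, g in enumerate(groups) if i != k])
        params = train_fn(samples[train_idx], labels[train_idx])
        ac = accuracy(predict_fn(params, samples[test_idx]), labels[test_idx])
        if ac > best_ac:
            best_ac, best_params = ac, params
    return best_ac, best_params
```

Here train_fn and predict_fn are placeholders for training the RecCNN model and running inference with a saved parameter set.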
A lesion image to be interpreted is input into this assistant diagnosis system that interprets medical image features based on the deep learning method; the features of the lesion are then obtained and each feature category is analyzed, so that benign and malignant lesions can be diagnosed from these features.
2. The assistant diagnosis system for interpreting medical image features based on a deep learning method according to claim 1, characterized in that steps 2 and 3 of process three are specifically:
(1) Features are learned and extracted automatically by the convolutional layers and down-sampling layers of the CNN, with the following specific steps:
Step A: In a convolutional layer, the feature maps of the previous layer are convolved with a learnable convolution kernel and then passed through an activation function to obtain the output feature map; each output is the convolution of one input, or combines the convolutions of several inputs:

x_j^l = f( Σ_{i∈M_j} x_i^{l−1} * k_ij^l + b_j^l )

Wherein the symbol * denotes the convolution operator; l denotes the layer index; i denotes the i-th neuron node of layer l−1; j denotes the j-th neuron node of layer l; M_j denotes the set of selected input maps; x^{l−1} refers to the output of layer l−1, which serves as the input of layer l, and x_j^l refers to the j-th component of the output of layer l; f is the activation function, here the sigmoid function f(x) = 1/(1 + e^{−x}), where e denotes Euler's number 2.718281828 and e^x is the exponential function; k is the convolution kernel, k_ij^l referring to the kernel of layer l between input map i and output map j; b is the bias, b_j^l referring to the j-th component of the bias of layer l; each output map is given an additive bias b, but for a given output map, the kernels convolving the individual input maps are different;
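The convolution-plus-sigmoid forward pass of Step A can be sketched as follows (a minimal NumPy sketch for a single output map; the explicit loops favor clarity over speed):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def conv_forward(inputs, kernels, bias):
    """x_j^l = f( sum_{i in M_j} x_i^{l-1} * k_ij^l + b_j^l ) for one output map j.

    inputs:  list of 2-D input maps x_i^{l-1} (the selected set M_j)
    kernels: list of 2-D kernels k_ij^l, one per input map
    bias:    scalar additive bias b_j^l
    """
    H, W = inputs[0].shape
    kh, kw = kernels[0].shape
    s = np.zeros((H - kh + 1, W - kw + 1))      # 'valid' output size
    for x, k in zip(inputs, kernels):
        kf = k[::-1, ::-1]                      # flip kernel: convolution, not correlation
        for u in range(s.shape[0]):
            for v in range(s.shape[1]):
                s[u, v] += np.sum(x[u:u + kh, v:v + kw] * kf)
    return sigmoid(s + bias)
```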
The sensitivities are then updated by gradient computation; the sensitivity expresses how much the error changes when the bias b changes:

δ^l = (W^{l+1})^T δ^{l+1} ∘ f′(s^l)

Wherein l denotes the layer index; j denotes the j-th neuron node of layer l; ∘ denotes element-wise multiplication; δ denotes the sensitivity of the output neurons, i.e. the rate of change with respect to the bias b, δ_j^l referring to the j-th component of the sensitivity of layer l and δ_j^{l+1} to the j-th component of the sensitivity of layer l+1; s^l = W^l x^{l−1} + b^l, where x^{l−1} refers to the output of layer l−1, W is the weight, and b is the bias, s_j^l referring to the j-th component of s^l, W^l to the weight parameter of layer l, and b^l to the bias of layer l; f is the activation function, here the sigmoid function f(x) = 1/(1 + e^{−x}), e denoting Euler's number 2.718281828 and e^x being the exponential function; f′(x) is the derivative of f(x);
Then all nodes of the sensitivity map of layer l are summed to quickly compute the gradient of the bias b:

∂E/∂b_j = Σ_{u,v} (δ_j^l)_{u,v}

Wherein l denotes the layer index; j denotes the j-th neuron node of layer l; b denotes the bias, b_j^l referring to the j-th component of the bias of layer l; δ denotes the sensitivity of the output neurons, i.e. the rate of change with respect to the bias b; (u, v) denotes the (u, v) position of the output map, (δ_j^l)_{u,v} referring to the element of the layer-l sensitivity at position (u, v); E is the error function, here E = (1/2) Σ_{h=1..C} (t_h^n − y_h^n)², where C denotes the dimension of the label, t_h^n the h-th dimension of the label corresponding to the n-th sample, and y_h^n the h-th output of the network for the n-th sample;
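The squared-error function E and the bias gradient above can be sketched as (a minimal NumPy sketch):

```python
import numpy as np

def squared_error(t, y):
    # E = 1/2 * sum_{h=1..C} (t_h - y_h)^2 for one sample with a C-dimensional label
    return 0.5 * float(np.sum((np.asarray(t, float) - np.asarray(y, float)) ** 2))

def bias_gradient(delta_j):
    # dE/db_j: sum the sensitivity map delta_j^l over all (u, v) positions
    return float(np.sum(delta_j))
```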
Finally, using the back-propagation algorithm, stochastic gradient descent is applied to the loss function to compute the weights of the convolution kernels:

ΔW^l = −η · ∂E/∂W^l

Wherein W is the weight parameter and ΔW refers to the change of the weight parameter; W^l refers to the weight parameter of layer l; E is the error function, E = (1/2) Σ_{h=1..C} (t_h^n − y_h^n)², where C denotes the dimension of the label, t_h^n the h-th dimension of the label corresponding to the n-th sample, and y_h^n the h-th output of the network for the n-th sample; η is the learning rate, i.e. the step size; since many connections share the same weights, for a given weight the gradient is computed at every connection related to that weight, and these gradients are then summed:

∂E/∂k_ij^l = Σ_{u,v} (δ_j^l)_{u,v} · (p_i^{l−1})_{u,v}

Wherein l denotes the layer index; i denotes the i-th input map and j the j-th neuron node of layer l; b denotes the bias, and δ denotes the sensitivity of the output neurons, i.e. the rate of change with respect to b; (u, v) denotes the (u, v) position of the output map, (δ_j^l)_{u,v} referring to the element of the layer-l sensitivity at position (u, v); E is the error function as above, i.e. E = (1/2) Σ_{h=1..C} (t_h^n − y_h^n)², where C denotes the dimension of the label, t_h^n the h-th dimension of the label corresponding to the n-th sample, and y_h^n the h-th output of the network for the n-th sample; k_ij^l is the convolution kernel; (p_i^{l−1})_{u,v} is the patch of x_i^{l−1} that is multiplied element-wise with k_ij^l during convolution, i.e. the image region block of the same size as the convolution kernel; the value at the (u, v) position of the output convolution map is the result of element-wise multiplying the patch at the (u, v) position of the previous layer with the convolution kernel k_ij^l;
Step B: A down-sampling layer has N input maps and N output maps, except that each output map becomes smaller; then:

x_j^l = f( β_j^l · down(x_j^{l−1}) + b_j^l )

Wherein x_j^l refers to the j-th component of the output of layer l, and x_j^{l−1} refers to the j-th component of the output of layer l−1; f is the activation function, here the sigmoid function f(x) = 1/(1 + e^{−x}), e denoting Euler's number 2.718281828 and e^x being the exponential function; β_j^l denotes the weight shared within each layer; down(·) denotes a down-sampling function; b_j^l refers to the j-th component of the bias of layer l; all pixels of each distinct n×n block of the input image are summed, so that the output image shrinks by a factor of n in both dimensions, n being a positive integer; each output map corresponds to its own multiplicative weight parameter β and its own additive bias b;
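The down-sampling forward pass of Step B can be sketched as follows (a minimal NumPy sketch: the n×n blocks are summed, scaled by β, biased, and passed through the sigmoid):

```python
import numpy as np

def downsample_forward(x, n, beta, b):
    """x_j^l = f( beta_j^l * down(x_j^{l-1}) + b_j^l ) for one map.

    down(.) sums each distinct n x n block, shrinking the map n-fold
    in both dimensions; beta and b are the map's scalar parameters.
    """
    H, W = x.shape
    x = x[:H - H % n, :W - W % n]                     # drop any ragged border
    pooled = x.reshape(H // n, n, W // n, n).sum(axis=(1, 3))
    return 1.0 / (1.0 + np.exp(-(beta * pooled + b)))  # sigmoid activation
```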
The parameters β and b are updated by gradient descent:

δ_j^l = f′(s_j^l) ∘ conv2( δ_j^{l+1}, rot180(k_j^{l+1}), 'full' )

∂E/∂b_j = Σ_{u,v} (δ_j^l)_{u,v},  ∂E/∂β_j = Σ_{u,v} ( δ_j^l ∘ down(x_j^{l−1}) )_{u,v}

Wherein f′(x) refers to the derivative of the activation function f(x); ∘ denotes element-wise multiplication; conv2 is the two-dimensional convolution operator; rot180 denotes rotation by 180 degrees; 'full' means that a complete (full) convolution is carried out; l denotes the layer index; i denotes the i-th and j the j-th neuron node of layer l; b denotes the bias, b_j referring to the j-th component of the bias parameters; δ denotes the sensitivity of the output neurons, i.e. the rate of change with respect to b, δ_j^l referring to the j-th component of the sensitivity of layer l and δ_j^{l+1} to the j-th component of the sensitivity of layer l+1; (u, v) denotes the (u, v) position of the output map, (δ_j^l)_{u,v} referring to the element of the layer-l sensitivity at position (u, v); E is the error function with the same expression as above, i.e. E = (1/2) Σ_{h=1..C} (t_h^n − y_h^n)², where C denotes the dimension of the label, t_h^n the h-th dimension of the label corresponding to the n-th sample, and y_h^n the h-th output of the network for the n-th sample; β is the weight parameter, β_j referring to the j-th component of the weight parameter; down(·) denotes a down-sampling function; k_j^{l+1} is the convolution kernel of layer l+1; x_j^{l−1} is the j-th neuron node of the output of layer l−1; s^l = W^l x^{l−1} + b^l, where W is the weight parameter and b is the bias, s_j^l being the j-th component of s^l;
Step C: The CNN learns combinations of feature maps automatically; the j-th combined feature map is then:

x_j^l = f( Σ_{i=1..N_in} α_ij ( x_i^{l−1} * k_i^l ) + b_j^l )

s.t. Σ_i α_ij = 1, and 0 ≤ α_ij ≤ 1.

Wherein the symbol * denotes the convolution operator; l denotes the layer index; i denotes the index of the input map; j denotes the j-th neuron node of layer l; f is the activation function, here the sigmoid function f(x) = 1/(1 + e^{−x}), e denoting Euler's number 2.718281828 and e^x being the exponential function; x_i^{l−1} is the i-th component of the output of layer l−1, and x_j^l refers to the j-th component of the output of layer l; N_in denotes the number of input maps; k_i^l is the convolution kernel; b_j^l is the bias; α_ij denotes the weight, or contribution, of the i-th input map in forming the j-th output map of layer l, the output maps of layer l−1 serving as the inputs of layer l;
(2) The lesion region is automatically identified by combining the features extracted in step (1) with Softmax, the segmentation probability map is output, and the model for automatic segmentation is determined; the specific Softmax identification process is: given a sample, a probability value is output that represents the probability of this sample belonging to each of the classes, and the loss function is:

J(θ) = −(1/m)[ Σ_{i=1..m} Σ_{j=1..c} 1{y^(i)=j} · log( e^{θ_j^T x^(i)} / Σ_{l=1..c} e^{θ_l^T x^(i)} ) ] + (λ/2) Σ_i Σ_j θ_ij²

Wherein i refers to the i-th sample; j refers to the j-th class; l indexes the classes in the inner sum of the denominator; m means there are m samples in total, m being any positive integer; c means the samples can be divided into c classes in total, c being any positive integer; θ is a matrix in which each row holds the parameters corresponding to one class, i.e. weight and bias; θ_j^T refers to the transpose of the parameter vector of class j, θ_l^T refers to the transpose of the parameter vector of class l, and θ_ij refers to the element in row i, column j of the parameter matrix; 1{·} is an indicator function, i.e. when the expression in braces is true the function equals 1, otherwise 0; λ is the parameter balancing the fidelity term against the regularization term, λ taking a positive value here; J(θ) refers to the loss function of the system; e denotes Euler's number 2.718281828, e^x being the exponential function; T is the transposition operator of matrix calculus; log denotes the natural logarithm, i.e. the logarithm with base e; x^(i) is the feature vector of the i-th sample; y^(i) is the label of the i-th sample; the solution is then obtained by gradient descent:

∇_{θ_j} J(θ) = −(1/m) Σ_{i=1..m} [ x^(i) ( 1{y^(i)=j} − p(y^(i)=j | x^(i); θ) ) ] + λ θ_j

Wherein θ_j^T, i, j, c, l, and θ_l^T have the same meanings as in the loss function J(θ) above; m means there are m samples in total; θ is a matrix in which each row holds the parameters corresponding to one class, i.e. weight and bias; θ_j refers to the parameters corresponding to class j; 1{·} is the indicator function as above; λ is the parameter balancing the fidelity term against the regularization term, λ taking a positive value here; J(θ) refers to the loss function of the system, and ∇_{θ_j} J(θ) is its gradient; p(y^(i)=j | x^(i); θ) = e^{θ_j^T x^(i)} / Σ_{l=1..c} e^{θ_l^T x^(i)} is the probability that sample i belongs to class j; e denotes Euler's number 2.718281828, e^x being the exponential function; T is the transposition operator of matrix calculus; x^(i) is the feature vector of the i-th sample; y^(i) is the label of the i-th sample;
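The gradient above can be sketched as (a minimal NumPy sketch, consistent with the loss J(θ); not the patent's implementation):

```python
import numpy as np

def softmax_grad(theta, X, y, lam):
    """grad_{theta_j} J = -(1/m) sum_i x^(i) * (1{y^(i)=j} - p(y^(i)=j|x^(i);theta))
    + lam * theta_j, returned for all classes as a (c, d) matrix."""
    m = X.shape[0]
    scores = X @ theta.T
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    p = np.exp(scores)
    p /= p.sum(axis=1, keepdims=True)               # p(y = j | x; theta)
    onehot = np.eye(theta.shape[0])[y]              # 1{y^(i) = j}
    return -(onehot - p).T @ X / m + lam * theta
```

A gradient-descent step is then θ ← θ − η·∇J(θ) for a chosen learning rate η.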
(3) The medical images are all automatically segmented using SegCNN, i.e. lesion regions are distinguished from non-lesion regions, the boundary of the lesion region is found, and the segmented lesion shape is refined; here the refinement segmentation uses the graph-cut method, specifically: let I: x ∈ V → R denote the 2D ultrasound image data defined on the region V, let S be the set of all pixels in V, and let N_x be the 6-neighborhood point set of pixel x; assume l_x ∈ {0, 1} is the label of pixel x, where 0 and 1 indicate that the pixel belongs to the background and the foreground, respectively; then the label set l = {l_x, x ∈ S} minimizing the following energy functional is sought:

E(l) = λ · E_D(l) + E_B(l)

Wherein E_D(l) = Σ_{x∈S} D_x(l_x); the parameter λ adjusts the balance between the data penalty term E_D(l) and the boundary penalty term E_B(l), λ taking any real value; V refers to the region of the image; the region term D_x(l_x) describes the similarity of pixel x to the foreground or the background; the edge detection function B_xy(x, y) characterizes the discontinuity between pixels x and y, with B_xy(x, y) = e^{−β(I(x) − I(y))²}, where β is a constant term, I(x) refers to the gray value of the image at pixel x, and I(y) refers to the gray value of the image at pixel y; next, a gray threshold function is defined:
F(x) = 1 if ζ ≤ I(x) ≤ η, and F(x) = 0 otherwise,

Wherein ζ refers to the minimum gray value of the pixels in the lesion region and η refers to the maximum gray value of the pixels in the lesion region; the gray value interval [ζ, η] of the lesion can thus be roughly estimated from the initial lesion region; a local characterization term formed by a group of feature distributions is defined, the selected features being the gray value I(x) of the image, the improved local binary pattern I^{LBP}_{P,r}, and the local gray variance VAR_{P,r}; these features are combined into one joint feature, where τ, P, r are constants; here

I^{LBP}_{P,r} = Σ_{p=0..P−1} H(I_p − I_m) · 2^p,  VAR_{P,r} = (1/P) Σ_{p=0..P−1} (I_p − I_m)²,

Wherein I_p (p = 0, 1, …, P−1) are the gray values of P points evenly distributed on the circle of radius r centered at c ∈ Ω, and I_c is the gray value at the circle center; I_m refers to the mean of the P gray values on the circle of radius r centered at c ∈ Ω; sign refers to the sign function: when x is greater than 0, sign(x) is greater than 0, otherwise sign(x) is less than 0; H(x) is the Heaviside function, i.e. H(x) = 1 for x ≥ 0 and H(x) = 0 for x < 0;
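The circular sampling behind the improved local binary pattern and the local gray variance can be sketched as follows (a minimal NumPy sketch with nearest-neighbor sampling; thresholding each I_p against the circle mean I_m is an assumption about the patent's "improved" variant):

```python
import math
import numpy as np

def circle_samples(img, cy, cx, P, r):
    # gray values I_p at P points evenly distributed on the circle of
    # radius r centered at (cy, cx); nearest-neighbor sampling for simplicity
    vals = []
    for p in range(P):
        a = 2.0 * math.pi * p / P
        vals.append(img[int(round(cy + r * math.sin(a))),
                        int(round(cx + r * math.cos(a)))])
    return np.asarray(vals, dtype=float)

def improved_lbp_and_var(img, cy, cx, P=8, r=1):
    Ip = circle_samples(img, cy, cx, P, r)
    Im = Ip.mean()                          # mean gray value I_m on the circle
    # improved LBP: threshold each I_p against I_m (instead of the center I_c)
    code = sum(int(v >= Im) << p for p, v in enumerate(Ip))
    var = float(np.mean((Ip - Im) ** 2))    # local gray variance VAR_{P,r}
    return code, var
```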
Denote by Q_i(x) the cumulative histogram of the i-th feature of pixel x within its local neighborhood O(x), and by Q̄_i the average cumulative histogram of the i-th feature in the initialization region, whose variance is denoted σ_i²; the local characterization term P(x) can then be defined through the one-dimensional L1 Wasserstein distances W1(·,·) between these cumulative histograms; finally, combining the segmentation probability map L(x) of the lesion region obtained by SegCNN, the gray threshold function F(x), and the local characterization P(x), the data term expression D_x(l_x) is obtained:
D_x(l_x) = max(−R(x), 0) · l_x + max(R(x), 0) · (1 − l_x)

Here R(x) is defined in terms of the probability map L(x), the threshold function F(x), and the local characterization P(x), with γ a positive constant; max refers to taking the maximum; this yields a graph-cut model capable of refining the segmentation of the lesion region, and with this graph-cut model the lesion region obtained by the SegCNN model can be segmented with refinement.
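The data term D_x(l_x) above can be sketched directly (a minimal sketch; R(x) is assumed to be precomputed from L(x), F(x), and P(x)):

```python
def data_term(R, l):
    # D_x(l_x) = max(-R(x), 0) * l_x + max(R(x), 0) * (1 - l_x):
    # a positive R(x) (foreground evidence) penalizes the background label l=0,
    # a negative R(x) penalizes the foreground label l=1
    return max(-R, 0.0) * l + max(R, 0.0) * (1 - l)
```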
CN201810103893.0A 2018-02-01 2018-02-01 The assistant diagnosis system of medical image features is understood based on deep learning method Pending CN108257135A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810103893.0A CN108257135A (en) 2018-02-01 2018-02-01 The assistant diagnosis system of medical image features is understood based on deep learning method


Publications (1)

Publication Number Publication Date
CN108257135A true CN108257135A (en) 2018-07-06

Family

ID=62743606


Country Status (1)

Country Link
CN (1) CN108257135A (en)

Cited By (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109141847A (en) * 2018-07-20 2019-01-04 上海工程技术大学 A kind of aircraft system faults diagnostic method based on MSCNN deep learning
CN109191425A (en) * 2018-07-23 2019-01-11 中国科学院自动化研究所 medical image analysis method
CN109376798A (en) * 2018-11-23 2019-02-22 东南大学 A kind of classification method based on convolutional neural networks titanium dioxide lattice phase
CN109394317A (en) * 2018-12-14 2019-03-01 清华大学 Puncture path device for planning and method
CN109584211A (en) * 2018-10-31 2019-04-05 南开大学 A kind of vision automatic testing method of animal oocyte polar body
CN109685807A (en) * 2018-11-16 2019-04-26 广州市番禺区中心医院(广州市番禺区人民医院、广州市番禺区心血管疾病研究所) Lower-limb deep veins thrombus automatic division method and system based on deep learning
CN109686444A (en) * 2018-12-27 2019-04-26 上海联影智能医疗科技有限公司 System and method for medical image classification
CN109785300A (en) * 2018-12-27 2019-05-21 华南理工大学 A kind of cancer medical image processing method, system, device and storage medium
CN109785306A (en) * 2019-01-09 2019-05-21 上海联影医疗科技有限公司 Organ delineation method, device, computer equipment and storage medium
CN109872334A (en) * 2019-02-26 2019-06-11 电信科学技术研究院有限公司 A kind of image partition method and device
CN109949307A (en) * 2019-02-27 2019-06-28 昆明理工大学 A method of the image segmentation based on principal component analysis
CN109948619A (en) * 2019-03-12 2019-06-28 北京羽医甘蓝信息技术有限公司 The method and apparatus of whole scenery piece dental caries identification based on deep learning
CN109949271A (en) * 2019-02-14 2019-06-28 腾讯科技(深圳)有限公司 A kind of detection method based on medical image, the method and device of model training
CN109961427A (en) * 2019-03-12 2019-07-02 北京羽医甘蓝信息技术有限公司 The method and apparatus of whole scenery piece periapical inflammation identification based on deep learning
CN109961838A (en) * 2019-03-04 2019-07-02 浙江工业大学 A kind of ultrasonic image chronic kidney disease auxiliary screening method based on deep learning
CN109978841A (en) * 2019-03-12 2019-07-05 北京羽医甘蓝信息技术有限公司 The method and apparatus of whole scenery piece impacted tooth identification based on deep learning
CN110136103A (en) * 2019-04-24 2019-08-16 平安科技(深圳)有限公司 Medical image means of interpretation, device, computer equipment and storage medium
CN110348500A (en) * 2019-06-30 2019-10-18 浙江大学 Sleep disturbance aided diagnosis method based on deep learning and infrared thermal imagery
CN110348319A (en) * 2019-06-18 2019-10-18 武汉大学 A kind of face method for anti-counterfeit merged based on face depth information and edge image
CN110363240A (en) * 2019-07-05 2019-10-22 安徽威奥曼机器人有限公司 A kind of medical image classification method and system
CN110363226A (en) * 2019-06-21 2019-10-22 平安科技(深圳)有限公司 Ophthalmology disease classifying identification method, device and medium based on random forest
CN110648311A (en) * 2019-09-03 2020-01-03 南开大学 Acne image focus segmentation and counting network model based on multitask learning
CN110659692A (en) * 2019-09-26 2020-01-07 重庆大学 Pathological image automatic labeling method based on reinforcement learning and deep neural network
CN110738671A (en) * 2019-10-14 2020-01-31 浙江德尚韵兴医疗科技有限公司 method for automatically segmenting breast calcifications based on deep learning
CN110751621A (en) * 2019-09-05 2020-02-04 五邑大学 Breast cancer auxiliary diagnosis method and device based on deep convolutional neural network
CN110751203A (en) * 2019-10-16 2020-02-04 山东浪潮人工智能研究院有限公司 Feature extraction method and system based on deep marker learning
CN110796656A (en) * 2019-11-01 2020-02-14 上海联影智能医疗科技有限公司 Image detection method, image detection device, computer equipment and storage medium
CN110859642A (en) * 2019-11-26 2020-03-06 北京华医共享医疗科技有限公司 Method, device, equipment and storage medium for realizing medical image auxiliary diagnosis based on AlexNet network model
CN110910371A (en) * 2019-11-22 2020-03-24 北京理工大学 Liver tumor automatic classification method and device based on physiological indexes and image fusion
CN110930392A (en) * 2019-11-26 2020-03-27 北京华医共享医疗科技有限公司 Method, device, equipment and storage medium for realizing medical image auxiliary diagnosis based on GoogLeNet network model
CN110992338A (en) * 2019-11-28 2020-04-10 华中科技大学 Primary stove transfer auxiliary diagnosis system
CN111008976A (en) * 2019-12-02 2020-04-14 中南大学 PET image screening method and device
CN111161256A (en) * 2019-12-31 2020-05-15 北京推想科技有限公司 Image segmentation method, image segmentation device, storage medium, and electronic apparatus
CN111179227A (en) * 2019-12-16 2020-05-19 西北工业大学 Mammary gland ultrasonic image quality evaluation method based on auxiliary diagnosis and subjective aesthetics
CN111260641A (en) * 2020-01-21 2020-06-09 珠海威泓医疗科技有限公司 Palm ultrasonic imaging system and method based on artificial intelligence
CN111340209A (en) * 2020-02-18 2020-06-26 北京推想科技有限公司 Network model training method, image segmentation method and focus positioning method
CN111402270A (en) * 2020-03-17 2020-07-10 北京青燕祥云科技有限公司 Repeatable intra-pulmonary grinding glass and method for segmenting hypo-solid nodules
CN111461158A (en) * 2019-05-22 2020-07-28 什维新智医疗科技(上海)有限公司 Method, apparatus, storage medium, and system for identifying features in ultrasound images
CN111724450A (en) * 2019-03-20 2020-09-29 上海科技大学 Medical image reconstruction system, method, terminal and medium based on deep learning
CN112132833A (en) * 2020-08-25 2020-12-25 沈阳工业大学 Skin disease image focus segmentation method based on deep convolutional neural network
CN112349407A (en) * 2020-06-23 2021-02-09 上海贮译智能科技有限公司 Shallow ultrasonic image focus auxiliary diagnosis method based on deep learning
CN112381178A (en) * 2020-12-07 2021-02-19 西安交通大学 Medical image classification method based on multi-loss feature learning
WO2021088747A1 (en) * 2019-11-04 2021-05-14 中国人民解放军总医院 Deep-learning-based method for predicting morphological change of liver tumor after ablation
CN112927799A (en) * 2021-04-13 2021-06-08 中国科学院自动化研究所 Life cycle analysis system fusing multi-example learning and multi-task depth imaging group
CN112996444A (en) * 2018-08-31 2021-06-18 西诺医疗器械股份有限公司 Method and system for determining cancer molecular subtypes based on ultrasound and/or photoacoustic (OA/US) characteristics
CN113379739A (en) * 2021-07-23 2021-09-10 平安科技(深圳)有限公司 Ultrasonic image identification method, device, equipment and storage medium
CN113408595A (en) * 2021-06-09 2021-09-17 北京小白世纪网络科技有限公司 Pathological image processing method and device, electronic equipment and readable storage medium
CN113592797A (en) * 2021-07-21 2021-11-02 山东大学 Mammary nodule risk grade prediction system based on multi-data fusion and deep learning
CN113706434A (en) * 2020-05-09 2021-11-26 北京康兴顺达科贸有限公司 Post-processing method for chest enhanced CT image based on deep learning
CN113724236A (en) * 2021-09-03 2021-11-30 深圳技术大学 OCT image detection method based on attention mechanism and related equipment
CN113780463A (en) * 2021-09-24 2021-12-10 北京航空航天大学 Multi-head normalization long tail classification method based on deep neural network
CN113838559A (en) * 2021-09-15 2021-12-24 王其景 Medical image management system and method
CN113962978A (en) * 2021-10-29 2022-01-21 北京富通东方科技有限公司 Eye movement damage detection and film reading method and system
CN114091507A (en) * 2021-09-02 2022-02-25 北京医准智能科技有限公司 Ultrasonic focus area detection method and device, electronic equipment and storage medium
CN114171187A (en) * 2021-12-06 2022-03-11 浙江大学 Stomach cancer TNM staging prediction system based on multi-modal deep learning
CN114881925A (en) * 2022-03-30 2022-08-09 什维新智医疗科技(上海)有限公司 Good malignant decision maker of node based on elasticity ultrasonic image
WO2022268231A1 (en) * 2021-06-24 2022-12-29 杭州深睿博联科技有限公司 Method and apparatus for predicting whether lesion is benign or malignant based on decoupling mechanism
CN117218419A (en) * 2023-09-12 2023-12-12 河北大学 Evaluation system and evaluation method for pancreatic and biliary tumor parting and grading stage
CN117952829A (en) * 2024-01-18 2024-04-30 中移雄安信息通信科技有限公司 Image reconstruction model training method, image reconstruction method, device and equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120045128A1 (en) * 2010-08-19 2012-02-23 Sony Corporation Method and apparatus for performing an in-painting process on an image
US20130230247A1 (en) * 2012-03-05 2013-09-05 Thomson Licensing Method and apparatus for multi-label segmentation
CN103632154A (en) * 2013-12-16 2014-03-12 福建师范大学 Skin scar diagnosis method based on secondary harmonic image texture analysis
CN105741251A (en) * 2016-03-17 2016-07-06 中南大学 Blood vessel segmentation method for liver CTA sequence image
CN106056595A (en) * 2015-11-30 2016-10-26 浙江德尚韵兴图像科技有限公司 Method for automatically identifying whether thyroid nodule is benign or malignant based on deep convolutional neural network
CN106204600A (en) * 2016-07-07 2016-12-07 广东技术师范学院 Cerebral tumor image partition method based on multisequencing MR image related information
CN107492105A (en) * 2017-08-11 2017-12-19 深圳市旭东数字医学影像技术有限公司 A kind of variation dividing method based on more statistical informations


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HANYU HONG et al.: "An improved segmentation algorithm of color image in complex background based on graph cuts", 《2011 IEEE International Conference on Computer Science and Automation Engineering》 *
吴西燕 et al.: "Object contour tracking based on multiple features" (基于多特征的目标轮廓跟踪), 《计算机应用研究》 (Application Research of Computers) *
钱晓华 et al.: "A local energy segmentation model based on the Wasserstein distance" (基于Wasserstein距离的局部能量分割模型), 《电子学报》 (Acta Electronica Sinica) *

Cited By (85)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109141847B (en) * 2018-07-20 2020-06-05 上海工程技术大学 Aircraft system fault diagnosis method based on MSCNN deep learning
CN109141847A (en) * 2018-07-20 2019-01-04 上海工程技术大学 A kind of aircraft system faults diagnostic method based on MSCNN deep learning
CN109191425A (en) * 2018-07-23 2019-01-11 中国科学院自动化研究所 medical image analysis method
CN109191425B (en) * 2018-07-23 2022-02-11 中国科学院自动化研究所 Medical image analysis method based on multilayer neural network model
CN112996444A (en) * 2018-08-31 2021-06-18 西诺医疗器械股份有限公司 Method and system for determining cancer molecular subtypes based on ultrasound and/or photoacoustic (OA/US) characteristics
CN109584211A (en) * 2018-10-31 2019-04-05 南开大学 A kind of vision automatic testing method of animal oocyte polar body
CN109685807A (en) * 2018-11-16 2019-04-26 广州市番禺区中心医院(广州市番禺区人民医院、广州市番禺区心血管疾病研究所) Lower-limb deep veins thrombus automatic division method and system based on deep learning
CN109376798B (en) * 2018-11-23 2021-09-24 东南大学 Titanium dioxide lattice phase classification method based on convolutional neural network
CN109376798A (en) * 2018-11-23 2019-02-22 东南大学 A kind of classification method based on convolutional neural networks titanium dioxide lattice phase
CN109394317A (en) * 2018-12-14 2019-03-01 清华大学 Puncture path device for planning and method
CN109686444A (en) * 2018-12-27 2019-04-26 上海联影智能医疗科技有限公司 System and method for medical image classification
CN109785300A (en) * 2018-12-27 2019-05-21 华南理工大学 A kind of cancer medical image processing method, system, device and storage medium
US11742073B2 (en) 2018-12-27 2023-08-29 Shanghai United Imaging Intelligence Co., Ltd. Methods and devices for grading a medical image
US11145405B2 (en) 2018-12-27 2021-10-12 Shanghai United Imaging Intelligence Co., Ltd. Methods and devices for grading a medical image
CN109785306A (en) * 2019-01-09 2019-05-21 上海联影医疗科技有限公司 Organ delineation method, device, computer equipment and storage medium
CN109949271B (en) * 2019-02-14 2021-03-16 腾讯科技(深圳)有限公司 Detection method based on medical image, model training method and device
CN109949271A (en) * 2019-02-14 2019-06-28 腾讯科技(深圳)有限公司 A kind of detection method based on medical image, the method and device of model training
CN109872334A (en) * 2019-02-26 2019-06-11 电信科学技术研究院有限公司 A kind of image partition method and device
CN109949307B (en) * 2019-02-27 2024-01-12 昆明理工大学 Image segmentation method based on principal component analysis
CN109949307A (en) * 2019-02-27 2019-06-28 昆明理工大学 A method of the image segmentation based on principal component analysis
CN109961838A (en) * 2019-03-04 2019-07-02 浙江工业大学 A kind of ultrasonic image chronic kidney disease auxiliary screening method based on deep learning
CN109978841A (en) * 2019-03-12 2019-07-05 北京羽医甘蓝信息技术有限公司 The method and apparatus of whole scenery piece impacted tooth identification based on deep learning
CN109948619A (en) * 2019-03-12 2019-06-28 北京羽医甘蓝信息技术有限公司 The method and apparatus of whole scenery piece dental caries identification based on deep learning
CN109961427A (en) * 2019-03-12 2019-07-02 北京羽医甘蓝信息技术有限公司 The method and apparatus of whole scenery piece periapical inflammation identification based on deep learning
CN111724450A (en) * 2019-03-20 2020-09-29 上海科技大学 Medical image reconstruction system, method, terminal and medium based on deep learning
CN110136103A (en) * 2019-04-24 2019-08-16 平安科技(深圳)有限公司 Medical image means of interpretation, device, computer equipment and storage medium
CN110136103B (en) * 2019-04-24 2024-05-28 平安科技(深圳)有限公司 Medical image interpretation method, device, computer equipment and storage medium
CN111461158A (en) * 2019-05-22 2020-07-28 什维新智医疗科技(上海)有限公司 Method, apparatus, storage medium, and system for identifying features in ultrasound images
CN110348319A (en) * 2019-06-18 2019-10-18 武汉大学 Face anti-spoofing method based on fusion of face depth information and edge images
CN110363226B (en) * 2019-06-21 2024-09-27 平安科技(深圳)有限公司 Random forest-based ophthalmology disease classification and identification method, device and medium
CN110363226A (en) * 2019-06-21 2019-10-22 平安科技(深圳)有限公司 Ophthalmology disease classifying identification method, device and medium based on random forest
CN110348500A (en) * 2019-06-30 2019-10-18 浙江大学 Sleep disturbance aided diagnosis method based on deep learning and infrared thermal imagery
CN110363240B (en) * 2019-07-05 2020-09-11 浙江美迪克医疗科技有限公司 Medical image classification method and system
CN110363240A (en) * 2019-07-05 2019-10-22 安徽威奥曼机器人有限公司 A kind of medical image classification method and system
CN110648311A (en) * 2019-09-03 2020-01-03 南开大学 Acne image focus segmentation and counting network model based on multitask learning
CN110648311B (en) * 2019-09-03 2023-04-18 南开大学 Acne image focus segmentation and counting network model based on multitask learning
WO2021042690A1 (en) * 2019-09-05 2021-03-11 五邑大学 Deep convolution neural network-based breast cancer auxiliary diagnosis method and apparatus
CN110751621A (en) * 2019-09-05 2020-02-04 五邑大学 Breast cancer auxiliary diagnosis method and device based on deep convolutional neural network
CN110751621B (en) * 2019-09-05 2023-07-21 五邑大学 Breast cancer auxiliary diagnosis method and device based on deep convolutional neural network
CN110659692A (en) * 2019-09-26 2020-01-07 重庆大学 Pathological image automatic labeling method based on reinforcement learning and deep neural network
CN110738671A (en) * 2019-10-14 2020-01-31 浙江德尚韵兴医疗科技有限公司 Method for automatically segmenting breast calcifications based on deep learning
CN110751203A (en) * 2019-10-16 2020-02-04 山东浪潮人工智能研究院有限公司 Feature extraction method and system based on deep marker learning
CN110796656A (en) * 2019-11-01 2020-02-14 上海联影智能医疗科技有限公司 Image detection method, image detection device, computer equipment and storage medium
US11776120B2 (en) 2019-11-04 2023-10-03 Chinese Pla General Hospital Method for predicting morphological changes of liver tumor after ablation based on deep learning
WO2021088747A1 (en) * 2019-11-04 2021-05-14 中国人民解放军总医院 Deep-learning-based method for predicting morphological change of liver tumor after ablation
CN110910371A (en) * 2019-11-22 2020-03-24 北京理工大学 Liver tumor automatic classification method and device based on physiological indexes and image fusion
CN110859642B (en) * 2019-11-26 2024-01-23 北京华医共享医疗科技有限公司 Method, device, equipment and storage medium for realizing medical image auxiliary diagnosis based on AlexNet network model
CN110859642A (en) * 2019-11-26 2020-03-06 北京华医共享医疗科技有限公司 Method, device, equipment and storage medium for realizing medical image auxiliary diagnosis based on AlexNet network model
CN110930392A (en) * 2019-11-26 2020-03-27 北京华医共享医疗科技有限公司 Method, device, equipment and storage medium for realizing medical image auxiliary diagnosis based on GoogLeNet network model
CN110992338B (en) * 2019-11-28 2022-04-01 华中科技大学 Primary lesion metastasis auxiliary diagnosis system
CN110992338A (en) * 2019-11-28 2020-04-10 华中科技大学 Primary lesion metastasis auxiliary diagnosis system
CN111008976B (en) * 2019-12-02 2023-04-07 中南大学 PET image screening method and device
CN111008976A (en) * 2019-12-02 2020-04-14 中南大学 PET image screening method and device
CN111179227B (en) * 2019-12-16 2022-04-05 西北工业大学 Mammary gland ultrasonic image quality evaluation method based on auxiliary diagnosis and subjective aesthetics
CN111179227A (en) * 2019-12-16 2020-05-19 西北工业大学 Mammary gland ultrasonic image quality evaluation method based on auxiliary diagnosis and subjective aesthetics
CN111161256A (en) * 2019-12-31 2020-05-15 北京推想科技有限公司 Image segmentation method, image segmentation device, storage medium, and electronic apparatus
CN111260641A (en) * 2020-01-21 2020-06-09 珠海威泓医疗科技有限公司 Palm ultrasonic imaging system and method based on artificial intelligence
CN111340209A (en) * 2020-02-18 2020-06-26 北京推想科技有限公司 Network model training method, image segmentation method and focus positioning method
CN111402270A (en) * 2020-03-17 2020-07-10 北京青燕祥云科技有限公司 Repeatable method for segmenting pulmonary ground-glass and sub-solid nodules
CN113706434A (en) * 2020-05-09 2021-11-26 北京康兴顺达科贸有限公司 Post-processing method for chest enhanced CT image based on deep learning
CN113706434B (en) * 2020-05-09 2023-11-07 北京康兴顺达科贸有限公司 Post-processing method for chest enhancement CT image based on deep learning
CN112349407A (en) * 2020-06-23 2021-02-09 上海贮译智能科技有限公司 Superficial ultrasound image lesion auxiliary diagnosis method based on deep learning
CN112132833B (en) * 2020-08-25 2024-03-26 沈阳工业大学 Dermatological image focus segmentation method based on deep convolutional neural network
CN112132833A (en) * 2020-08-25 2020-12-25 沈阳工业大学 Skin disease image focus segmentation method based on deep convolutional neural network
CN112381178A (en) * 2020-12-07 2021-02-19 西安交通大学 Medical image classification method based on multi-loss feature learning
CN112927799A (en) * 2021-04-13 2021-06-08 中国科学院自动化研究所 Survival analysis system fusing multi-instance learning and multi-task deep radiomics
CN112927799B (en) * 2021-04-13 2023-06-27 中国科学院自动化研究所 Survival analysis system fusing multi-instance learning and multi-task deep radiomics
CN113408595A (en) * 2021-06-09 2021-09-17 北京小白世纪网络科技有限公司 Pathological image processing method and device, electronic equipment and readable storage medium
WO2022268231A1 (en) * 2021-06-24 2022-12-29 杭州深睿博联科技有限公司 Method and apparatus for predicting whether lesion is benign or malignant based on decoupling mechanism
CN113592797A (en) * 2021-07-21 2021-11-02 山东大学 Mammary nodule risk grade prediction system based on multi-data fusion and deep learning
CN113379739B (en) * 2021-07-23 2022-03-25 平安科技(深圳)有限公司 Ultrasonic image identification method, device, equipment and storage medium
CN113379739A (en) * 2021-07-23 2021-09-10 平安科技(深圳)有限公司 Ultrasonic image identification method, device, equipment and storage medium
CN114091507A (en) * 2021-09-02 2022-02-25 北京医准智能科技有限公司 Ultrasound lesion area detection method and device, electronic equipment and storage medium
CN113724236B (en) * 2021-09-03 2023-06-09 深圳技术大学 OCT image detection method and related equipment based on attention mechanism
CN113724236A (en) * 2021-09-03 2021-11-30 深圳技术大学 OCT image detection method based on attention mechanism and related equipment
CN113838559A (en) * 2021-09-15 2021-12-24 王其景 Medical image management system and method
CN113780463B (en) * 2021-09-24 2023-09-05 北京航空航天大学 Multi-head normalization long-tail classification method based on deep neural network
CN113780463A (en) * 2021-09-24 2021-12-10 北京航空航天大学 Multi-head normalization long tail classification method based on deep neural network
CN113962978A (en) * 2021-10-29 2022-01-21 北京富通东方科技有限公司 Eye movement damage detection and film reading method and system
CN114171187A (en) * 2021-12-06 2022-03-11 浙江大学 Stomach cancer TNM staging prediction system based on multi-modal deep learning
CN114881925A (en) * 2022-03-30 2022-08-09 什维新智医疗科技(上海)有限公司 Benign and malignant nodule determination device based on elastic ultrasound images
CN114881925B (en) * 2022-03-30 2024-07-23 什维新智医疗科技(上海)有限公司 Elastic ultrasonic image-based benign and malignant nodule judgment device
CN117218419A (en) * 2023-09-12 2023-12-12 河北大学 Evaluation system and method for typing, grading and staging of pancreatic and biliary tumors
CN117218419B (en) * 2023-09-12 2024-04-12 河北大学 Evaluation system and method for typing, grading and staging of pancreatic and biliary tumors
CN117952829A (en) * 2024-01-18 2024-04-30 中移雄安信息通信科技有限公司 Image reconstruction model training method, image reconstruction method, device and equipment

Similar Documents

Publication Publication Date Title
CN108257135A (en) Auxiliary diagnosis system for interpreting medical image features based on deep learning
CN106056595B (en) Auxiliary diagnosis system for automatic identification of benign and malignant thyroid nodules based on deep convolutional neural networks
Tian et al. Multi-path convolutional neural network in fundus segmentation of blood vessels
Jafarzadeh Ghoushchi et al. An extended approach to the diagnosis of tumour location in breast cancer using deep learning
CN112101451B (en) Breast cancer tissue pathological type classification method based on generation of antagonism network screening image block
CN110517238B (en) AI three-dimensional reconstruction and human-computer interaction visualization network system for CT medical image
CN110766051A (en) Lung nodule morphological classification method based on neural network
CN110491480A (en) A kind of medical image processing method, device, electromedical equipment and storage medium
CN114926477B (en) Brain tumor multi-mode MRI image segmentation method based on deep learning
CN106372390A (en) Deep convolutional neural network-based lung cancer preventing self-service health cloud service system
CN110047082A (en) Automatic segmentation method and system for pancreatic neuroendocrine tumors based on deep learning
CN106096654A (en) Automatic cell atypia grading method based on deep learning and a combination strategy
CN109447998A (en) Automatic segmentation method based on the PCANet deep learning model
CN111767952B (en) Interpretable lung nodule benign and malignant classification method
CN105913086A (en) Computer-aided breast diagnosis method based on adaptive feature weight selection
CN105760874A (en) CT image processing system and method for pneumoconiosis
CN112263217B (en) Improved convolutional neural network-based non-melanoma skin cancer pathological image lesion area detection method
Patel Predicting invasive ductal carcinoma using a reinforcement sample learning strategy using deep learning
CN109512464A (en) Disease screening and diagnosis system
CN112085113B (en) Severe tumor image recognition system and method
CN114693933A (en) Medical image segmentation device based on generation of confrontation network and multi-scale feature fusion
CN109902682A (en) Mammary X-ray image detection method based on residual convolutional neural networks
CN114332572B (en) Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network
CN111583385B (en) Personalized deformation method and system for deformable digital human anatomy model
Yonekura et al. Improving the generalization of disease stage classification with deep CNN for glioma histopathological images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 801/802, 8-storey East Science and Technology Building, Building 6, East Software Park, No. 90 Wensan Road, Hangzhou City, Zhejiang Province

Applicant after: Zhejiang Deshang Yunxing Medical Technology Co.,Ltd.

Address before: 310012 Room 709, 710, 7-storey East Building, No. 90 Wensan Road, Xihu District, Hangzhou City, Zhejiang Province

Applicant before: ZHEJIANG DE IMAGE SOLUTIONS CO.,LTD.

RJ01 Rejection of invention patent application after publication

Application publication date: 20180706
